[PoC] Federated Authn/z with OAUTHBEARER

Started by Jacob Champion over 4 years ago · 429 messages
#1 Jacob Champion
pchampion@vmware.com
7 attachment(s)

Hi all,

We've been working on ways to expand the list of third-party auth
methods that Postgres provides. Some example use cases might be "I want
to let anyone with a Google account read this table" or "let anyone who
belongs to this GitHub organization connect as a superuser".

Attached is a proof of concept that implements pieces of OAuth 2.0
federated authorization, via the OAUTHBEARER SASL mechanism from RFC
7628 [1]. Currently, only Linux is supported due to some ugly hacks in
the backend.

The architecture can support the following use cases, as long as your
OAuth issuer of choice implements the necessary specs, and you know how
to write a validator for your issuer's bearer tokens:

- Authentication only, where an external validator uses the bearer
token to determine the end user's identity, and Postgres decides
whether that user ID is authorized to connect via the standard pg_ident
user mapping.

- Authorization only, where the validator uses the bearer token to
determine the allowed roles for the end user, and then checks to make
sure that the connection's role is one of those. This bypasses pg_ident
and allows pseudonymous connections, where Postgres doesn't care who
you are as long as the token proves you're allowed to assume the role
you want.

- A combination, where the validator provides both an authn_id (for
later audits of database access) and an authorization decision based on
the bearer token and role provided.
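The three modes above can be sketched in a few lines of Python. Note that the claims layout here ("sub", "roles") is a hypothetical decoded token for illustration only; the patch doesn't prescribe any token contents:

```python
# Hypothetical decoded bearer-token claims; real tokens are
# issuer-specific and may not be introspectable at all.
claims = {"sub": "alice@example.org", "roles": ["reporting", "app_user"]}

def authn_only(claims):
    # Mode 1: report the end user's identity; Postgres then decides
    # authorization via the standard pg_ident user mapping.
    return claims["sub"]

def authz_only(claims, requested_role):
    # Mode 2: pseudonymous; accept iff the token grants the requested
    # role, without caring who the user is.
    return requested_role in claims["roles"]

def combined(claims, requested_role):
    # Mode 3: both an authn_id (for audits) and an authorization
    # decision based on the token and the requested role.
    return (claims["sub"], requested_role in claims["roles"])
```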

It looks kinda like this during use:

$ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

= Quickstart =

For anyone who likes building and seeing green tests ASAP.

Prerequisite software:
- iddawc v0.9.9 [2], library and dev headers, for client support
- Python 3, for the test suite only

(Some newer distributions have dev packages for iddawc, but mine did
not.)

Configure using --with-oauth. (If you've installed iddawc into a
non-standard location, be sure to use --with-includes and
--with-libraries, and make sure either rpath or LD_LIBRARY_PATH will
get you what you need.) Install as usual.

To run the test suite, make sure the contrib/authn_id extension is
installed, then init and start your dev cluster. No other configuration
is required; the test will do it for you. Switch to the src/test/python
directory, point your PG* envvars to a superuser connection on the
cluster (so that a "bare" psql will connect automatically), and run
`make installcheck`.

= Production Setup =

(but don't use this in production, please)

Actually setting up a "real" system requires knowing the specifics of
your third-party issuer of choice. Your issuer MUST implement OpenID
Discovery and the OAuth Device Authorization flow! Seriously, check
this before spending a lot of time writing a validator against an
issuer that can't actually talk to libpq.

The broad strokes are as follows:

1. Register a new public client with your issuer to get an OAuth client
ID for libpq. You'll use this as the oauth_client_id in the connection
string. (If your issuer doesn't support public clients and gives you a
client secret, you can use the oauth_client_secret connection parameter
to provide that too.)

The client you register must be able to use a device authorization
flow; some issuers require additional setup for that.

2. Set up your HBA with the 'oauth' auth method, and set the 'issuer'
and 'scope' options. 'issuer' is the base URL identifying your third-
party issuer (for example, https://accounts.google.com), and 'scope' is
the set of OAuth scopes that the client and server will need to
authenticate and/or authorize the user (e.g. "openid email").

So a sample HBA line might look like

host all all samehost oauth issuer="https://accounts.google.com" scope="openid email"

3. In postgresql.conf, set up an oauth_validator_command that's capable
of verifying bearer tokens and implements the validator protocol. This
is the hardest part. See below.

= Design =

On the client side, I've implemented the Device Authorization flow (RFC
8628 [3]). What this means in practice is that libpq reaches out to a
third-party issuer (e.g. Google, Azure, etc.), identifies itself with a
client ID, and requests permission to act on behalf of the end user.
The issuer responds with a login URL and a one-time code, which libpq
presents to the user using the notice hook. The end user then navigates
to that URL, presents their code, authenticates to the issuer, and
grants permission for libpq to retrieve a bearer token. libpq grabs a
token and sends it to the server for verification.
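The client side of that exchange can be sketched with the response fields defined by RFC 8628, section 3.2. The endpoint response values below are invented for illustration, and the HTTP round trips are elided:

```python
# Hypothetical JSON body from an issuer's device authorization
# endpoint; field names are from RFC 8628 s3.2, values are invented.
device_auth_response = {
    "device_code": "e1f1c90a-dummy-device-code",
    "user_code": "FPQ2-M4BG",
    "verification_uri": "https://oauth.example.org/login",
    "expires_in": 1800,
    "interval": 5,   # minimum seconds between token-endpoint polls
}

def user_prompt(resp):
    # The message libpq surfaces to the end user via the notice hook.
    return "Visit %s and enter the code: %s" % (
        resp["verification_uri"], resp["user_code"])

def next_poll_interval(resp, slow_down=False):
    # Per RFC 8628, a "slow_down" error from the token endpoint means
    # the client must add 5 seconds to its polling interval.
    interval = resp.get("interval", 5)
    return interval + 5 if slow_down else interval

print(user_prompt(device_auth_response))
```

The client then polls the issuer's token endpoint at that interval until the user completes the login, at which point the response carries the bearer token.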

(The bearer token, in this setup, is essentially a plaintext password,
and you must secure it like you would a plaintext password. The token
has an expiration date and can be explicitly revoked, which makes it
slightly better than a password, but this is still a step backwards
from something like SCRAM with channel binding. There are ways to bind
a bearer token to a client certificate [4], which would mitigate the
risk of token theft -- but your issuer has to support that, and I
haven't found much support in the wild.)

The server side is where things get more difficult for the DBA. The
OAUTHBEARER spec has this to say about the server-side implementation:

The server validates the response according to the specification for
the OAuth Access Token Types used.

And here's what the Bearer Token specification [5] says:

This document does not specify the encoding or the contents of the
token; hence, detailed recommendations about the means of
guaranteeing token integrity protection are outside the scope of
this document.

It's the Wild West. Every issuer does their own thing in their own
special way. Some don't really give you a way to introspect information
about a bearer token at all, because they assume that the issuer of the
token and the consumer of the token are essentially the same service.
Some major players provide their own custom libraries, implemented in
your-language-of-choice, to deal with their particular brand of magic.

So I punted and added the oauth_validator_command GUC. A token
validator command reads the bearer token from a file descriptor that's
passed to it, then does whatever magic is necessary to validate that
token and find out who owns it. Optionally, it can look at the role
that's being connected and make sure that the token authorizes the user
to actually use that role. Then it says yea or nay to Postgres, and
optionally tells the server who the user is so that their ID can be
logged and mapped through pg_ident.

(See the commit message in 0005 for a full description of the protocol.
The test suite also has two toy implementations that illustrate the
protocol, but they provide zero security.)
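For a rough feel of the shape such a command takes (the exact wire protocol is in 0005's commit message; the token table and identity below are invented, and like the toys in the test suite this provides zero security):

```python
import sys

# Invented token-to-identity table. A real validator would instead
# introspect the token with the issuer, or verify a JWT signature.
TRUSTED_TOKENS = {"dummy-bearer-token": "alice@example.org"}

def validate(token, role):
    """Return the token owner's authn_id, or None to reject.

    A stricter validator could also check that `role` is among the
    roles the token authorizes before accepting.
    """
    return TRUSTED_TOKENS.get(token)

def run(stream, role):
    # In the real protocol the token arrives over a file descriptor
    # passed by the server; a readable stream stands in for that here.
    token = stream.readline().strip()
    authn_id = validate(token, role)
    if authn_id is None:
        return 1          # nay: reject the connection
    print(authn_id)       # yea: identity for logging / pg_ident
    return 0
```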

This is easily the worst part of the patch, not only because my
implementation is a bad hack on OpenPipeStream(), but because it
balances the security of the entire system on the shoulders of a DBA
who does not have time to read umpteen OAuth specifications cover to
cover. More thought and coding effort is needed here, but I didn't want
to gold-plate a bad design. I'm not sure what alternatives there are
within the rules laid out by OAUTHBEARER. And the system is _extremely_
flexible, in the way that only code that's maintained by somebody else
can be.

= Patchset Roadmap =

The seven patches can be grouped into three:

1. Prep

0001 decouples the SASL code from the SCRAM implementation.
0002 makes it possible to use common/jsonapi from the frontend.
0003 lets the json_errdetail() result be freed, to avoid leaks.

2. OAUTHBEARER Implementation

0004 implements the client with libiddawc.
0005 implements server HBA support and oauth_validator_command.

3. Testing

0006 adds a simple test extension to retrieve the authn_id.
0007 adds the Python test suite I've been developing against.

The first three patches are, hopefully, generally useful outside of
this implementation, and I'll plan to register them in the next
commitfest. The middle two patches are the "interesting" pieces, and
I've split them into client and server for ease of understanding,
though neither is particularly useful without the other.

The last two patches grew out of a test suite that I originally built
to be able to exercise NSS corner cases at the protocol/byte level. It
was incredibly helpful during implementation of this new SASL
mechanism, since I could write the client and server independently of
each other and get high coverage of broken/malicious implementations.
It's based on pytest and Construct, and the Python 3 requirement might
turn some away, but I wanted to include it in case anyone else wanted
to hack on the code. src/test/python/README explains more.

= Thoughts/Reflections =

...in no particular order.

I picked OAuth 2.0 as my first experiment in federated auth mostly
because I was already familiar with pieces of it. I think SAML (via the
SAML20 mechanism, RFC 6595) would be a good companion to this proof of
concept, if there is general interest in federated deployments.

I don't really like the OAUTHBEARER spec, but I'm not sure there's a
better alternative. Everything is left as an exercise for the reader.
It's not particularly extensible. Standard OAuth is built for
authorization, not authentication, and from reading the RFC's history,
it feels like it was a hack to just get something working. New
standards like OpenID Connect have begun to fill in the gaps, but the
SASL mechanisms have not kept up. (The OPENID20 mechanism is, to my
understanding, unrelated/obsolete.) And support for helpful OIDC
features seems to be spotty in the real world.

The iddawc dependency for client-side OAuth was extremely helpful to
develop this proof of concept quickly, but I don't think it would be an
appropriate component to build a real feature on. It's extremely
heavyweight -- it incorporates a huge stack of dependencies, including
a logging framework and a web server, to implement features we would
probably never use -- and it's fairly difficult to debug in practice.
If a device authorization flow were the only thing that libpq needed to
support natively, I think we should just depend on a widely used HTTP
client, like libcurl or neon, and implement the minimum spec directly
against the existing test suite.

There are a huge number of other authorization flows besides Device
Authorization; most would involve libpq automatically opening a web
browser for you. I felt like that wasn't an appropriate thing for a
library to do by default, especially when one of the most important
clients is a command-line application. Perhaps there could be a hook
for applications to be able to override the builtin flow and substitute
their own.

Since bearer tokens are essentially plaintext passwords, the relevant
specs require the use of transport-level protection, and I think it'd
be wise for the client to require TLS to be in place before performing
the initial handshake or sending a token.

Not every OAuth issuer is also an OpenID Discovery provider, so it's
frustrating that OAUTHBEARER (which is purportedly an OAuth 2.0
feature) requires OIDD for real-world implementations. Perhaps we could
hack around this with a data: URI or something.

The client currently performs the OAuth login dance every single time a
connection is made, but a proper OAuth client would cache its tokens to
reuse later, and keep an eye on their expiration times. This would make
daily use a little more like that of Kerberos, but we would have to
design a way to create and secure a token cache on disk.

If you've read this far, thank you for your interest, and I hope you
enjoy playing with it!

--Jacob

[1]: https://datatracker.ietf.org/doc/html/rfc7628
[2]: https://github.com/babelouest/iddawc
[3]: https://datatracker.ietf.org/doc/html/rfc8628
[4]: https://datatracker.ietf.org/doc/html/rfc8705
[5]: https://datatracker.ietf.org/doc/html/rfc6750#section-5.2

Attachments:

0001-auth-generalize-SASL-mechanisms.patch (text/x-patch)
From a6a65b66cc3dc5da7219378dbadb090ff10fd42b Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:25:48 -0700
Subject: [PATCH 1/7] auth: generalize SASL mechanisms

Split the SASL logic out from the SCRAM implementation, so that it can
be reused by other mechanisms.  New implementations will implement both
a pg_sasl_mech and a pg_be_sasl_mech.
---
 src/backend/libpq/auth-scram.c       | 34 ++++++++++++++---------
 src/backend/libpq/auth.c             | 34 ++++++++++++++++-------
 src/include/libpq/sasl.h             | 34 +++++++++++++++++++++++
 src/include/libpq/scram.h            | 13 +++------
 src/interfaces/libpq/fe-auth-scram.c | 40 +++++++++++++++++++---------
 src/interfaces/libpq/fe-auth.c       | 16 ++++++++---
 src/interfaces/libpq/fe-auth.h       | 11 ++------
 src/interfaces/libpq/fe-connect.c    |  6 +----
 src/interfaces/libpq/libpq-int.h     | 14 ++++++++++
 9 files changed, 139 insertions(+), 63 deletions(-)
 create mode 100644 src/include/libpq/sasl.h

diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index f9e1026a12..db3ca75a60 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -101,11 +101,25 @@
 #include "common/sha2.h"
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
+#include "libpq/sasl.h"
 #include "libpq/scram.h"
 #include "miscadmin.h"
 #include "utils/builtins.h"
 #include "utils/timestamp.h"
 
+static void  scram_get_mechanisms(Port *port, StringInfo buf);
+static void *scram_init(Port *port, const char *selected_mech,
+						const char *shadow_pass);
+static int   scram_exchange(void *opaq, const char *input, int inputlen,
+							char **output, int *outputlen, char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_scram_mech = {
+	scram_get_mechanisms,
+	scram_init,
+	scram_exchange,
+};
+
 /*
  * Status data for a SCRAM authentication exchange.  This should be kept
  * internal to this file.
@@ -170,16 +184,14 @@ static char *sanitize_str(const char *s);
 static char *scram_mock_salt(const char *username);
 
 /*
- * pg_be_scram_get_mechanisms
- *
  * Get a list of SASL mechanisms that this module supports.
  *
  * For the convenience of building the FE/BE packet that lists the
  * mechanisms, the names are appended to the given StringInfo buffer,
  * separated by '\0' bytes.
  */
-void
-pg_be_scram_get_mechanisms(Port *port, StringInfo buf)
+static void
+scram_get_mechanisms(Port *port, StringInfo buf)
 {
 	/*
 	 * Advertise the mechanisms in decreasing order of importance.  So the
@@ -199,8 +211,6 @@ pg_be_scram_get_mechanisms(Port *port, StringInfo buf)
 }
 
 /*
- * pg_be_scram_init
- *
  * Initialize a new SCRAM authentication exchange status tracker.  This
  * needs to be called before doing any exchange.  It will be filled later
  * after the beginning of the exchange with authentication information.
@@ -215,10 +225,8 @@ pg_be_scram_get_mechanisms(Port *port, StringInfo buf)
  * an authentication exchange, but it will fail, as if an incorrect password
  * was given.
  */
-void *
-pg_be_scram_init(Port *port,
-				 const char *selected_mech,
-				 const char *shadow_pass)
+static void *
+scram_init(Port *port, const char *selected_mech, const char *shadow_pass)
 {
 	scram_state *state;
 	bool		got_secret;
@@ -325,9 +333,9 @@ pg_be_scram_init(Port *port,
  * string at *logdetail that will be sent to the postmaster log (but not
  * the client).
  */
-int
-pg_be_scram_exchange(void *opaq, const char *input, int inputlen,
-					 char **output, int *outputlen, char **logdetail)
+static int
+scram_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, char **logdetail)
 {
 	scram_state *state = (scram_state *) opaq;
 	int			result;
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 68372fcea8..e20740a7c5 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -26,11 +26,11 @@
 #include "commands/user.h"
 #include "common/ip.h"
 #include "common/md5.h"
-#include "common/scram-common.h"
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
+#include "libpq/sasl.h"
 #include "libpq/scram.h"
 #include "miscadmin.h"
 #include "port/pg_bswap.h"
@@ -51,6 +51,13 @@ static void auth_failed(Port *port, int status, char *logdetail);
 static char *recv_password_packet(Port *port);
 static void set_authn_id(Port *port, const char *id);
 
+/*----------------------------------------------------------------
+ * SASL common authentication
+ *----------------------------------------------------------------
+ */
+static int	SASL_exchange(const pg_be_sasl_mech *mech, Port *port,
+						  char *shadow_pass, char **logdetail);
+
 
 /*----------------------------------------------------------------
  * Password-based authentication methods (password, md5, and scram-sha-256)
@@ -912,12 +919,13 @@ CheckMD5Auth(Port *port, char *shadow_pass, char **logdetail)
 }
 
 static int
-CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
+SASL_exchange(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
+			  char **logdetail)
 {
 	StringInfoData sasl_mechs;
 	int			mtype;
 	StringInfoData buf;
-	void	   *scram_opaq = NULL;
+	void	   *opaq = NULL;
 	char	   *output = NULL;
 	int			outputlen = 0;
 	const char *input;
@@ -931,7 +939,7 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
 	 */
 	initStringInfo(&sasl_mechs);
 
-	pg_be_scram_get_mechanisms(port, &sasl_mechs);
+	mech->get_mechanisms(port, &sasl_mechs);
 	/* Put another '\0' to mark that list is finished. */
 	appendStringInfoChar(&sasl_mechs, '\0');
 
@@ -998,7 +1006,7 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
 			 * This is because we don't want to reveal to an attacker what
 			 * usernames are valid, nor which users have a valid password.
 			 */
-			scram_opaq = pg_be_scram_init(port, selected_mech, shadow_pass);
+			opaq = mech->init(port, selected_mech, shadow_pass);
 
 			inputlen = pq_getmsgint(&buf, 4);
 			if (inputlen == -1)
@@ -1022,12 +1030,11 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
 		Assert(input == NULL || input[inputlen] == '\0');
 
 		/*
-		 * we pass 'logdetail' as NULL when doing a mock authentication,
-		 * because we should already have a better error message in that case
+		 * Hand the incoming message to the mechanism implementation.
 		 */
-		result = pg_be_scram_exchange(scram_opaq, input, inputlen,
-									  &output, &outputlen,
-									  logdetail);
+		result = mech->exchange(opaq, input, inputlen,
+								&output, &outputlen,
+								logdetail);
 
 		/* input buffer no longer used */
 		pfree(buf.data);
@@ -1039,6 +1046,7 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
 			 */
 			elog(DEBUG4, "sending SASL challenge of length %u", outputlen);
 
+			/* TODO: SASL_EXCHANGE_FAILURE with output is forbidden in SASL */
 			if (result == SASL_EXCHANGE_SUCCESS)
 				sendAuthRequest(port, AUTH_REQ_SASL_FIN, output, outputlen);
 			else
@@ -1057,6 +1065,12 @@ CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
 	return STATUS_OK;
 }
 
+static int
+CheckSCRAMAuth(Port *port, char *shadow_pass, char **logdetail)
+{
+	return SASL_exchange(&pg_be_scram_mech, port, shadow_pass, logdetail);
+}
+
 
 /*----------------------------------------------------------------
  * GSSAPI authentication system
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
new file mode 100644
index 0000000000..8c9c9983d4
--- /dev/null
+++ b/src/include/libpq/sasl.h
@@ -0,0 +1,34 @@
+/*-------------------------------------------------------------------------
+ *
+ * sasl.h
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/sasl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_SASL_H
+#define PG_SASL_H
+
+#include "libpq/libpq-be.h"
+
+/* Status codes for message exchange */
+#define SASL_EXCHANGE_CONTINUE		0
+#define SASL_EXCHANGE_SUCCESS		1
+#define SASL_EXCHANGE_FAILURE		2
+
+/* Backend mechanism API */
+typedef void  (*pg_be_sasl_mechanism_func)(Port *, StringInfo);
+typedef void *(*pg_be_sasl_init_func)(Port *, const char *, const char *);
+typedef int   (*pg_be_sasl_exchange_func)(void *, const char *, int, char **, int *, char **);
+
+typedef struct
+{
+	pg_be_sasl_mechanism_func	get_mechanisms;
+	pg_be_sasl_init_func		init;
+	pg_be_sasl_exchange_func	exchange;
+} pg_be_sasl_mech;
+
+#endif /* PG_SASL_H */
diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h
index 2c879150da..9e4540bde3 100644
--- a/src/include/libpq/scram.h
+++ b/src/include/libpq/scram.h
@@ -15,17 +15,10 @@
 
 #include "lib/stringinfo.h"
 #include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
 
-/* Status codes for message exchange */
-#define SASL_EXCHANGE_CONTINUE		0
-#define SASL_EXCHANGE_SUCCESS		1
-#define SASL_EXCHANGE_FAILURE		2
-
-/* Routines dedicated to authentication */
-extern void pg_be_scram_get_mechanisms(Port *port, StringInfo buf);
-extern void *pg_be_scram_init(Port *port, const char *selected_mech, const char *shadow_pass);
-extern int	pg_be_scram_exchange(void *opaq, const char *input, int inputlen,
-								 char **output, int *outputlen, char **logdetail);
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_scram_mech;
 
 /* Routines to handle and check SCRAM-SHA-256 secret */
 extern char *pg_be_scram_build_secret(const char *password);
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 5881386e37..04d5703d89 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -21,6 +21,22 @@
 #include "fe-auth.h"
 
 
+/* The exported SCRAM callback mechanism. */
+static void *scram_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static void scram_exchange(void *opaq, char *input, int inputlen,
+						   char **output, int *outputlen,
+						   bool *done, bool *success);
+static bool scram_channel_bound(void *opaq);
+static void scram_free(void *opaq);
+
+const pg_sasl_mech pg_scram_mech = {
+	scram_init,
+	scram_exchange,
+	scram_channel_bound,
+	scram_free,
+};
+
 /*
  * Status of exchange messages used for SCRAM authentication via the
  * SASL protocol.
@@ -72,10 +88,10 @@ static bool calculate_client_proof(fe_scram_state *state,
 /*
  * Initialize SCRAM exchange status.
  */
-void *
-pg_fe_scram_init(PGconn *conn,
-				 const char *password,
-				 const char *sasl_mechanism)
+static void *
+scram_init(PGconn *conn,
+		   const char *password,
+		   const char *sasl_mechanism)
 {
 	fe_scram_state *state;
 	char	   *prep_password;
@@ -128,8 +144,8 @@ pg_fe_scram_init(PGconn *conn,
  * Note that the caller must also ensure that the exchange was actually
  * successful.
  */
-bool
-pg_fe_scram_channel_bound(void *opaq)
+static bool
+scram_channel_bound(void *opaq)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 
@@ -152,8 +168,8 @@ pg_fe_scram_channel_bound(void *opaq)
 /*
  * Free SCRAM exchange status
  */
-void
-pg_fe_scram_free(void *opaq)
+static void
+scram_free(void *opaq)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 
@@ -188,10 +204,10 @@ pg_fe_scram_free(void *opaq)
 /*
  * Exchange a SCRAM message with backend.
  */
-void
-pg_fe_scram_exchange(void *opaq, char *input, int inputlen,
-					 char **output, int *outputlen,
-					 bool *done, bool *success)
+static void
+scram_exchange(void *opaq, char *input, int inputlen,
+			   char **output, int *outputlen,
+			   bool *done, bool *success)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 	PGconn	   *conn = state->conn;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index e8062647e6..d5cbac108e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -482,7 +482,10 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				 * channel_binding is not disabled.
 				 */
 				if (conn->channel_binding[0] != 'd')	/* disable */
+				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
+					conn->sasl = &pg_scram_mech;
+				}
 #else
 				/*
 				 * The client does not support channel binding.  If it is
@@ -516,7 +519,10 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		}
 		else if (strcmp(mechanism_buf.data, SCRAM_SHA_256_NAME) == 0 &&
 				 !selected_mechanism)
+		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
+			conn->sasl = &pg_scram_mech;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -555,20 +561,22 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
+	Assert(conn->sasl);
+
 	/*
 	 * Initialize the SASL state information with all the information gathered
 	 * during the initial exchange.
 	 *
 	 * Note: Only tls-unique is supported for the moment.
 	 */
-	conn->sasl_state = pg_fe_scram_init(conn,
+	conn->sasl_state = conn->sasl->init(conn,
 										password,
 										selected_mechanism);
 	if (!conn->sasl_state)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	pg_fe_scram_exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state,
 						 NULL, -1,
 						 &initialresponse, &initialresponselen,
 						 &done, &success);
@@ -649,7 +657,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	pg_fe_scram_exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state,
 						 challenge, payloadlen,
 						 &output, &outputlen,
 						 &done, &success);
@@ -830,7 +838,7 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
 			case AUTH_REQ_SASL_FIN:
 				break;
 			case AUTH_REQ_OK:
-				if (!pg_fe_scram_channel_bound(conn->sasl_state))
+				if (!conn->sasl || !conn->sasl->channel_bound(conn->sasl_state))
 				{
 					appendPQExpBufferStr(&conn->errorMessage,
 										 libpq_gettext("channel binding required, but server authenticated client without channel binding\n"));
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 7877dcbd09..1e4fcbff62 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -22,15 +22,8 @@
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
-/* Prototypes for functions in fe-auth-scram.c */
-extern void *pg_fe_scram_init(PGconn *conn,
-							  const char *password,
-							  const char *sasl_mechanism);
-extern bool pg_fe_scram_channel_bound(void *opaq);
-extern void pg_fe_scram_free(void *opaq);
-extern void pg_fe_scram_exchange(void *opaq, char *input, int inputlen,
-								 char **output, int *outputlen,
-								 bool *done, bool *success);
+/* Mechanisms in fe-auth-scram.c */
+extern const pg_sasl_mech pg_scram_mech;
 extern char *pg_fe_scram_build_secret(const char *password);
 
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 80703698b8..10d007582c 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -517,11 +517,7 @@ pqDropConnection(PGconn *conn, bool flushInput)
 #endif
 	if (conn->sasl_state)
 	{
-		/*
-		 * XXX: if support for more authentication mechanisms is added, this
-		 * needs to call the right 'free' function.
-		 */
-		pg_fe_scram_free(conn->sasl_state);
+		conn->sasl->free(conn->sasl_state);
 		conn->sasl_state = NULL;
 	}
 }
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e81dc37906..25eaa231c5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -339,6 +339,19 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef void *(*pg_sasl_init_func)(PGconn *, const char *, const char *);
+typedef void  (*pg_sasl_exchange_func)(void *, char *, int, char **, int *, bool *, bool *);
+typedef bool  (*pg_sasl_channel_bound_func)(void *);
+typedef void  (*pg_sasl_free_func)(void *);
+
+typedef struct
+{
+	pg_sasl_init_func			init;
+	pg_sasl_exchange_func		exchange;
+	pg_sasl_channel_bound_func	channel_bound;
+	pg_sasl_free_func			free;
+} pg_sasl_mech;
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -500,6 +513,7 @@ struct pg_conn
 	PGresult   *next_result;	/* next result (used in single-row mode) */
 
 	/* Assorted state for SASL, SSL, GSS, etc */
+	const pg_sasl_mech *sasl;
 	void	   *sasl_state;
 
 	/* SSL structures */
-- 
2.25.1

0002-src-common-remove-logging-from-jsonapi-for-shlib.patch (text/x-patch)
From 0541598e4f0bad1b9ff41a4640ec69491b393d54 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 11:15:15 -0700
Subject: [PATCH 2/7] src/common: remove logging from jsonapi for shlib

The can't-happen code in jsonapi was pulling in logging code, which for
libpq is not included.
---
 src/common/Makefile  |  4 ++++
 src/common/jsonapi.c | 11 ++++++++---
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/src/common/Makefile b/src/common/Makefile
index 38a8599337..6f1039bc78 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -28,6 +28,10 @@ subdir = src/common
 top_builddir = ../..
 include $(top_builddir)/src/Makefile.global
 
+# For use in shared libraries, jsonapi needs to not link in any logging
+# functions.
+override CFLAGS_SL += -DJSONAPI_NO_LOG
+
 # don't include subdirectory-path-dependent -I and -L switches
 STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include -I$(top_builddir)/src/include,$(CPPFLAGS))
 STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/common -L$(top_builddir)/src/port,$(LDFLAGS))
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 1bf38d7b42..6b6001b118 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -27,11 +27,16 @@
 #endif
 
 #ifdef FRONTEND
-#define check_stack_depth()
-#define json_log_and_abort(...) \
+#  define check_stack_depth()
+#  ifdef JSONAPI_NO_LOG
+#    define json_log_and_abort(...) \
+	do { fprintf(stderr, __VA_ARGS__); exit(1); } while(0)
+#  else
+#    define json_log_and_abort(...) \
 	do { pg_log_fatal(__VA_ARGS__); exit(1); } while(0)
+#  endif
 #else
-#define json_log_and_abort(...) elog(ERROR, __VA_ARGS__)
+#  define json_log_and_abort(...) elog(ERROR, __VA_ARGS__)
 #endif
 
 /*
-- 
2.25.1

0003-common-jsonapi-always-palloc-the-error-strings.patch (text/x-patch)
From 5ad4b3c7835fe9e0f284702ec7b827c27770854e Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH 3/7] common/jsonapi: always palloc the error strings

...so that client code can pfree() to avoid memory leaks in long-running
operations.
---
 src/common/jsonapi.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 6b6001b118..f7304f584f 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -1089,7 +1089,7 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			return psprintf(_("Expected JSON value, but found \"%s\"."),
 							extract_token(lex));
 		case JSON_EXPECTED_MORE:
-			return _("The input string ended unexpectedly.");
+			return pstrdup(_("The input string ended unexpectedly."));
 		case JSON_EXPECTED_OBJECT_FIRST:
 			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
 							extract_token(lex));
@@ -1103,16 +1103,16 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			return psprintf(_("Token \"%s\" is invalid."),
 							extract_token(lex));
 		case JSON_UNICODE_CODE_POINT_ZERO:
-			return _("\\u0000 cannot be converted to text.");
+			return pstrdup(_("\\u0000 cannot be converted to text."));
 		case JSON_UNICODE_ESCAPE_FORMAT:
-			return _("\"\\u\" must be followed by four hexadecimal digits.");
+			return pstrdup(_("\"\\u\" must be followed by four hexadecimal digits."));
 		case JSON_UNICODE_HIGH_ESCAPE:
 			/* note: this case is only reachable in frontend not backend */
-			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
+			return pstrdup(_("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8."));
 		case JSON_UNICODE_HIGH_SURROGATE:
-			return _("Unicode high surrogate must not follow a high surrogate.");
+			return pstrdup(_("Unicode high surrogate must not follow a high surrogate."));
 		case JSON_UNICODE_LOW_SURROGATE:
-			return _("Unicode low surrogate must follow a high surrogate.");
+			return pstrdup(_("Unicode low surrogate must follow a high surrogate."));
 	}
 
 	/*
-- 
2.25.1

0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From e3d95709e147ae3670bd8acd0c265493a6116b9a Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH 4/7] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented.

The client implementation requires libiddawc and its development
headers. Configure --with-oauth (and --with-includes/--with-libraries to
point at the iddawc installation, if it's in a custom location).

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- ...and more.
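
The retry rules the device-flow polling loop follows (and that the interval TODOs concern) come from RFC 8628, sec. 3.5: an "authorization_pending" error means retry after the current interval, while "slow_down" permanently adds five seconds to it. That decision in isolation, as a hypothetical helper:

```c
#include <assert.h>
#include <string.h>

/*
 * Per RFC 8628, sec. 3.5: "slow_down" permanently increases the polling
 * interval by 5 seconds; "authorization_pending" (or no error code at all)
 * leaves it unchanged. Hypothetical helper, not part of the patch.
 */
unsigned int
next_poll_interval(unsigned int current, const char *error_code)
{
	if (error_code && strcmp(error_code, "slow_down") == 0)
		return current + 5;
	return current;
}
```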
---
 configure                            | 100 ++++
 configure.ac                         |  19 +
 src/Makefile.global.in               |   1 +
 src/include/common/oauth-common.h    |  19 +
 src/include/pg_config.h.in           |   6 +
 src/interfaces/libpq/Makefile        |   7 +-
 src/interfaces/libpq/fe-auth-oauth.c | 724 +++++++++++++++++++++++++++
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       |  42 +-
 src/interfaces/libpq/fe-auth.h       |   3 +
 src/interfaces/libpq/fe-connect.c    |  38 ++
 src/interfaces/libpq/libpq-int.h     |  10 +-
 12 files changed, 956 insertions(+), 19 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c

diff --git a/configure b/configure
index e9b98f442f..c3b7a89bf0 100755
--- a/configure
+++ b/configure
@@ -713,6 +713,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -856,6 +857,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1562,6 +1564,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth            build with OAuth 2.0 support
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8046,6 +8049,42 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-oauth option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_oauth=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13048,6 +13087,56 @@ fi
 
 
 
+if test "$with_oauth" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-liddawc  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char i_init_session ();
+int
+main ()
+{
+return i_init_session ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_iddawc_i_init_session=yes
+else
+  ac_cv_lib_iddawc_i_init_session=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBIDDAWC 1
+_ACEOF
+
+  LIBS="-liddawc $LIBS"
+
+else
+  as_fn_error $? "library 'iddawc' is required for OAuth support" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13942,6 +14031,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" != no; then
+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
+if test "x$ac_cv_header_iddawc_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 3b42d8bdc9..f15f6f64d5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -842,6 +842,17 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_BOOL(with, oauth, no,
+              [build with OAuth 2.0 support],
+              [AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])])
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1313,6 +1324,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = yes ; then
+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for OAuth support])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1523,6 +1538,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" != no; then
+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8f05840821..3a61dd46d3 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..3fa95ac7e8
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif /* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 783b8fc1ba..db5bc56ac5 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -319,6 +319,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `iddawc' library (-liddawc). */
+#undef HAVE_LIBIDDAWC
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -920,6 +923,9 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 0c4e55b6ad..8e89d50900 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -62,6 +62,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_oauth),yes)
+OBJS += \
+	fe-auth-oauth.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -83,7 +88,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..a27f974369
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,724 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include <iddawc.h>
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static void oauth_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
+						   char **output, int *outputlen,
+						   bool *done, bool *success);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+} fe_oauth_state;
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
+
+	state = malloc(sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+static const char *
+iddawc_error_string(int errcode)
+{
+	switch (errcode)
+	{
+		case I_OK:
+			return "I_OK";
+
+		case I_ERROR:
+			return "I_ERROR";
+
+		case I_ERROR_PARAM:
+			return "I_ERROR_PARAM";
+
+		case I_ERROR_MEMORY:
+			return "I_ERROR_MEMORY";
+
+		case I_ERROR_UNAUTHORIZED:
+			return "I_ERROR_UNAUTHORIZED";
+
+		case I_ERROR_SERVER:
+			return "I_ERROR_SERVER";
+	}
+
+	return "<unknown>";
+}
+
+static void
+iddawc_error(PGconn *conn, int errcode, const char *msg)
+{
+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
+	appendPQExpBuffer(&conn->errorMessage,
+					  libpq_gettext(" (iddawc error %s)\n"),
+					  iddawc_error_string(errcode));
+}
+
+static void
+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
+{
+	const char *error_code;
+	const char *desc;
+
+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
+
+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
+	if (!error_code)
+	{
+		/*
+		 * The server didn't give us any useful information, so just print the
+		 * error code.
+		 */
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("(iddawc error %s)\n"),
+						  iddawc_error_string(err));
+		return;
+	}
+
+	/* If the server gave a string description, print that too. */
+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
+	if (desc)
+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
+
+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
+}
+
+static char *
+get_auth_token(PGconn *conn)
+{
+	PQExpBuffer	token_buf = NULL;
+	struct _i_session session;
+	int			err;
+	int			auth_method;
+	bool		user_prompted = false;
+	const char *verification_uri;
+	const char *user_code;
+	const char *access_token;
+	const char *token_type;
+	char	   *token = NULL;
+
+	if (!conn->oauth_discovery_uri)
+		return strdup(""); /* ask the server for one */
+
+	i_init_session(&session);
+
+	if (!conn->oauth_client_id)
+	{
+		/* We can't talk to a server without a client identifier. */
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("no oauth_client_id is set for the connection"));
+		goto cleanup;
+	}
+
+	token_buf = createPQExpBuffer();
+
+	if (!token_buf)
+		goto cleanup;
+
+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, conn->oauth_discovery_uri);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
+		goto cleanup;
+	}
+
+	err = i_get_openid_config(&session);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer has no token endpoint"));
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer does not support device authorization"));
+		goto cleanup;
+	}
+
+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set device code response type");
+		goto cleanup;
+	}
+
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+
+	err = i_set_parameter_list(&session,
+		I_OPT_CLIENT_ID, conn->oauth_client_id,
+		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+		I_OPT_TOKEN_METHOD, auth_method,
+		I_OPT_SCOPE, conn->oauth_scope,
+		I_OPT_NONE
+	);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set client identifier");
+		goto cleanup;
+	}
+
+	err = i_run_device_auth_request(&session);
+	if (err)
+	{
+		iddawc_request_error(conn, &session, err,
+							"failed to obtain device authorization");
+		goto cleanup;
+	}
+
+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
+	if (!verification_uri)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a verification URI"));
+		goto cleanup;
+	}
+
+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
+	if (!user_code)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a user code"));
+		goto cleanup;
+	}
+
+	/*
+	 * Poll the token endpoint until either the user logs in and authorizes the
+	 * use of a token, or a hard failure occurs. We perform one ping _before_
+	 * prompting the user, so that we don't make them do the work of logging in
+	 * only to find that the token endpoint is completely unreachable.
+	 */
+	err = i_run_token_request(&session);
+	while (err)
+	{
+		const char *error_code;
+		unsigned int interval;
+
+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
+
+		/*
+		 * authorization_pending and slow_down are the only acceptable errors;
+		 * anything else and we bail.
+		 */
+		if (!error_code || (strcmp(error_code, "authorization_pending")
+							&& strcmp(error_code, "slow_down")))
+		{
+			iddawc_request_error(conn, &session, err,
+								"OAuth token retrieval failed");
+			goto cleanup;
+		}
+
+		if (!user_prompted)
+		{
+			/*
+			 * Now that we know the token endpoint isn't broken, give the user
+			 * the login instructions.
+			 */
+			pqInternalNotice(&conn->noticeHooks,
+							 "Visit %s and enter the code: %s",
+							 verification_uri, user_code);
+
+			user_prompted = true;
+		}
+
+		/*
+		 * We are required to wait between polls; the server tells us how long.
+		 * TODO: if interval's not set, we need to default to five seconds
+		 * TODO: sanity check the interval
+		 */
+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
+
+		/*
+		 * A slow_down error requires us to permanently increase our retry
+		 * interval by five seconds. RFC 8628, Sec. 3.5.
+		 */
+		if (!strcmp(error_code, "slow_down"))
+		{
+			interval += 5;
+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
+		}
+
+		sleep(interval);
+
+		/*
+		 * XXX Reset the error code before every call, because iddawc won't do
+		 * that for us. This matters if the server first sends a "pending" error
+		 * code, then later hard-fails without sending an error code to
+		 * overwrite the first one.
+		 *
+		 * That we have to do this at all seems like a bug in iddawc.
+		 */
+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
+
+		err = i_run_token_request(&session);
+	}
+
+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+
+	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a bearer token"));
+		goto cleanup;
+	}
+
+	appendPQExpBufferStr(token_buf, "Bearer ");
+	appendPQExpBufferStr(token_buf, access_token);
+
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	token = strdup(token_buf->data);
+
+cleanup:
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+	i_clean_session(&session);
+
+	return token;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn)
+{
+	static const char * const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBuffer	token_buf;
+	PQExpBuffer	discovery_buf = NULL;
+	char	   *token = NULL;
+	char	   *response = NULL;
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	if (!conn->oauth_discovery_uri && conn->oauth_issuer)
+	{
+		discovery_buf = createPQExpBuffer();
+		if (!discovery_buf)
+			goto cleanup;
+
+		appendPQExpBufferStr(discovery_buf, conn->oauth_issuer);
+		appendPQExpBufferStr(discovery_buf, "/.well-known/openid-configuration");
+
+		if (PQExpBufferBroken(discovery_buf))
+			goto cleanup;
+
+		conn->oauth_discovery_uri = strdup(discovery_buf->data);
+	}
+
+	token = get_auth_token(conn);
+	if (!token)
+		goto cleanup;
+
+	appendPQExpBuffer(token_buf, resp_format, token);
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	response = strdup(token_buf->data);
+
+cleanup:
+	if (token)
+		free(token);
+	if (discovery_buf)
+		destroyPQExpBuffer(discovery_buf);
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char		   *errmsg; /* any non-NULL value stops all processing */
+	int				nested; /* nesting level (zero is the top) */
+
+	const char	   *target_field_name; /* points to a static allocation */
+	char		  **target_field;      /* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char		   *status;
+	char		   *scope;
+	char		   *discovery_uri;
+};
+
+static void
+oauth_json_object_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (ctx->errmsg)
+		return; /* short-circuit */
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		ctx->errmsg = psprintf(libpq_gettext("field \"%s\" must be a string"),
+							   ctx->target_field_name);
+	}
+
+	++ctx->nested;
+}
+
+static void
+oauth_json_object_end(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (ctx->errmsg)
+		return; /* short-circuit */
+
+	--ctx->nested;
+}
+
+static void
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (ctx->errmsg)
+	{
+		/* short-circuit */
+		pfree(name);
+		return;
+	}
+
+	if (ctx->nested == 1)
+	{
+		if (!strcmp(name, ERROR_STATUS_FIELD))
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (!strcmp(name, ERROR_SCOPE_FIELD))
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	pfree(name);
+}
+
+static void
+oauth_json_array_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (ctx->errmsg)
+		return; /* short-circuit */
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = pstrdup(libpq_gettext("top-level element must be an object"));
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		ctx->errmsg = psprintf(libpq_gettext("field \"%s\" must be a string"),
+							   ctx->target_field_name);
+	}
+}
+
+static void
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (ctx->errmsg)
+	{
+		/* short-circuit */
+		pfree(token);
+		return;
+	}
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = pstrdup(libpq_gettext("top-level element must be an object"));
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return; /* don't pfree the token we're using */
+		}
+
+		ctx->errmsg = psprintf(libpq_gettext("field \"%s\" must be a string"),
+							   ctx->target_field_name);
+	}
+
+	pfree(token);
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext	   *lex;
+	JsonSemAction		sem = {0};
+	JsonParseErrorType	err;
+	struct json_ctx		ctx = {0};
+	char			   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL"));
+		return false;
+	}
+
+	lex = makeJsonLexContextCstringLen(msg, msglen, PG_UTF8, true);
+
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		errmsg = json_errdetail(err, lex);
+	}
+	else if (ctx.errmsg)
+	{
+		errmsg = ctx.errmsg;
+	}
+
+	if (errmsg)
+	{
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+		pfree(errmsg);
+		return false;
+	}
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (!strcmp(ctx.status, "invalid_token"))
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen,
+			   bool *done, bool *success)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*done = false;
+	*success = false;
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn);
+			if (!*output)
+				goto error;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			break;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				*done = true;
+				*success = true;
+
+				break;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				goto error;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			*outputlen = strlen(*output); /* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			break;
+
+		case FE_OAUTH_SERVER_ERROR:
+			/*
+			 * After an error, the server should send an error response to fail
+			 * the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge which
+			 * isn't defined in the RFC, or completed the handshake successfully
+			 * after telling us it was going to fail. Neither is acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			goto error;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			goto error;
+	}
+
+	return;
+
+error:
+	*done = true;
+	*success = false;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 04d5703d89..f2ba3bca37 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
+static void scram_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
 						   char **output, int *outputlen,
 						   bool *done, bool *success);
 static bool scram_channel_bound(void *opaq);
@@ -205,7 +206,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static void
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen,
 			   bool *done, bool *success)
 {
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index d5cbac108e..690b23b9d9 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "libpq-fe.h"
@@ -422,7 +423,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -444,8 +445,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support.  Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -485,6 +485,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,7 +523,17 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				!selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -547,18 +558,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
@@ -576,7 +588,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, false,
 						 NULL, -1,
 						 &initialresponse, &initialresponselen,
 						 &done, &success);
@@ -657,7 +669,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, final,
 						 challenge, payloadlen,
 						 &output, &outputlen,
 						 &done, &success);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1e4fcbff62..edc748fd3a 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -26,4 +26,7 @@ extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 extern const pg_sasl_mech pg_scram_mech;
 extern char *pg_fe_scram_build_secret(const char *password);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 10d007582c..1d4bca9194 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -345,6 +345,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Target-Session-Attrs", "", 15, /* sizeof("prefer-standby") = 15 */
 	offsetof(struct pg_conn, target_session_attrs)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -607,6 +624,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -3356,6 +3374,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -4130,6 +4158,16 @@ freePGconn(PGconn *conn)
 		free(conn->rowBuf);
 	if (conn->target_session_attrs)
 		free(conn->target_session_attrs);
+	if (conn->oauth_issuer)
+		free(conn->oauth_issuer);
+	if (conn->oauth_discovery_uri)
+		free(conn->oauth_discovery_uri);
+	if (conn->oauth_client_id)
+		free(conn->oauth_client_id);
+	if (conn->oauth_client_secret)
+		free(conn->oauth_client_secret);
+	if (conn->oauth_scope)
+		free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 25eaa231c5..b749c6c05d 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -340,7 +340,7 @@ typedef struct pg_conn_host
 } pg_conn_host;
 
 typedef void *(*pg_sasl_init_func)(PGconn *, const char *, const char *);
-typedef void  (*pg_sasl_exchange_func)(void *, char *, int, char **, int *, bool *, bool *);
+typedef void  (*pg_sasl_exchange_func)(void *, bool, char *, int, char **, int *, bool *, bool *);
 typedef bool  (*pg_sasl_channel_bound_func)(void *);
 typedef void  (*pg_sasl_free_func)(void *);
 
@@ -406,6 +406,14 @@ struct pg_conn
 	char	   *ssl_max_protocol_version;	/* maximum TLS protocol version */
 	char	   *target_session_attrs;	/* desired session properties */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;			/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery document */
+	char	   *oauth_client_id;		/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;			/* access token scope */
+	bool		oauth_want_retry;		/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
-- 
2.25.1

0005-backend-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From ee8e85d3416f381ba9d44f8d4a681e5006bd5b82 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH 5/7] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On a success, the command may then exit with a zero success code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
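
As a sketch of that contract, a minimal validator might look like the
following Python script (the introspection step and identity string are
placeholders, not part of this patch; a real validator must implement
issuer-specific verification):

```python
#!/usr/bin/env python3
"""Hypothetical oauth_validator_command sketch.

fake_introspect() is a stand-in for real issuer-specific validation
(signature checks or a call to the issuer's introspection endpoint).
"""
import os
import sys


def fake_introspect(token):
    # Placeholder: accept one hardcoded token. Replace with real checks.
    return "alice@example.org" if token == "valid-token" else None


def validate(fd_arg):
    # fd_arg is the pipe descriptor passed via the %f specifier.
    fd = int(fd_arg)

    # Step 1: read the full token from the pipe FIRST, before writing
    # anything to stdout, to avoid deadlocking against the backend.
    with os.fdopen(fd, "r") as pipe:
        token = pipe.read()

    # Step 2: validate the token; a non-zero exit status tells the
    # server to reject the connection.
    identity = fake_introspect(token)
    if identity is None:
        return 1

    # Step 3a: print the authenticated identity, newline-terminated.
    print(identity)
    return 0


if __name__ == "__main__":
    sys.exit(validate(sys.argv[1]))
```

Such a script could be hooked up with something like
oauth_validator_command = '/path/to/validator.py %f' (path invented for
illustration).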

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
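
Putting those options together, a pg_hba.conf line using this method
might look like the following (the issuer URL and scope are illustrative
values, not defaults):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://accounts.example.com" scope="openid email" trust_validator_authz=1
```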

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.
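
For completeness, the server side is wired up through a single GUC; a
hypothetical postgresql.conf setting might be (the script path is
invented for illustration):

```
# %f is replaced with the token pipe's file descriptor, %r with the
# role the client wants to assume.
oauth_validator_command = '/usr/local/bin/pg-oauth-validator %f %r'
```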
---
 src/backend/libpq/Makefile     |   1 +
 src/backend/libpq/auth-oauth.c | 797 +++++++++++++++++++++++++++++++++
 src/backend/libpq/auth-scram.c |   2 +
 src/backend/libpq/auth.c       |  43 +-
 src/backend/libpq/hba.c        |  29 +-
 src/backend/utils/misc/guc.c   |  12 +
 src/include/libpq/auth.h       |   1 +
 src/include/libpq/hba.h        |   8 +-
 src/include/libpq/oauth.h      |  24 +
 src/include/libpq/sasl.h       |  26 ++
 10 files changed, 915 insertions(+), 28 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 8d1d16b0fc..40f2c50c3c 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-scram.o \
 	auth.o \
 	be-fsstubs.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..b2b9d56e7c
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,797 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char *oauth_validator_command;
+
+static void  oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int   oauth_exchange(void *opaq, const char *input, int inputlen,
+							char **output, int *outputlen, char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state	state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool unset_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, char **logdetail)
+{
+	char   *p;
+	char	cbind_flag;
+	char   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a 'y'
+	 * specifier purely for the remote chance that a future specification could
+	 * define one; then future clients can still interoperate with this server
+	 * implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y': /* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag \"%s\".",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character \"%s\".",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char   *pos = *input;
+	char   *auth = NULL;
+
+	/*
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char   *end;
+		char   *sep;
+		char   *key;
+		char   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per
+			 * Sec. 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL; /* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData	buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's not
+	 * really a way to hide this from the user, either, because we can't choose
+	 * a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+		"{ "
+			"\"status\": \"invalid_token\", "
+			"\"openid-configuration\": \"%s/.well-known/openid-configuration\","
+			"\"scope\": \"%s\" "
+		"}",
+		ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, char **logdetail)
+{
+	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but it's
+	 * pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information about
+	 * the sensitive Bearer token back to the client; log at COMMERROR instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end with
+	 * any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the problematic
+		 * character(s), but that'd be a bit like printing a piece of someone's
+		 * password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator says
+		 * the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!port->authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name, port->authn_id,
+						false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = { 0 };
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*
+	 * Since popen() is unidirectional, open up a pipe for the other direction.
+	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
+	 * into child processes, which would prevent us from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open up the potential for process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe2(pipefd, O_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	/* Allow the read pipe to be passed to the child. */
+	if (!unset_cloexec(rfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+					/*
+					 * TODO: decide how this string should be escaped. The role
+					 * is controlled by the client, so if we don't escape it,
+					 * command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some other
+					 * way. For this proof of concept, just be incredibly strict
+					 * about the characters that are allowed in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "re");
+	/* TODO: handle failures */
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+unset_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not unset FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-_./:";
+	size_t	span;
+
+	Assert(username && username[0]); /* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index db3ca75a60..9e4482dc27 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -118,6 +118,8 @@ const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
 	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH,
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index e20740a7c5..354c7b0fc8 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -49,7 +50,6 @@ static void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
 static void auth_failed(Port *port, int status, char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 /*----------------------------------------------------------------
  * SASL common authentication
@@ -215,29 +215,12 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
+/*----------------------------------------------------------------
+ * OAuth v2 Bearer Authentication
+ *----------------------------------------------------------------
  */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+static int	CheckOAuthBearer(Port *port);
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
 
 /*----------------------------------------------------------------
  * Global authentication functions
@@ -327,6 +310,9 @@ auth_failed(Port *port, int status, char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -361,7 +347,7 @@ auth_failed(Port *port, int status, char *logdetail)
  * lifetime of the Port, so it is safe to pass a string that is managed by an
  * external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -646,6 +632,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckOAuthBearer(port);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
@@ -973,7 +962,7 @@ SASL_exchange(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
@@ -3495,3 +3484,9 @@ PerformRadiusTransaction(const char *server, const char *secret, const char *por
 		}
 	}							/* while (true) */
 }
+
+static int
+CheckOAuthBearer(Port *port)
+{
+	return SASL_exchange(&pg_be_oauth_mech, port, NULL, NULL);
+}
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 3be8778d21..98147700dd 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -134,7 +134,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 
@@ -1399,6 +1400,8 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -1713,8 +1716,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
+			hbaline->auth_method != uaOAuth &&
 			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, oauth, and cert"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2098,6 +2102,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 68b62d523d..1ef6b3c41e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -56,6 +56,7 @@
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "miscadmin.h"
 #include "optimizer/cost.h"
 #include "optimizer/geqo.h"
@@ -4587,6 +4588,17 @@ static struct config_string ConfigureNamesString[] =
 		check_backtrace_functions, assign_backtrace_functions, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 3610fae3ff..785cc5d16f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -21,6 +21,7 @@ extern bool pg_krb_caseins_users;
 extern char *pg_krb_realm;
 
 extern void ClientAuthentication(Port *port);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8d9f3821b1..441dd5623e 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -38,8 +38,9 @@ typedef enum UserAuth
 	uaLDAP,
 	uaCert,
 	uaRADIUS,
-	uaPeer
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaPeer,
+	uaOAuth
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -120,6 +121,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..870e426af1
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif /* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 8c9c9983d4..f1341d0c54 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -19,6 +19,30 @@
 #define SASL_EXCHANGE_SUCCESS		1
 #define SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /* Backend mechanism API */
 typedef void  (*pg_be_sasl_mechanism_func)(Port *, StringInfo);
 typedef void *(*pg_be_sasl_init_func)(Port *, const char *, const char *);
@@ -29,6 +53,8 @@ typedef struct
 	pg_be_sasl_mechanism_func	get_mechanisms;
 	pg_be_sasl_init_func		init;
 	pg_be_sasl_exchange_func	exchange;
+
+	int							max_message_length;
 } pg_be_sasl_mech;
 
 #endif /* PG_SASL_H */
-- 
2.25.1
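For reference, the validator contract implemented above (the server pipes the bearer token to an external command, reads a single newline-terminated authn_id line back, and treats a nonzero exit status as an authentication failure) can be sketched as a toy validator. This is an illustration only, not part of the patch: the static `TOKENS` table is a stand-in for real token introspection against the issuer, and for simplicity it assumes the token arrives on stdin, whereas the PoC hands the child one end of a pipe whose exact wiring isn't shown in this excerpt.

```python
import sys

# Hypothetical token-to-identity table; a real validator would instead
# introspect the token with the OAuth issuer (or verify a JWT signature).
TOKENS = {"sometoken": "alice@example.org"}


def validate(token):
    """Return the authn_id for a valid bearer token, or None."""
    return TOKENS.get(token.strip())


def main(stdin=sys.stdin, stdout=sys.stdout):
    # The server hands us the bearer token and expects one
    # newline-terminated authn_id line back; returning nonzero
    # (the process exit status) rejects the connection.
    authn_id = validate(stdin.read())
    if authn_id is None:
        return 1
    stdout.write(authn_id + "\n")
    return 0
```

A script along these lines would be wired up through the new `oauth_validator_command` GUC, with the command string expanded by the %-escaping code at the top of this patch.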

0006-Add-a-very-simple-authn_id-extension.patch (text/x-patch)
From e468be7ff7d19645aeb77bef21a383960a47731e Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 18 May 2021 15:01:29 -0700
Subject: [PATCH 6/7] Add a very simple authn_id extension

...for retrieving the authn_id from the server in tests.
---
 contrib/authn_id/Makefile          | 19 +++++++++++++++++++
 contrib/authn_id/authn_id--1.0.sql |  8 ++++++++
 contrib/authn_id/authn_id.c        | 28 ++++++++++++++++++++++++++++
 contrib/authn_id/authn_id.control  |  5 +++++
 4 files changed, 60 insertions(+)
 create mode 100644 contrib/authn_id/Makefile
 create mode 100644 contrib/authn_id/authn_id--1.0.sql
 create mode 100644 contrib/authn_id/authn_id.c
 create mode 100644 contrib/authn_id/authn_id.control

diff --git a/contrib/authn_id/Makefile b/contrib/authn_id/Makefile
new file mode 100644
index 0000000000..46026358e0
--- /dev/null
+++ b/contrib/authn_id/Makefile
@@ -0,0 +1,19 @@
+# contrib/authn_id/Makefile
+
+MODULE_big = authn_id
+OBJS = authn_id.o
+
+EXTENSION = authn_id
+DATA = authn_id--1.0.sql
+PGFILEDESC = "authn_id - information about the authenticated user"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/authn_id
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/authn_id/authn_id--1.0.sql b/contrib/authn_id/authn_id--1.0.sql
new file mode 100644
index 0000000000..af2a4d3991
--- /dev/null
+++ b/contrib/authn_id/authn_id--1.0.sql
@@ -0,0 +1,8 @@
+/* contrib/authn_id/authn_id--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION authn_id" to load this file. \quit
+
+CREATE FUNCTION authn_id() RETURNS text
+AS 'MODULE_PATHNAME', 'authn_id'
+LANGUAGE C IMMUTABLE;
diff --git a/contrib/authn_id/authn_id.c b/contrib/authn_id/authn_id.c
new file mode 100644
index 0000000000..0fecac36a8
--- /dev/null
+++ b/contrib/authn_id/authn_id.c
@@ -0,0 +1,28 @@
+/*
+ * Extension to expose the current user's authn_id.
+ *
+ * contrib/authn_id/authn_id.c
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/libpq-be.h"
+#include "miscadmin.h"
+#include "utils/builtins.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(authn_id);
+
+/*
+ * Returns the current user's authenticated identity.
+ */
+Datum
+authn_id(PG_FUNCTION_ARGS)
+{
+	if (!MyProcPort->authn_id)
+		PG_RETURN_NULL();
+
+	PG_RETURN_TEXT_P(cstring_to_text(MyProcPort->authn_id));
+}
diff --git a/contrib/authn_id/authn_id.control b/contrib/authn_id/authn_id.control
new file mode 100644
index 0000000000..e0f9e06bed
--- /dev/null
+++ b/contrib/authn_id/authn_id.control
@@ -0,0 +1,5 @@
+# authn_id extension
+comment = 'current user identity'
+default_version = '1.0'
+module_pathname = '$libdir/authn_id'
+relocatable = true
-- 
2.25.1
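The pytest suite in the next patch exercises the OAUTHBEARER message layout from RFC 7628: a GS2 header (`n,,` for no channel binding and no authzid), \x01-separated key/value pairs, and a double-\x01 terminator. As a standalone reference, that layout can be modeled as below; the helper names are ours, and the encoder is a simplification of what libpq actually produces.

```python
# Client-first message shape per RFC 7628, Section 3.1.
GS2_HEADER = b"n,,"  # no channel binding, no authzid


def encode_client_first(token):
    """Build an initial SASL response carrying a bearer token."""
    return GS2_HEADER + b"\x01auth=Bearer " + token + b"\x01\x01"


def decode_auth_value(initial):
    """Extract the auth value ("Bearer ...") from a client-first message."""
    kvpairs = initial.split(b"\x01")
    # GS2 header first, then kvpairs, ending with an empty pair.
    assert kvpairs[0] == GS2_HEADER and kvpairs[-2:] == [b"", b""]
    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value
```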

0007-Add-pytest-suite-for-OAuth.patch (text/x-patch)
From 896da918cfcd16bcb119090914f687b3e905d865 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH 7/7] Add pytest suite for OAuth

Requires Python 3; on the first run of `make installcheck` the
dependencies will be installed into ./venv for you. See the README for
more details.
---
 src/test/python/.gitignore                 |    2 +
 src/test/python/Makefile                   |   33 +
 src/test/python/README                     |   49 +
 src/test/python/client/__init__.py         |    0
 src/test/python/client/conftest.py         |  126 +++
 src/test/python/client/test_client.py      |  180 ++++
 src/test/python/client/test_oauth.py       |  936 ++++++++++++++++++
 src/test/python/pq3.py                     |  727 ++++++++++++++
 src/test/python/pytest.ini                 |    4 +
 src/test/python/requirements.txt           |    7 +
 src/test/python/server/__init__.py         |    0
 src/test/python/server/conftest.py         |   45 +
 src/test/python/server/test_oauth.py       | 1012 ++++++++++++++++++++
 src/test/python/server/test_server.py      |   21 +
 src/test/python/server/validate_bearer.py  |  101 ++
 src/test/python/server/validate_reflect.py |   34 +
 src/test/python/test_internals.py          |  138 +++
 src/test/python/test_pq3.py                |  558 +++++++++++
 src/test/python/tls.py                     |  195 ++++
 19 files changed, 4168 insertions(+)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100755 src/test/python/server/validate_bearer.py
 create mode 100755 src/test/python/server/validate_reflect.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py

diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..515a995106
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,33 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK): requirements.txt | $(PIP)
+	$(PIP) install -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..ceae364e81
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,49 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck
+
+will install a local virtual environment and all needed dependencies.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..f38da7a138
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,126 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+    client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..a754a9c0b6
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,936 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import http.server
+import json
+import secrets
+import sys
+import threading
+import time
+import urllib.parse
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            self.server.serve_forever()
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+            self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _discovery_handler(self, headers, params):
+            oauth = self.server.oauth
+
+            doc = {
+                "issuer": oauth.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+            }
+
+            for name, path in oauth.endpoint_paths.items():
+                doc[name] = oauth.issuer + path
+
+            return 200, doc
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            code, resp = handler(self.headers, params)
+
+            self.send_response(code)
+            self.send_header("Content-Type", "application/json")
+            self.end_headers()
+
+            resp = json.dumps(resp)
+            resp = resp.encode("utf-8")
+            self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            if self.path == "/.well-known/openid-configuration":
+                self._handle(handler=self._discovery_handler)
+                return
+
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_with_explicit_issuer(
+    capfd, accept, openid_provider, retries, scope, secret
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user with the expected
+        # authorization URL and user code.
+        expected = f"Visit {verification_url} and enter the code: {user_code}"
+        _, stderr = capfd.readouterr()
+        assert expected in stderr
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "invalid_client",
+                "error_description": "client authentication failed",
+            },
+            r"client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            {"error": "invalid_request"},
+            r"\(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            {},
+            r"failed to obtain device authorization",
+            id="broken error response",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "expired_token",
+                "error_description": "the device code has expired",
+            },
+            r"the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            {"error": "access_denied"},
+            r"\(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            {},
+            r"OAuth token retrieval failed",
+            id="broken error response",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
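(To summarize the wire format exercised by the tests above: the client's OAUTHBEARER initial response is a gs2 header followed by ^A-separated key/value pairs, closed by an empty kvpair, per RFC 7628 Sec. 3.1. A minimal sketch, mirroring what `get_auth_value()` expects -- the helper names here are illustrative, not part of the patch:)

```python
def build_initial_response(token: str) -> bytes:
    # RFC 7628: gs2 header, then one "auth" kvpair, each terminated by
    # ^A (0x01), with an empty kvpair ending the message:
    #   n,,\x01auth=Bearer <token>\x01\x01
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"


def parse_auth_value(initial: bytes) -> bytes:
    # Split on ^A and pull out the value of the single "auth" kvpair.
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"        # no channel binding or authzid
    assert kvpairs[-2:] == [b"", b""]  # empty kvpair closes the message
    # partition() splits on the first "=" only, since base64-padded
    # tokens may themselves contain "=".
    key, _, value = kvpairs[1].partition(b"=")
    assert key == b"auth"
    return value


msg = build_initial_response("some-opaque-token")
```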
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..3a22dad0b6
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,727 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
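As a cross-check of the framing the `Startup` struct above models (an illustrative sketch, not part of the patch), a v3 startup packet can be built with plain `struct` packing: a self-inclusive int32 length, the int32 protocol version `(3 << 16)`, NUL-terminated key/value pairs, and a final NUL.

```python
import struct


def build_startup(params):
    # NUL-terminated key/value pairs, closed by one empty key.
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00" + v.encode("utf-8") + b"\x00"
    payload += b"\x00"

    # Protocol 3.0 is (3 << 16) | 0 == 196608. The length field counts
    # itself and the protocol field, hence the + 8.
    return struct.pack("!ii", len(payload) + 8, 3 << 16) + payload


pkt = build_startup({"user": "alice", "database": "postgres"})
assert struct.unpack_from("!ii", pkt) == (len(pkt), 196608)
```

The parameter values here are made up; the pq3 helpers below fill them from the PG* environment.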
+
+# Pq3
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
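Similarly, a minimal sketch of the regular-message framing that `Pq3` models (again for illustration only): one type byte, then an int32 length that counts itself and the payload but not the type byte.

```python
import struct


def frame(msg_type, payload):
    # The length field covers itself plus the payload, not the type byte.
    return msg_type + struct.pack("!i", len(payload) + 4) + payload


def unframe(buf):
    (length,) = struct.unpack_from("!i", buf, 1)
    return buf[0:1], buf[5 : 1 + length]


msg = frame(b"Q", b"SELECT 1;\x00")
assert unframe(msg) == (b"Q", b"SELECT 1;\x00")
```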
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..32f105ea84
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,7 @@
+black
+cryptography~=3.4.6
+construct~=2.10.61
+isort~=5.6
+psycopg2~=2.8.6
+pytest~=6.1
+pytest-asyncio~=0.14.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..ba7342a453
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,45 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+
+import pytest
+
+import pq3
+
+
+@pytest.fixture
+def connect():
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. The calling test will be
+    skipped automatically if a server is not running at PGHOST:PGPORT, so it's
+    best to connect as soon as possible after the test case begins, to avoid
+    doing unnecessary work.
+    """
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            addr = (pq3.pghost(), pq3.pgport())
+
+            try:
+                sock = socket.create_connection(addr, timeout=2)
+            except ConnectionError as e:
+                pytest.skip(f"unable to connect to {addr}: {e}")
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..355ef8e4bd
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1012 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_TOKEN_SIZE = 4096
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def skip_if_no_postgres():
+    """
+    Used by the oauth_ctx fixture to skip this test module if no Postgres server
+    is running.
+
+    This logic nearly duplicates the connect fixture. Ideally oauth_ctx would
+    depend on that fixture directly, but a module-scope fixture can't depend
+    on a test-scope fixture, and we haven't reached the rule of three yet.
+    """
+    addr = (pq3.pghost(), pq3.pgport())
+
+    try:
+        with socket.create_connection(addr, timeout=2):
+            pass
+    except ConnectionError as e:
+        pytest.skip(f"unable to connect to {addr}: {e}")
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx():
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    skip_if_no_postgres()  # don't bother running these tests without a server
+
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = (
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    )
+    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
+
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Make this test script the server's oauth_validator.
+        path = pathlib.Path(__file__).parent / "validate_bearer.py"
+        path = str(path.absolute())
+
+        cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def authn_id_extension(oauth_ctx):
+    """
+    Performs a `CREATE EXTENSION authn_id` in the test database. This fixture is
+    autoused, so tests don't need to rely on it.
+    """
+    conn = psycopg2.connect(database=oauth_ctx.dbname)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        c.execute("CREATE EXTENSION authn_id;")
+
+
+@pytest.fixture(scope="session")
+def shared_mem():
+    """
+    Yields a shared memory segment that can be used for communication between
+    the bearer_token fixture and ./validate_bearer.py.
+    """
+    size = MAX_TOKEN_SIZE + 2  # two-byte length prefix
+    mem = shared_memory.SharedMemory(SHARED_MEM_NAME, create=True, size=size)
+
+    try:
+        with contextlib.closing(mem):
+            yield mem
+    finally:
+        mem.unlink()
+
+
+@pytest.fixture()
+def bearer_token(shared_mem):
+    """
+    Returns a factory function that, when called, will store a Bearer token in
+    shared_mem. If token is None (the default), a new token will be generated
+    using secrets.token_urlsafe() and returned; otherwise the passed token will
+    be used as-is.
+
+    When token is None, the generated token size in bytes may be specified as an
+    argument; if unset, a small 16-byte token will be generated. The token size
+    may not exceed MAX_TOKEN_SIZE in any case.
+
+    The return value is the token, converted to a bytes object.
+
+    As a special case for testing failure modes, accept_any may be set to True.
+    This signals to the validator command that any bearer token should be
+    accepted. The returned token in this case may be used or discarded as needed
+    by the test.
+    """
+
+    def set_token(token=None, *, size=16, accept_any=False):
+        if token is not None:
+            size = len(token)
+
+        if size > MAX_TOKEN_SIZE:
+            raise ValueError(f"token size {size} exceeds maximum size {MAX_TOKEN_SIZE}")
+
+        if token is None:
+            if size % 4:
+                raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+            token = secrets.token_urlsafe(size // 4 * 3)
+            assert len(token) == size
+
+        try:
+            token = token.encode("ascii")
+        except AttributeError:
+            pass  # already encoded
+
+        if accept_any:
+            # Two-byte magic value.
+            shared_mem.buf[:2] = struct.pack("H", MAX_UINT16)
+        else:
+            # Two-byte length prefix, then the token data.
+            shared_mem.buf[:2] = struct.pack("H", len(token))
+            shared_mem.buf[2 : size + 2] = token
+
+        return token
+
+    return set_token
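The shared-memory layout written above, a two-byte length prefix followed by the token bytes, with the MAX_UINT16 magic meaning "accept any token", can be decoded the same way on the validator side. A standalone sketch (token value made up; native byte order is fine here because writer and reader run on the same host):

```python
import struct

MAX_TOKEN_SIZE = 4096
MAX_UINT16 = 2 ** 16 - 1

buf = bytearray(MAX_TOKEN_SIZE + 2)  # two-byte length prefix plus token area

# Writer side: length prefix, then token data.
token = b"0123456789abcdef"
buf[:2] = struct.pack("H", len(token))
buf[2 : 2 + len(token)] = token

# Reader side: reverse the framing.
(length,) = struct.unpack_from("H", buf)
assert length != MAX_UINT16  # MAX_UINT16 would mean "accept any token"
assert bytes(buf[2 : 2 + length]) == token
```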
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
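The client response assembled above follows the RFC 7628 OAUTHBEARER layout: a gs2 header ("n,,"), then ^A-delimited key/value entries carrying the auth value, closed by a double ^A. A standalone sketch with a made-up token:

```python
token = b"abcdzA=="  # hypothetical; the tests get theirs from bearer_token
msg = b"n,,\x01auth=Bearer " + token + b"\x01\x01"

# The gs2 header and the kvsep-delimited auth entry can be split back out:
gs2, _, rest = msg.partition(b"\x01")
assert gs2 == b"n,,"
assert rest == b"auth=Bearer " + token + b"\x01\x01"
```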
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(conn, oauth_ctx, bearer_token, auth_prefix, token_len):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    auth = auth_prefix + token
+
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(conn, oauth_ctx, bearer_token, token_value):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=bearer_token(token_value))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(conn, oauth_ctx, bearer_token, user, authn_id, should_succeed):
+    token = None
+
+    authn_id = authn_id(oauth_ctx)
+    if authn_id is not None:
+        authn_id = authn_id.encode("ascii")
+
+        # As a hack to get the validator to reflect arbitrary output from this
+        # test, encode the desired output as a base64 token. The validator will
+        # key on the leading "output=" to differentiate this from the random
+        # tokens generated by secrets.token_urlsafe().
+        output = b"output=" + authn_id + b"\n"
+        token = base64.urlsafe_b64encode(output)
+
+    token = bearer_token(token)
+    username = user(oauth_ctx)
+
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token)
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [authn_id]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Fails an assertion unless exactly
+        one such field is found.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx, bearer_token):
+    # Generate a new bearer token, which we will proceed not to use.
+    _ = bearer_token()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer me@example.com",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(conn, oauth_ctx, bearer_token, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    _ = bearer_token(accept_any=True)
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + bearer_token() + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+@pytest.fixture()
+def set_validator():
+    """
+    A per-test fixture that allows a test to override the setting of
+    oauth_validator_command for the cluster. The setting will be reverted during
+    teardown.
+
+    Passing None will perform an ALTER SYSTEM RESET.
+    """
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Save the previous value.
+        c.execute("SHOW oauth_validator_command;")
+        prev_cmd = c.fetchone()[0]
+
+        def setter(cmd):
+            if cmd is None:
+                c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+            else:
+                c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous value.
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_oauth_no_validator(oauth_ctx, set_validator, connect, bearer_token):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+def test_oauth_validator_role(oauth_ctx, set_validator, connect):
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    # Log in. Note that the reflection validator ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=oauth_ctx.user)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = oauth_ctx.user.encode("utf-8")
+    assert row.columns == [expected]
+
+
+def test_oauth_role_with_shell_unsafe_characters(oauth_ctx, set_validator, connect):
+    """
+    XXX This test pins undesirable behavior. We should be able to handle any
+    valid Postgres role name.
+    """
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    unsafe_username = "hello'there"
+    begin_oauth_handshake(conn, oauth_ctx, user=unsafe_username)
+
+    # The server should reject the handshake.
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_failure(conn, oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/server/validate_bearer.py b/src/test/python/server/validate_bearer.py
new file mode 100755
index 0000000000..2cc73ff154
--- /dev/null
+++ b/src/test/python/server/validate_bearer.py
@@ -0,0 +1,101 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It doesn't actually validate
+# anything, and it logs the bearer token data, which is sensitive.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. Memory is shared and communicated from that test module's
+# bearer_token() fixture.
+#
+# This script must run under the Postgres server environment; keep the
+# dependency list fairly standard.
+
+import base64
+import binascii
+import contextlib
+import struct
+import sys
+from multiprocessing import shared_memory
+
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def remove_shm_from_resource_tracker():
+    """
+    Monkey-patch multiprocessing.resource_tracker so SharedMemory won't be
+    tracked. Pulled from this thread, where there are more details:
+
+        https://bugs.python.org/issue38119
+
+    TL;DR: all clients of shared memory segments automatically destroy them on
+    process exit, which makes shared memory segments much less useful. This
+    monkeypatch removes that behavior so that we can defer to the test to manage
+    the segment lifetime.
+
+    Ideally a future Python patch will pull in this fix and then the entire
+    function can go away.
+    """
+    from multiprocessing import resource_tracker
+
+    def fix_register(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.register(name, rtype)
+
+    resource_tracker.register = fix_register
+
+    def fix_unregister(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.unregister(name, rtype)
+
+    resource_tracker.unregister = fix_unregister
+
+    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
+        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
+
+
+def main(args):
+    remove_shm_from_resource_tracker()  # XXX remove some day
+
+    # Get the expected token from the currently running test.
+    shared_mem_name = args[0]
+
+    mem = shared_memory.SharedMemory(shared_mem_name)
+    with contextlib.closing(mem):
+        # First two bytes are the token length.
+        size = struct.unpack("H", mem.buf[:2])[0]
+
+        if size == MAX_UINT16:
+            # Special case: the test wants us to accept any token.
+            sys.stderr.write("accepting token without validation\n")
+            return
+
+        # The remainder of the buffer contains the expected token.
+        assert size <= (mem.size - 2)
+        expected_token = mem.buf[2 : size + 2].tobytes()
+
+        mem.buf[:] = b"\0" * mem.size  # scribble over the token
+
+    token = sys.stdin.buffer.read()
+    if token != expected_token:
+        sys.exit(f"failed to match Bearer token ({token!r} != {expected_token!r})")
+
+    # See if the test wants us to print anything. If so, it will have encoded
+    # the desired output in the token with an "output=" prefix.
+    try:
+        # altchars="-_" corresponds to the urlsafe alphabet.
+        data = base64.b64decode(token, altchars="-_", validate=True)
+
+        if data.startswith(b"output="):
+            sys.stdout.buffer.write(data[7:])
+
+    except binascii.Error:
+        pass
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/src/test/python/server/validate_reflect.py b/src/test/python/server/validate_reflect.py
new file mode 100755
index 0000000000..24c3a7e715
--- /dev/null
+++ b/src/test/python/server/validate_reflect.py
@@ -0,0 +1,34 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It ignores the bearer token
+# entirely and automatically logs the user in.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. It expects the user's desired role name as an argument; the
+# actual token will be discarded and the user will be logged in with the role
+# name as the authenticated identity.
+#
+# This script must run under the Postgres server environment; keep the
+# dependency list fairly standard.
+
+import sys
+
+
+def main(args):
+    # We have to read the entire token as our first action to unblock the
+    # server, but we won't actually use it.
+    _ = sys.stdin.buffer.read()
+
+    if len(args) != 1:
+        sys.exit("usage: ./validate_reflect.py ROLE")
+
+    # Log the user in as the provided role.
+    role = args[0]
+    print(role)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..e0c0e0568d
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,558 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        ("PGUSER", pq3.pguser, getpass.getuser()),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
-- 
2.25.1

#2Michael Paquier
michael@paquier.xyz
In reply to: Jacob Champion (#1)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jun 08, 2021 at 04:37:46PM +0000, Jacob Champion wrote:

1. Prep

0001 decouples the SASL code from the SCRAM implementation.
0002 makes it possible to use common/jsonapi from the frontend.
0003 lets the json_errdetail() result be freed, to avoid leaks.

The first three patches are, hopefully, generally useful outside of
this implementation, and I'll plan to register them in the next
commitfest. The middle two patches are the "interesting" pieces, and
I've split them into client and server for ease of understanding,
though neither is particularly useful without the other.

Beginning with the beginning, could you spawn two threads for the
jsonapi rework and the SASL/SCRAM business? I agree that these look
independently useful. Glad to see someone improving the SASL and SCRAM
code, which are too inter-dependent now. I saw in the RFCs dedicated to
OAUTH the need for the JSON part as well.

+#  define check_stack_depth()
+#  ifdef JSONAPI_NO_LOG
+#    define json_log_and_abort(...) \
+   do { fprintf(stderr, __VA_ARGS__); exit(1); } while(0)
+#  else
In patch 0002, this is the wrong approach.  libpq will not be able to
feed on such reports, and you cannot use any of the APIs from the
palloc() family either as these just fail on OOM.  libpq should be
able to know about the error, and would fill in the error back to the
application.  This abstraction is not necessary on HEAD as
pg_verifybackup is fine with this level of reporting.  My rough guess
is that we will need to split the existing jsonapi.c into two files,
one that can be used in shared libraries and a second that handles the 
errors.
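To make the contrast concrete, here is a toy sketch (in Python, with invented names, mirroring the thread's Python test suite) of the difference between abort-style reporting and handing the error back to the caller, which is what libpq needs:

```python
# Hypothetical illustration only: a parser embedded in a client library
# should return errors to its caller rather than logging and exiting.

class JsonLexError(Exception):
    pass

def parse_scalar(token):
    """Abort-style reporting (what json_log_and_abort() does) would exit
    the process here; instead we raise, so the caller can surface the
    error through its own channel."""
    literals = {"true": True, "false": False, "null": None}
    if token not in literals:
        raise JsonLexError(f"unexpected token: {token!r}")
    return literals[token]

def parse_or_report(token):
    """Caller-side handling: convert the failure into an error string,
    the way libpq would fill in the connection's error message buffer."""
    try:
        return parse_scalar(token), None
    except JsonLexError as err:
        return None, str(err)
```

The point is that the library half never terminates the process; the error-formatting half lives with the caller.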
+           /* TODO: SASL_EXCHANGE_FAILURE with output is forbidden in SASL */
            if (result == SASL_EXCHANGE_SUCCESS)
                sendAuthRequest(port,
                            AUTH_REQ_SASL_FIN,
                            output,
                            outputlen);
Perhaps that's an issue we need to worry about on its own? I didn't
recall this part..
--
Michael
#3Heikki Linnakangas
hlinnaka@iki.fi
In reply to: Jacob Champion (#1)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 08/06/2021 19:37, Jacob Champion wrote:

We've been working on ways to expand the list of third-party auth
methods that Postgres provides. Some example use cases might be "I want
to let anyone with a Google account read this table" or "let anyone who
belongs to this GitHub organization connect as a superuser".

Cool!

The iddawc dependency for client-side OAuth was extremely helpful to
develop this proof of concept quickly, but I don't think it would be an
appropriate component to build a real feature on. It's extremely
heavyweight -- it incorporates a huge stack of dependencies, including
a logging framework and a web server, to implement features we would
probably never use -- and it's fairly difficult to debug in practice.
If a device authorization flow were the only thing that libpq needed to
support natively, I think we should just depend on a widely used HTTP
client, like libcurl or neon, and implement the minimum spec directly
against the existing test suite.

You could punt and let the application implement that stuff. I'm
imagining that the application code would look something like this:

conn = PQconnectStartParams(...);
for (;;)
{
    status = PQconnectPoll(conn);
    switch (status)
    {
        case CONNECTION_SASL_TOKEN_REQUIRED:
            /* open a browser for the user, get token */
            token = open_browser();
            PQauthResponse(token);
            break;
        ...
    }
}

It would be nice to have a simple default implementation, though, for
psql and all the other client applications that come with PostgreSQL itself.

If you've read this far, thank you for your interest, and I hope you
enjoy playing with it!

A few small things caught my eye in the backend oauth_exchange function:

+       /* Handle the client's initial message. */
+       p = strdup(input);

this strdup() should be pstrdup().

In the same function, there are a bunch of reports like this:

ereport(ERROR,
+                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                           errmsg("malformed OAUTHBEARER message"),
+                           errdetail("Comma expected, but found character \"%s\".",
+                                     sanitize_char(*p))));

I don't think the double quotes are needed here, because sanitize_char
will return quotes if it's a single character. So it would end up
looking like this: ... found character "'x'".

- Heikki

#4Jacob Champion
pchampion@vmware.com
In reply to: Heikki Linnakangas (#3)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:

On 08/06/2021 19:37, Jacob Champion wrote:

We've been working on ways to expand the list of third-party auth
methods that Postgres provides. Some example use cases might be "I want
to let anyone with a Google account read this table" or "let anyone who
belongs to this GitHub organization connect as a superuser".

Cool!

Glad you think so! :D

The iddawc dependency for client-side OAuth was extremely helpful to
develop this proof of concept quickly, but I don't think it would be an
appropriate component to build a real feature on. It's extremely
heavyweight -- it incorporates a huge stack of dependencies, including
a logging framework and a web server, to implement features we would
probably never use -- and it's fairly difficult to debug in practice.
If a device authorization flow were the only thing that libpq needed to
support natively, I think we should just depend on a widely used HTTP
client, like libcurl or neon, and implement the minimum spec directly
against the existing test suite.

You could punt and let the application implement that stuff. I'm
imagining that the application code would look something like this:

conn = PQconnectStartParams(...);
for (;;)
{
    status = PQconnectPoll(conn);
    switch (status)
    {
        case CONNECTION_SASL_TOKEN_REQUIRED:
            /* open a browser for the user, get token */
            token = open_browser();
            PQauthResponse(token);
            break;
        ...
    }
}

I was toying with the idea of having a callback for libpq clients,
where they could take full control of the OAuth flow if they wanted to.
Doing it inline with PQconnectPoll seems like it would work too. It has
a couple of drawbacks that I can see:

- If a client isn't currently using a poll loop, they'd have to switch
to one to be able to use OAuth connections. Not a difficult change, but
considering all the other hurdles to making this work, I'm hoping to
minimize the hoop-jumping.

- A client would still have to receive a bunch of OAuth parameters from
some new libpq API in order to construct the correct URL to visit, so
the overall complexity for implementers might be higher than if we just
passed those params directly in a callback.
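As a rough sketch of the callback shape (Python stand-in, all names invented for illustration, not a proposed libpq API): the driver hands the application the OAuth parameters it learned from the server, the hook returns a bearer token, and the driver wraps it in the OAUTHBEARER initial response from RFC 7628.

```python
def build_initial_response(token):
    """OAUTHBEARER initial client response per RFC 7628: a gs2 header
    followed by key/value pairs delimited by 0x01 bytes."""
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"

def start_connection(oauth_params, token_hook):
    """token_hook(params) -> bearer token string, supplied by the app."""
    token = token_hook(oauth_params)
    return build_initial_response(token)

def example_hook(params):
    # A real hook would run a browser or device flow against
    # params["issuer"] and return the resulting access token.
    return "example-token"
```

With this shape, the application only ever deals in tokens; the SASL framing stays inside the driver.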

It would be nice to have a simple default implementation, though, for
psql and all the other client applications that come with PostgreSQL itself.

I agree. I think having a bare-bones implementation in libpq itself
would make initial adoption *much* easier, and then if specific
applications wanted to have richer control over an authorization flow,
then they could implement that themselves with the aforementioned
callback.

The Device Authorization flow was the most minimal working
implementation I could find, since it doesn't require a web browser on
the system, just the ability to print a prompt to the console. But if
anyone knows of a better flow for this use case, I'm all ears.
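For reference, the Device Authorization flow (RFC 8628) boils down to a prompt plus a polling loop. A minimal Python sketch, with the two HTTP calls injected as helpers (field names follow the RFC; everything else is hypothetical):

```python
import time

def device_flow(request_device_code, poll_token, sleep=time.sleep):
    """RFC 8628 device flow: show the user a code, then poll the token
    endpoint until the user has authorized (or an error comes back)."""
    grant = request_device_code()
    print(f"Visit {grant['verification_uri']} and enter the code: "
          f"{grant['user_code']}")
    while True:
        result = poll_token(grant["device_code"])
        if "access_token" in result:
            return result["access_token"]
        if result.get("error") != "authorization_pending":
            raise RuntimeError(result.get("error", "token request failed"))
        sleep(grant.get("interval", 5))
```

The only console interaction is the printed prompt, which is why this flow fits a libpq default so well.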

If you've read this far, thank you for your interest, and I hope you
enjoy playing with it!

A few small things caught my eye in the backend oauth_exchange function:

+       /* Handle the client's initial message. */
+       p = strdup(input);

this strdup() should be pstrdup().

Thanks, I'll fix that in the next re-roll.

In the same function, there are a bunch of reports like this:

ereport(ERROR,
+                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                           errmsg("malformed OAUTHBEARER message"),
+                           errdetail("Comma expected, but found character \"%s\".",
+                                     sanitize_char(*p))));

I don't think the double quotes are needed here, because sanitize_char
will return quotes if it's a single character. So it would end up
looking like this: ... found character "'x'".

I'll fix this too. Thanks!

--Jacob

#5Jacob Champion
pchampion@vmware.com
In reply to: Michael Paquier (#2)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, 2021-06-18 at 13:07 +0900, Michael Paquier wrote:

On Tue, Jun 08, 2021 at 04:37:46PM +0000, Jacob Champion wrote:

1. Prep

0001 decouples the SASL code from the SCRAM implementation.
0002 makes it possible to use common/jsonapi from the frontend.
0003 lets the json_errdetail() result be freed, to avoid leaks.

The first three patches are, hopefully, generally useful outside of
this implementation, and I'll plan to register them in the next
commitfest. The middle two patches are the "interesting" pieces, and
I've split them into client and server for ease of understanding,
though neither is particularly useful without the other.

Beginning with the beginning, could you spawn two threads for the
jsonapi rework and the SASL/SCRAM business?

Done [1, 2]. I've copied your comments into those threads with my
responses, and I'll have them registered in commitfest shortly.

Thanks!
--Jacob

[1]: /messages/by-id/3d2a6f5d50e741117d6baf83eb67ebf1a8a35a11.camel@vmware.com
[2]: /messages/by-id/a250d475ba1c0cc0efb7dfec8e538fcc77cdcb8e.camel@vmware.com

#6Michael Paquier
michael@paquier.xyz
In reply to: Jacob Champion (#5)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jun 22, 2021 at 11:26:03PM +0000, Jacob Champion wrote:

Done [1, 2]. I've copied your comments into those threads with my
responses, and I'll have them registered in commitfest shortly.

Thanks!
--
Michael

#7Jacob Champion
pchampion@vmware.com
In reply to: Jacob Champion (#4)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:

On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:

A few small things caught my eye in the backend oauth_exchange function:

+       /* Handle the client's initial message. */
+       p = strdup(input);

this strdup() should be pstrdup().

Thanks, I'll fix that in the next re-roll.

In the same function, there are a bunch of reports like this:

ereport(ERROR,
+                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                           errmsg("malformed OAUTHBEARER message"),
+                           errdetail("Comma expected, but found character \"%s\".",
+                                     sanitize_char(*p))));

I don't think the double quotes are needed here, because sanitize_char
will return quotes if it's a single character. So it would end up
looking like this: ... found character "'x'".

I'll fix this too. Thanks!

v2, attached, incorporates Heikki's suggested fixes and also rebases on
top of latest HEAD, which had the SASL refactoring changes committed
last month.

The biggest change from the last patchset is 0001, an attempt at
enabling jsonapi in the frontend without the use of palloc(), based on
suggestions by Michael and Tom from last commitfest. I've also made
some improvements to the pytest suite. No major changes to the OAuth
implementation yet.
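
For illustration only, here is a minimal sketch of how a frontend caller might use the reworked jsonapi from 0001. It is based on the declarations added to src/include/common/jsonapi.h in that patch (initJsonLexContextCstringLen, termJsonLexContext, nullSemAction), assumes -DFRONTEND and linkage against libpgcommon, and `parse_or_report` is a hypothetical helper, not part of the patch:

```c
/*
 * Hypothetical frontend caller of the reworked jsonapi (v2-0001).
 * Not a standalone program: requires the patched PostgreSQL tree,
 * -DFRONTEND, and libpgcommon at link time.
 */
#include "postgres_fe.h"

#include "common/jsonapi.h"
#include "mb/pg_wchar.h"

static bool
parse_or_report(char *json, int len)
{
	JsonLexContext lex = {0};	/* stack-allocated; no palloc needed */
	JsonParseErrorType err;

	/* need_escapes = true so string values are de-escaped */
	initJsonLexContextCstringLen(&lex, json, len, PG_UTF8, true);

	err = pg_parse_json(&lex, &nullSemAction);
	if (err != JSON_SUCCESS)
	{
		/*
		 * json_errdetail()'s result is static or owned by lex, per the
		 * patch; the caller must not free it.
		 */
		fprintf(stderr, "JSON parse error: %s\n",
				json_errdetail(err, &lex));
	}

	/* Frees lex's internals (including any error message) but not lex. */
	termJsonLexContext(&lex);

	return (err == JSON_SUCCESS);
}
```

Note that JSON_OUT_OF_MEMORY is short-circuited inside json_errdetail(), so this error path is safe even when the lexer's buffers could not be allocated.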

--Jacob

Attachments:

v2-0001-common-jsonapi-support-FRONTEND-clients.patch (text/x-patch)
From 8c4b82940efb7e0f0f33ac915d5f7969a36e3644 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v2 1/5] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

For convenience, the backend now has destroyJsonLexContext() to mirror
other create/destroy APIs. The frontend has init/term versions of the
API to handle stack-allocated JsonLexContexts.

We can now partially revert b44669b2ca, now that json_errdetail() works
correctly.
---
 src/backend/utils/adt/jsonfuncs.c             |   4 +-
 src/bin/pg_verifybackup/parse_manifest.c      |  13 +-
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 290 +++++++++++++-----
 src/include/common/jsonapi.h                  |  47 ++-
 6 files changed, 270 insertions(+), 88 deletions(-)

diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 5fd54b64b5..fa39751188 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -723,9 +723,7 @@ json_object_keys(PG_FUNCTION_ARGS)
 		pg_parse_json_or_ereport(lex, sem);
 		/* keys are now in state->result */
 
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-		pfree(lex);
+		destroyJsonLexContext(lex);
 		pfree(sem);
 
 		MemoryContextSwitchTo(oldcontext);
diff --git a/src/bin/pg_verifybackup/parse_manifest.c b/src/bin/pg_verifybackup/parse_manifest.c
index c7ccc78c70..6cedb7435f 100644
--- a/src/bin/pg_verifybackup/parse_manifest.c
+++ b/src/bin/pg_verifybackup/parse_manifest.c
@@ -119,7 +119,7 @@ void
 json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 					size_t size)
 {
-	JsonLexContext *lex;
+	JsonLexContext lex = {0};
 	JsonParseErrorType json_error;
 	JsonSemAction sem;
 	JsonManifestParseState parse;
@@ -129,8 +129,8 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	parse.state = JM_EXPECT_TOPLEVEL_START;
 	parse.saw_version_field = false;
 
-	/* Create a JSON lexing context. */
-	lex = makeJsonLexContextCstringLen(buffer, size, PG_UTF8, true);
+	/* Initialize a JSON lexing context. */
+	initJsonLexContextCstringLen(&lex, buffer, size, PG_UTF8, true);
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
@@ -145,14 +145,17 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	sem.scalar = json_manifest_scalar;
 
 	/* Run the actual JSON parser. */
-	json_error = pg_parse_json(lex, &sem);
+	json_error = pg_parse_json(&lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, &lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
 	/* Verify the manifest checksum. */
 	verify_manifest_checksum(&parse, buffer, size);
+
+	/* Clean up. */
+	termJsonLexContext(&lex);
 }
 
 /*
diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index 4f5b8f5a49..9f8a100a71 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -16,7 +16,7 @@ my $tempdir = TestLib::tempdir;
 
 test_bad_manifest(
 	'input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/,
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
 	<<EOM);
 {
 EOM
diff --git a/src/common/Makefile b/src/common/Makefile
index 880722fcf5..5ecb09a8c4 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 # If you add objects here, see also src/tools/msvc/Mkvcbuild.pm
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 5504072b4f..3a9620f739 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -20,10 +20,39 @@
 #include "common/jsonapi.h"
 #include "mb/pg_wchar.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+
+#else /* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -132,10 +161,12 @@ IsValidJsonNumber(const char *str, int len)
 	return (!numeric_error) && (total_len == dummy_lex.input_length);
 }
 
+#ifndef FRONTEND
+
 /*
  * makeJsonLexContextCstringLen
  *
- * lex constructor, with or without StringInfo object for de-escaped lexemes.
+ * lex constructor, with or without a string object for de-escaped lexemes.
  *
  * Without is better as it makes the processing faster, so only make one
  * if really required.
@@ -145,13 +176,66 @@ makeJsonLexContextCstringLen(char *json, int len, int encoding, bool need_escape
 {
 	JsonLexContext *lex = palloc0(sizeof(JsonLexContext));
 
+	initJsonLexContextCstringLen(lex, json, len, encoding, need_escapes);
+
+	return lex;
+}
+
+void
+destroyJsonLexContext(JsonLexContext *lex)
+{
+	termJsonLexContext(lex);
+	pfree(lex);
+}
+
+#endif /* !FRONTEND */
+
+void
+initJsonLexContextCstringLen(JsonLexContext *lex, char *json, int len, int encoding, bool need_escapes)
+{
 	lex->input = lex->token_terminator = lex->line_start = json;
 	lex->line_number = 1;
 	lex->input_length = len;
 	lex->input_encoding = encoding;
-	if (need_escapes)
-		lex->strval = makeStringInfo();
-	return lex;
+	lex->parse_strval = need_escapes;
+	if (lex->parse_strval)
+	{
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to time
+		 * of use (json_lex_string()) since there's no way to signal failure
+		 * here, and we might not need to parse any strings anyway.
+		 */
+		lex->strval = createStrVal();
+	}
+	lex->errormsg = NULL;
+}
+
+void
+termJsonLexContext(JsonLexContext *lex)
+{
+	static const JsonLexContext empty = {0};
+
+	if (lex->strval)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->strval);
+#else
+		pfree(lex->strval->data);
+		pfree(lex->strval);
+#endif
+	}
+
+	if (lex->errormsg)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->errormsg);
+#else
+		pfree(lex->errormsg->data);
+		pfree(lex->errormsg);
+#endif
+	}
+
+	*lex = empty;
 }
 
 /*
@@ -217,7 +301,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;		/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -279,14 +363,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -320,8 +411,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -368,6 +463,10 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -676,8 +775,15 @@ json_lex_string(JsonLexContext *lex)
 	int			len;
 	int			hi_surrogate = -1;
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -737,7 +843,7 @@ json_lex_string(JsonLexContext *lex)
 						return JSON_UNICODE_ESCAPE_FORMAT;
 					}
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -797,19 +903,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						return JSON_UNICODE_HIGH_ESCAPE;
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					return JSON_UNICODE_LOW_SURROGATE;
@@ -819,22 +925,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 						/* Not a valid string escape, so signal error. */
@@ -858,12 +964,12 @@ json_lex_string(JsonLexContext *lex)
 			}
 
 		}
-		else if (lex->strval != NULL)
+		else if (lex->parse_strval)
 		{
 			if (hi_surrogate != -1)
 				return JSON_UNICODE_LOW_SURROGATE;
 
-			appendStringInfoChar(lex->strval, *s);
+			appendStrValChar(lex->strval, *s);
 		}
 
 	}
@@ -871,6 +977,11 @@ json_lex_string(JsonLexContext *lex)
 	if (hi_surrogate != -1)
 		return JSON_UNICODE_LOW_SURROGATE;
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1043,72 +1154,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct a detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safery pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int		toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1122,12 +1254,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			return _("Unicode low surrogate must follow a high surrogate.");
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover the
+		 * possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index ec3dfce9c3..dc71ab2cd3 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -55,6 +54,17 @@ typedef enum
 	JSON_UNICODE_LOW_SURROGATE
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -81,7 +91,9 @@ typedef struct JsonLexContext
 	int			lex_level;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef void (*json_struct_action) (void *state);
@@ -141,9 +153,10 @@ extern JsonSemAction nullSemAction;
  */
 extern JsonParseErrorType json_count_array_elements(JsonLexContext *lex,
 													int *elements);
+#ifndef FRONTEND
 
 /*
- * constructor for JsonLexContext, with or without strval element.
+ * allocating constructor for JsonLexContext, with or without strval element.
  * If supplied, the strval element will contain a de-escaped version of
  * the lexeme. However, doing this imposes a performance penalty, so
  * it should be avoided if the de-escaped lexeme is not required.
@@ -153,6 +166,32 @@ extern JsonLexContext *makeJsonLexContextCstringLen(char *json,
 													int encoding,
 													bool need_escapes);
 
+/*
+ * Counterpart to makeJsonLexContextCstringLen(): clears and deallocates lex.
+ * The context pointer should not be used after this call.
+ */
+extern void destroyJsonLexContext(JsonLexContext *lex);
+
+#endif /* !FRONTEND */
+
+/*
+ * stack constructor for JsonLexContext, with or without strval element.
+ * If supplied, the strval element will contain a de-escaped version of
+ * the lexeme. However, doing this imposes a performance penalty, so
+ * it should be avoided if the de-escaped lexeme is not required.
+ */
+extern void initJsonLexContextCstringLen(JsonLexContext *lex,
+										 char *json,
+										 int len,
+										 int encoding,
+										 bool need_escapes);
+
+/*
+ * Counterpart to initJsonLexContextCstringLen(): clears the contents of lex,
+ * but does not deallocate lex itself.
+ */
+extern void termJsonLexContext(JsonLexContext *lex);
+
 /* lex one token */
 extern JsonParseErrorType json_lex(JsonLexContext *lex);
 
-- 
2.25.1

v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From 52ac4bd25ca19735eb2bd863e8b1549ccbe6560a Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v2 2/5] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented.

The client implementation requires libiddawc and its development
headers. Configure --with-oauth (and --with-includes/--with-libraries to
point at the iddawc installation, if it's in a custom location).

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- ...and more.
---
 configure                            | 100 ++++
 configure.ac                         |  19 +
 src/Makefile.global.in               |   1 +
 src/include/common/oauth-common.h    |  19 +
 src/include/pg_config.h.in           |   6 +
 src/interfaces/libpq/Makefile        |   7 +-
 src/interfaces/libpq/fe-auth-oauth.c | 745 +++++++++++++++++++++++++++
 src/interfaces/libpq/fe-auth-sasl.h  |   5 +-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       |  42 +-
 src/interfaces/libpq/fe-auth.h       |   3 +
 src/interfaces/libpq/fe-connect.c    |  38 ++
 src/interfaces/libpq/libpq-int.h     |   8 +
 13 files changed, 980 insertions(+), 19 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c

diff --git a/configure b/configure
index 7542fe30a1..2ddbe9a1d9 100755
--- a/configure
+++ b/configure
@@ -713,6 +713,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -856,6 +857,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1562,6 +1564,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth            build with OAuth 2.0 support
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8144,6 +8147,42 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-oauth option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_oauth=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13084,6 +13123,56 @@ fi
 
 
 
+if test "$with_oauth" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-liddawc  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char i_init_session ();
+int
+main ()
+{
+return i_init_session ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_iddawc_i_init_session=yes
+else
+  ac_cv_lib_iddawc_i_init_session=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBIDDAWC 1
+_ACEOF
+
+  LIBS="-liddawc $LIBS"
+
+else
+  as_fn_error $? "library 'iddawc' is required for OAuth support" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13978,6 +14067,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" != no; then
+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
+if test "x$ac_cv_header_iddawc_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index ed3cdb9a8e..22026476d9 100644
--- a/configure.ac
+++ b/configure.ac
@@ -851,6 +851,17 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_BOOL(with, oauth, no,
+              [build with OAuth 2.0 support],
+              [AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])])
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1321,6 +1332,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = yes ; then
+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for OAuth support])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1531,6 +1546,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" != no; then
+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 6e2f224cc4..d67912711e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..3fa95ac7e8
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif /* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 15ffdd895a..f82ab38536 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -331,6 +331,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `iddawc' library (-liddawc). */
+#undef HAVE_LIBIDDAWC
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -926,6 +929,9 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 7cbdeb589b..3cdf19294b 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -62,6 +62,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_oauth),yes)
+OBJS += \
+	fe-auth-oauth.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -83,7 +88,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..91d2c69f16
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,745 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include <iddawc.h>
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static void oauth_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
+						   char **output, int *outputlen,
+						   bool *done, bool *success);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+} fe_oauth_state;
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
+
+	state = malloc(sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+static const char *
+iddawc_error_string(int errcode)
+{
+	switch (errcode)
+	{
+		case I_OK:
+			return "I_OK";
+
+		case I_ERROR:
+			return "I_ERROR";
+
+		case I_ERROR_PARAM:
+			return "I_ERROR_PARAM";
+
+		case I_ERROR_MEMORY:
+			return "I_ERROR_MEMORY";
+
+		case I_ERROR_UNAUTHORIZED:
+			return "I_ERROR_UNAUTHORIZED";
+
+		case I_ERROR_SERVER:
+			return "I_ERROR_SERVER";
+	}
+
+	return "<unknown>";
+}
+
+static void
+iddawc_error(PGconn *conn, int errcode, const char *msg)
+{
+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
+	appendPQExpBuffer(&conn->errorMessage,
+					  libpq_gettext(" (iddawc error %s)\n"),
+					  iddawc_error_string(errcode));
+}
+
+static void
+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
+{
+	const char *error_code;
+	const char *desc;
+
+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
+
+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
+	if (!error_code)
+	{
+		/*
+		 * The server didn't give us any useful information, so just print the
+		 * error code.
+		 */
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("(iddawc error %s)\n"),
+						  iddawc_error_string(err));
+		return;
+	}
+
+	/* If the server gave a string description, print that too. */
+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
+	if (desc)
+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
+
+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
+}
+
+static char *
+get_auth_token(PGconn *conn)
+{
+	PQExpBuffer	token_buf = NULL;
+	struct _i_session session;
+	int			err;
+	int			auth_method;
+	bool		user_prompted = false;
+	const char *verification_uri;
+	const char *user_code;
+	const char *access_token;
+	const char *token_type;
+	char	   *token = NULL;
+
+	if (!conn->oauth_discovery_uri)
+		return strdup(""); /* ask the server for one */
+
+	i_init_session(&session);
+
+	if (!conn->oauth_client_id)
+	{
+		/* We can't talk to a server without a client identifier. */
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("no oauth_client_id is set for the connection\n"));
+		goto cleanup;
+	}
+
+	token_buf = createPQExpBuffer();
+
+	if (!token_buf)
+		goto cleanup;
+
+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, conn->oauth_discovery_uri);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
+		goto cleanup;
+	}
+
+	err = i_get_openid_config(&session);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer has no token endpoint\n"));
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer does not support device authorization\n"));
+		goto cleanup;
+	}
+
+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set device code response type");
+		goto cleanup;
+	}
+
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+
+	err = i_set_parameter_list(&session,
+		I_OPT_CLIENT_ID, conn->oauth_client_id,
+		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+		I_OPT_TOKEN_METHOD, auth_method,
+		I_OPT_SCOPE, conn->oauth_scope,
+		I_OPT_NONE
+	);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set client identifier");
+		goto cleanup;
+	}
+
+	err = i_run_device_auth_request(&session);
+	if (err)
+	{
+		iddawc_request_error(conn, &session, err,
+							"failed to obtain device authorization");
+		goto cleanup;
+	}
+
+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
+	if (!verification_uri)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a verification URI\n"));
+		goto cleanup;
+	}
+
+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
+	if (!user_code)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a user code\n"));
+		goto cleanup;
+	}
+
+	/*
+	 * Poll the token endpoint until either the user logs in and authorizes the
+	 * use of a token, or a hard failure occurs. We perform one ping _before_
+	 * prompting the user, so that we don't make them do the work of logging in
+	 * only to find that the token endpoint is completely unreachable.
+	 */
+	err = i_run_token_request(&session);
+	while (err)
+	{
+		const char *error_code;
+		unsigned int	interval;
+
+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
+
+		/*
+		 * authorization_pending and slow_down are the only acceptable errors;
+		 * anything else and we bail.
+		 */
+		if (!error_code || (strcmp(error_code, "authorization_pending")
+							&& strcmp(error_code, "slow_down")))
+		{
+			iddawc_request_error(conn, &session, err,
+								"OAuth token retrieval failed");
+			goto cleanup;
+		}
+
+		if (!user_prompted)
+		{
+			/*
+			 * Now that we know the token endpoint isn't broken, give the user
+			 * the login instructions.
+			 */
+			pqInternalNotice(&conn->noticeHooks,
+							 "Visit %s and enter the code: %s",
+							 verification_uri, user_code);
+
+			user_prompted = true;
+		}
+
+		/*
+		 * We are required to wait between polls; the server tells us how long.
+		 * TODO: if interval's not set, we need to default to five seconds
+		 * TODO: sanity check the interval
+		 */
+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
+
+		/*
+		 * A slow_down error requires us to permanently increase our retry
+		 * interval by five seconds. RFC 8628, Sec. 3.5.
+		 */
+		if (!strcmp(error_code, "slow_down"))
+		{
+			interval += 5;
+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
+		}
+
+		sleep(interval);
+
+		/*
+		 * XXX Reset the error code before every call, because iddawc won't do
+		 * that for us. This matters if the server first sends a "pending" error
+		 * code, then later hard-fails without sending an error code to
+		 * overwrite the first one.
+		 *
+		 * That we have to do this at all seems like a bug in iddawc.
+		 */
+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
+
+		err = i_run_token_request(&session);
+	}
+
+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+
+	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a bearer token\n"));
+		goto cleanup;
+	}
+
+	appendPQExpBufferStr(token_buf, "Bearer ");
+	appendPQExpBufferStr(token_buf, access_token);
+
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	token = strdup(token_buf->data);
+
+cleanup:
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+	i_clean_session(&session);
+
+	return token;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn)
+{
+	static const char * const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBuffer	token_buf;
+	PQExpBuffer	discovery_buf = NULL;
+	char	   *token = NULL;
+	char	   *response = NULL;
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	if (!conn->oauth_discovery_uri && conn->oauth_issuer)
+	{
+		discovery_buf = createPQExpBuffer();
+		if (!discovery_buf)
+			goto cleanup;
+
+		appendPQExpBufferStr(discovery_buf, conn->oauth_issuer);
+		appendPQExpBufferStr(discovery_buf, "/.well-known/openid-configuration");
+
+		if (PQExpBufferBroken(discovery_buf))
+			goto cleanup;
+
+		conn->oauth_discovery_uri = strdup(discovery_buf->data);
+	}
+
+	token = get_auth_token(conn);
+	if (!token)
+		goto cleanup;
+
+	appendPQExpBuffer(token_buf, resp_format, token);
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	response = strdup(token_buf->data);
+
+cleanup:
+	if (token)
+		free(token);
+	if (discovery_buf)
+		destroyPQExpBuffer(discovery_buf);
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char		   *errmsg; /* any non-NULL value stops all processing */
+	PQExpBufferData errbuf; /* backing memory for errmsg */
+	int				nested; /* nesting level (zero is the top) */
+
+	const char	   *target_field_name; /* points to a static allocation */
+	char		  **target_field;      /* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char		   *status;
+	char		   *scope;
+	char		   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static void
+oauth_json_object_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+}
+
+static void
+oauth_json_object_end(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	--ctx->nested;
+}
+
+static void
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+	{
+		/* short-circuit */
+		free(name);
+		return;
+	}
+
+	if (ctx->nested == 1)
+	{
+		if (!strcmp(name, ERROR_STATUS_FIELD))
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (!strcmp(name, ERROR_SCOPE_FIELD))
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+}
+
+static void
+oauth_json_array_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+}
+
+static void
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+	{
+		/* short-circuit */
+		free(token);
+		return;
+	}
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return; /* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext		lex = {0};
+	JsonSemAction		sem = {0};
+	JsonParseErrorType	err;
+	struct json_ctx		ctx = {0};
+	char			   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NUL byte\n"));
+		return false;
+	}
+
+	initJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		errmsg = json_errdetail(err, &lex);
+	}
+	else if (PQExpBufferDataBroken(ctx.errbuf))
+	{
+		errmsg = libpq_gettext("out of memory");
+	}
+	else if (ctx.errmsg)
+	{
+		errmsg = ctx.errmsg;
+	}
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	termJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (!strcmp(ctx.status, "invalid_token"))
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen,
+			   bool *done, bool *success)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*done = false;
+	*success = false;
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn);
+			if (!*output)
+				goto error;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			break;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				*done = true;
+				*success = true;
+
+				break;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				goto error;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			*outputlen = strlen(*output); /* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			break;
+
+		case FE_OAUTH_SERVER_ERROR:
+			/*
+			 * After an error, the server should send an error response to fail
+			 * the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge which
+			 * isn't defined in the RFC, or completed the handshake successfully
+			 * after telling us it was going to fail. Neither is acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			goto error;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			goto error;
+	}
+
+	return;
+
+error:
+	*done = true;
+	*success = false;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 3d7ee576f2..0920102908 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -65,6 +65,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -92,7 +94,8 @@ typedef struct pg_fe_sasl_mech
 	 *			   Ignored if *done is false.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
+	void		(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen,
 							 bool *done, bool *success);
 
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 4337e89ce9..489cbeda50 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
+static void scram_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
 						   char **output, int *outputlen,
 						   bool *done, bool *success);
 static bool scram_channel_bound(void *opaq);
@@ -205,7 +206,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static void
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen,
 			   bool *done, bool *success)
 {
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 3421ed4685..0b5b91962a 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -423,7 +424,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -445,8 +446,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support.  Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -486,6 +486,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -523,7 +524,17 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				!selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -548,18 +559,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
@@ -577,7 +589,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, false,
 						 NULL, -1,
 						 &initialresponse, &initialresponselen,
 						 &done, &success);
@@ -658,7 +670,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, final,
 						 challenge, payloadlen,
 						 &output, &outputlen,
 						 &done, &success);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 63927480ee..03bea124a6 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -26,4 +26,7 @@ extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 extern const pg_fe_sasl_mech pg_scram_mech;
 extern char *pg_fe_scram_build_secret(const char *password);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 49eec3e835..ba9c097060 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -344,6 +344,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Target-Session-Attrs", "", 15, /* sizeof("prefer-standby") = 15 */
 	offsetof(struct pg_conn, target_session_attrs)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -606,6 +623,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -3355,6 +3373,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -4129,6 +4157,16 @@ freePGconn(PGconn *conn)
 		free(conn->rowBuf);
 	if (conn->target_session_attrs)
 		free(conn->target_session_attrs);
+	if (conn->oauth_issuer)
+		free(conn->oauth_issuer);
+	if (conn->oauth_discovery_uri)
+		free(conn->oauth_discovery_uri);
+	if (conn->oauth_client_id)
+		free(conn->oauth_client_id);
+	if (conn->oauth_client_secret)
+		free(conn->oauth_client_secret);
+	if (conn->oauth_scope)
+		free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 490458adef..3d20482550 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,14 @@ struct pg_conn
 	char	   *ssl_max_protocol_version;	/* maximum TLS protocol version */
 	char	   *target_session_attrs;	/* desired session properties */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;			/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery document */
+	char	   *oauth_client_id;		/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;			/* access token scope */
+	bool		oauth_want_retry;		/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
-- 
2.25.1
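For reference, the wire format that client_initial_response() above assembles is the OAUTHBEARER initial client response from RFC 7628, Sec. 3.1: a GS2 header followed by 0x01-delimited key/value pairs. A small sketch of that framing (the token value is a placeholder, not a real credential):

```python
# Sketch of the OAUTHBEARER initial client response built by
# client_initial_response() in fe-auth-oauth.c (RFC 7628, Sec. 3.1).
KVSEP = "\x01"

def initial_response(bearer_token: str) -> str:
    # "n,," is the GS2 header (no channel binding, no authzid);
    # each key/value pair ends with 0x01, and an empty pair
    # terminates the message.
    return "n,," + KVSEP + "auth=" + bearer_token + KVSEP + KVSEP

# e.g. initial_response("Bearer <access-token>")
```

The server's error challenge is answered later with a lone 0x01 byte, which is the dummy response the state machine sends in FE_OAUTH_BEARER_SENT.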

Attachment: v2-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From d09a00b4d52a5ed578ee5cd7623108ebdd12f202 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v2 3/5] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On a success, the command may then exit with a zero success code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
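As a concrete (and entirely hypothetical) illustration of steps 1 through 3a, a minimal validator command might be sketched as follows; is_token_valid() and lookup_identity() are stand-ins for the issuer-specific logic that this commit deliberately leaves to the operator:

```python
import os

def is_token_valid(token):
    # Placeholder: a real validator must verify the token's signature or
    # introspect it with the issuer before trusting it.
    return bool(token)

def lookup_identity(token):
    # Placeholder: a real validator derives the user ID from token
    # claims or a userinfo endpoint.
    return "alice" if token else None

def run_validator(token_fd):
    # Step 1: read the bearer token from the inherited descriptor FIRST,
    # before writing anything, to avoid deadlocking the backend.
    with os.fdopen(token_fd) as f:
        token = f.read()

    # Step 2: refuse tokens that can't be validated.
    if not is_token_valid(token):
        return 1

    # Step 3a: print the authenticated identity to stdout,
    # newline-terminated, then exit zero.
    identity = lookup_identity(token)
    if identity is None:
        return 1
    print(identity)
    return 0

# The server would invoke this with the descriptor number substituted
# via %f, e.g.: oauth_validator_command = '/path/to/validator %f'
```

The exit code and stdout contents are the whole contract; stderr may be used freely for logging, per step 4.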

The oauth method supports the following HBA options (note that the
first two, issuer and scope, are required, since we have no way of
choosing sensible defaults for them):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
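
Putting the options together, pg_hba.conf lines might look like the
following (issuer, scope, and map name are illustrative only):

```
# Authentication via a standard user map:
host  all  all  samehost  oauth  issuer="https://accounts.google.com" scope="openid email" map=oauthmap

# Validator-driven authorization, skipping the user map entirely:
host  all  all  samehost  oauth  issuer="https://accounts.google.com" scope="openid email" trust_validator_authz=1
```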

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.
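
For reference while reading the parser below: the OAUTHBEARER
client-first message is a gs2 header followed by \x01-separated
key/value pairs (RFC 7628 Sec. 3.1). A rough sketch of building one and
picking the auth value back out, with a placeholder token:

```shell
# Build the message shape that oauth_exchange() parses: gs2 header
# "n,,", then kvpairs each terminated by the \x01 kvsep, with an empty
# kvpair marking the end of the list.
kvsep=$(printf '\001')
msg="n,,${kvsep}auth=Bearer abc123${kvsep}${kvsep}"

# Extract the auth value the way a test harness might:
auth=$(printf '%s' "$msg" | tr '\001' '\n' | sed -n 's/^auth=//p')
echo "$auth"
```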
---
 src/backend/libpq/Makefile     |   1 +
 src/backend/libpq/auth-oauth.c | 797 +++++++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c  |  10 +-
 src/backend/libpq/auth-scram.c |   4 +-
 src/backend/libpq/auth.c       |  26 +-
 src/backend/libpq/hba.c        |  29 +-
 src/backend/utils/misc/guc.c   |  12 +
 src/include/libpq/auth.h       |  17 +
 src/include/libpq/hba.h        |   8 +-
 src/include/libpq/oauth.h      |  24 +
 src/include/libpq/sasl.h       |  11 +
 11 files changed, 907 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..c47211132c
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,797 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char *oauth_validator_command;
+
+static void  oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int   oauth_exchange(void *opaq, const char *input, int inputlen,
+							char **output, int *outputlen, char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state	state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool unset_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, char **logdetail)
+{
+	char   *p;
+	char	cbind_flag;
+	char   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a 'y'
+	 * specifier purely for the remote chance that a future specification could
+	 * define one; then future clients can still interoperate with this server
+	 * implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y': /* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character %s.",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char   *pos = *input;
+	char   *auth = NULL;
+
+	/*
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char   *end;
+		char   *sep;
+		char   *key;
+		char   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per
+			 * Sec. 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL; /* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData	buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's not
+	 * really a way to hide this from the user, either, because we can't choose
+	 * a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+		"{ "
+			"\"status\": \"invalid_token\", "
+			"\"openid-configuration\": \"%s/.well-known/openid-configuration\","
+			"\"scope\": \"%s\" "
+		"}",
+		ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, char **logdetail)
+{
+	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but it's
+	 * pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information about
+	 * the sensitive Bearer token back to the client; log at COMMERROR instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end with
+	 * any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the problematic
+		 * character(s), but that'd be a bit like printing a piece of someone's
+		 * password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator says
+		 * the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!port->authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name, port->authn_id,
+						false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = { 0 };
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*
+	 * Since popen() is unidirectional, open up a pipe for the other direction.
+	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
+	 * into child processes, which would prevent us from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open the potential of process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe2(pipefd, O_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	/* Allow the read pipe to be passed to the child. */
+	if (!unset_cloexec(rfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+					/*
+					 * TODO: decide how this string should be escaped. The role
+					 * is controlled by the client, so if we don't escape it,
+					 * command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some other
+					 * way. For this proof of concept, just be incredibly strict
+					 * about the characters that are allowed in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "re");
+	/* TODO: handle failures */
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+unset_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not unset FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-_./:";
+	size_t	span;
+
+	Assert(username && username[0]); /* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 6cfd90fa21..f6c49a4de5 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 9df8f17837..5bb0388c01 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -117,7 +117,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 8cc23ef7fb..fbcc2c55b4 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -47,7 +48,6 @@
  */
 static void auth_failed(Port *port, int status, char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -205,22 +205,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -309,6 +293,9 @@ auth_failed(Port *port, int status, char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -343,7 +330,7 @@ auth_failed(Port *port, int status, char *logdetail)
  * lifetime of the Port, so it is safe to pass a string that is managed by an
  * external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -628,6 +615,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 3be8778d21..98147700dd 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -134,7 +134,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 
@@ -1399,6 +1400,8 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -1713,8 +1716,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
+			hbaline->auth_method != uaOAuth &&
 			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, oauth, and cert"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2098,6 +2102,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 467b0fd6fe..2b42862f71 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -56,6 +56,7 @@
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
+#include "libpq/oauth.h"
 #include "miscadmin.h"
 #include "optimizer/cost.h"
 #include "optimizer/geqo.h"
@@ -4594,6 +4595,17 @@ static struct config_string ConfigureNamesString[] =
 		check_backtrace_functions, assign_backtrace_functions, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 3d6734f253..1c77dcb0c1 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern char *pg_krb_server_keyfile;
 extern bool pg_krb_caseins_users;
 extern char *pg_krb_realm;
@@ -23,6 +39,7 @@ extern char *pg_krb_realm;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8d9f3821b1..441dd5623e 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -38,8 +38,9 @@ typedef enum UserAuth
 	uaLDAP,
 	uaCert,
 	uaRADIUS,
-	uaPeer
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaPeer,
+	uaOAuth
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -120,6 +121,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..870e426af1
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif /* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 4c611bab6b..c0a88430d5 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but this limit leaves plenty of headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
-- 
2.25.1

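Not part of the patch, but for readers following along with the SASL message limits above: the OAUTHBEARER client-first message they apply to has a simple wire layout (RFC 7628, Section 3.1) — a GS2 header, then \x01-separated key/value pairs, terminated by a double \x01. A minimal sketch in Python; `build_client_first` is an illustrative name, not anything in the patch:

```python
# Sketch of the OAUTHBEARER initial client response per RFC 7628:
# a GS2 header ("n,," = no channel binding, no authzid), followed by
# \x01-separated key/value pairs, ending with a double \x01.
def build_client_first(token):
    """Builds the SASL initial response for a given bearer token."""
    gs2_header = b"n,,"
    kvpairs = b"auth=Bearer " + token.encode("ascii") + b"\x01"
    return gs2_header + b"\x01" + kvpairs + b"\x01"

msg = build_client_first("sometoken")
print(msg)  # b'n,,\x01auth=Bearer sometoken\x01\x01'
```

Splitting such a message on b"\x01" yields four parts — the GS2 header, the auth kvpair, and two empty strings from the trailing separators — which is the shape the test suite's parser checks for.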
v2-0004-Add-a-very-simple-authn_id-extension.patchtext/x-patch; name=v2-0004-Add-a-very-simple-authn_id-extension.patchDownload
From 7c4175f9ad87141d40dd44d6c9fe9312ce8e5b88 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 18 May 2021 15:01:29 -0700
Subject: [PATCH v2 4/5] Add a very simple authn_id extension

...for retrieving the authn_id from the server in tests.
---
 contrib/authn_id/Makefile          | 19 +++++++++++++++++++
 contrib/authn_id/authn_id--1.0.sql |  8 ++++++++
 contrib/authn_id/authn_id.c        | 28 ++++++++++++++++++++++++++++
 contrib/authn_id/authn_id.control  |  5 +++++
 4 files changed, 60 insertions(+)
 create mode 100644 contrib/authn_id/Makefile
 create mode 100644 contrib/authn_id/authn_id--1.0.sql
 create mode 100644 contrib/authn_id/authn_id.c
 create mode 100644 contrib/authn_id/authn_id.control

diff --git a/contrib/authn_id/Makefile b/contrib/authn_id/Makefile
new file mode 100644
index 0000000000..46026358e0
--- /dev/null
+++ b/contrib/authn_id/Makefile
@@ -0,0 +1,19 @@
+# contrib/authn_id/Makefile
+
+MODULE_big = authn_id
+OBJS = authn_id.o
+
+EXTENSION = authn_id
+DATA = authn_id--1.0.sql
+PGFILEDESC = "authn_id - information about the authenticated user"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/authn_id
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/authn_id/authn_id--1.0.sql b/contrib/authn_id/authn_id--1.0.sql
new file mode 100644
index 0000000000..af2a4d3991
--- /dev/null
+++ b/contrib/authn_id/authn_id--1.0.sql
@@ -0,0 +1,8 @@
+/* contrib/authn_id/authn_id--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION authn_id" to load this file. \quit
+
+CREATE FUNCTION authn_id() RETURNS text
+AS 'MODULE_PATHNAME', 'authn_id'
+LANGUAGE C IMMUTABLE;
diff --git a/contrib/authn_id/authn_id.c b/contrib/authn_id/authn_id.c
new file mode 100644
index 0000000000..0fecac36a8
--- /dev/null
+++ b/contrib/authn_id/authn_id.c
@@ -0,0 +1,28 @@
+/*
+ * Extension to expose the current user's authn_id.
+ *
+ * contrib/authn_id/authn_id.c
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/libpq-be.h"
+#include "miscadmin.h"
+#include "utils/builtins.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(authn_id);
+
+/*
+ * Returns the current user's authenticated identity.
+ */
+Datum
+authn_id(PG_FUNCTION_ARGS)
+{
+	if (!MyProcPort->authn_id)
+		PG_RETURN_NULL();
+
+	PG_RETURN_TEXT_P(cstring_to_text(MyProcPort->authn_id));
+}
diff --git a/contrib/authn_id/authn_id.control b/contrib/authn_id/authn_id.control
new file mode 100644
index 0000000000..e0f9e06bed
--- /dev/null
+++ b/contrib/authn_id/authn_id.control
@@ -0,0 +1,5 @@
+# authn_id extension
+comment = 'current user identity'
+default_version = '1.0'
+module_pathname = '$libdir/authn_id'
+relocatable = true
-- 
2.25.1

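Not part of the patch, but as a usage sketch: once the extension above is built and installed server-side (e.g. via PGXS), a test session would exercise it roughly like so (hypothetical session; the result depends on how the connection was authenticated):

```sql
-- Load the extension (requires it to be installed on the server).
CREATE EXTENSION authn_id;

-- Returns the authenticated identity recorded for this connection,
-- or NULL if no authenticated identity was set (e.g. trust auth).
SELECT authn_id();
```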
v2-0005-Add-pytest-suite-for-OAuth.patchtext/x-patch; name=v2-0005-Add-pytest-suite-for-OAuth.patchDownload
From 0281635f35a44e0fdfd4369423f98ebe5b467ce3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v2 5/5] Add pytest suite for OAuth

Requires Python 3; on the first run of `make installcheck` the
dependencies will be installed into ./venv for you. See the README for
more details.
---
 src/test/python/.gitignore                 |    2 +
 src/test/python/Makefile                   |   38 +
 src/test/python/README                     |   54 ++
 src/test/python/client/__init__.py         |    0
 src/test/python/client/conftest.py         |  126 +++
 src/test/python/client/test_client.py      |  180 ++++
 src/test/python/client/test_oauth.py       |  936 ++++++++++++++++++
 src/test/python/pq3.py                     |  727 ++++++++++++++
 src/test/python/pytest.ini                 |    4 +
 src/test/python/requirements.txt           |    7 +
 src/test/python/server/__init__.py         |    0
 src/test/python/server/conftest.py         |   45 +
 src/test/python/server/test_oauth.py       | 1012 ++++++++++++++++++++
 src/test/python/server/test_server.py      |   21 +
 src/test/python/server/validate_bearer.py  |  101 ++
 src/test/python/server/validate_reflect.py |   34 +
 src/test/python/test_internals.py          |  138 +++
 src/test/python/test_pq3.py                |  558 +++++++++++
 src/test/python/tls.py                     |  195 ++++
 19 files changed, 4178 insertions(+)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100755 src/test/python/server/validate_bearer.py
 create mode 100755 src/test/python/server/validate_reflect.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py

diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..0bda582c4b
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,54 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..f38da7a138
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,126 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+    client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..a754a9c0b6
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,936 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import http.server
+import json
+import secrets
+import sys
+import threading
+import time
+import urllib.parse
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial SASL
+    response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            self.server.serve_forever()
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OpenID provider thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+            self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _discovery_handler(self, headers, params):
+            oauth = self.server.oauth
+
+            doc = {
+                "issuer": oauth.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+            }
+
+            for name, path in oauth.endpoint_paths.items():
+                doc[name] = oauth.issuer + path
+
+            return 200, doc
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            code, resp = handler(self.headers, params)
+
+            self.send_response(code)
+            self.send_header("Content-Type", "application/json")
+            self.end_headers()
+
+            resp = json.dumps(resp)
+            resp = resp.encode("utf-8")
+            self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            if self.path == "/.well-known/openid-configuration":
+                self._handle(handler=self._discovery_handler)
+                return
+
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_with_explicit_issuer(
+    capfd, accept, openid_provider, retries, scope, secret
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user with the expected
+        # authorization URL and user code.
+        expected = f"Visit {verification_url} and enter the code: {user_code}"
+        _, stderr = capfd.readouterr()
+        assert expected in stderr
+
+
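The mock endpoints above drive the client through the RFC 8628 device authorization grant. As a rough, self-contained sketch of the client-side behavior being tested (`poll_token` is a hypothetical stand-in for the real HTTP POST to the token endpoint; it is not part of this patch):

```python
import time

def poll_for_token(poll_token, interval):
    """Poll an RFC 8628 token endpoint until the user approves or we fail."""
    while True:
        status, resp = poll_token()
        if status == 200:
            return resp["access_token"]
        err = resp.get("error")
        if err == "authorization_pending":
            pass              # user hasn't finished logging in; wait and retry
        elif err == "slow_down":
            interval += 5     # RFC 8628 requires adding five seconds
        else:
            raise RuntimeError(err or "malformed token response")
        time.sleep(interval)

# Two pending responses followed by success, as in the retry tests above.
responses = iter([
    (400, {"error": "authorization_pending"}),
    (400, {"error": "authorization_pending"}),
    (200, {"access_token": "tok", "token_type": "bearer"}),
])
assert poll_for_token(lambda: next(responses), 0) == "tok"
```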
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to increase its polling
+                # interval by five seconds (RFC 8628, Sec. 3.5).
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "invalid_client",
+                "error_description": "client authentication failed",
+            },
+            r"client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            {"error": "invalid_request"},
+            r"\(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            {},
+            r"failed to obtain device authorization",
+            id="broken error response",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "expired_token",
+                "error_description": "the device code has expired",
+            },
+            r"the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            {"error": "access_denied"},
+            r"\(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            {},
+            r"OAuth token retrieval failed",
+            id="broken error response",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and consume one retry.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC 7628, Sec. 3.2.3, the client is required to send a dummy
+            # ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
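For reference, the auth values asserted in these tests follow the OAUTHBEARER wire format from RFC 7628. A standalone sketch of the two client messages involved (the helper name is ours for illustration, not part of the patch):

```python
def oauthbearer_initial_response(token):
    # RFC 7628, Sec. 3.1: gs2 header ("n,,"), then ^A-separated key/value
    # pairs, closed by a double ^A.
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"

assert oauthbearer_initial_response("tok") == b"n,,\x01auth=Bearer tok\x01\x01"

# After the server's JSON error challenge, the client must answer with a
# single ^A -- the dummy response these tests check for.
DUMMY_CLIENT_RESPONSE = b"\x01"
```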
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, Sec. 3.2.3, the client is required to send a dummy
+            # ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..3a22dad0b6
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,727 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
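protocol() packs the major version into the high 16 bits and the minor into the low 16 bits, which is also how the protocol's special request codes are constructed. A quick check (the magic constants come from the PostgreSQL frontend/backend protocol documentation, not from this patch):

```python
def protocol(major, minor):
    # Same packing as pq3.protocol(): major in the high half, minor low.
    return (major << 16) | minor

assert protocol(3, 0) == 0x00030000      # the v3.0 startup version
assert protocol(1234, 5679) == 80877103  # SSLRequest's magic code
assert protocol(1234, 5678) == 80877102  # CancelRequest's magic code
```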
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        strings = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            strings.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            strings.append(v)
+
+        strings.append(b"")
+        return strings
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
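Concretely, this flattening plus NUL termination produces the familiar v3 startup parameter block. A standalone sketch of the resulting wire bytes (`encode_kv` is an illustrative helper mirroring the adapter's behavior, not part of the module):

```python
def encode_kv(params):
    # Flatten {key: value} into alternating strings plus a terminating empty
    # string, then NUL-terminate each one, as KeyValues does on the wire.
    strings = []
    for k, v in params.items():
        strings += [k.encode("utf-8"), v.encode("utf-8")]
    strings.append(b"")
    return b"".join(s + b"\x00" for s in strings)

wire = encode_kv({"user": "alice", "database": "postgres"})
assert wire == b"user\x00alice\x00database\x00postgres\x00\x00"
```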
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length by not enforcing
+        # a FixedSized during build. (The len calculation above defaults to
+        # the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds the translation table used for hexdumps: any unprintable or
+    non-ASCII byte becomes '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
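For readers following the TLS negotiation above: the `send_startup(stream, proto=protocol(1234, 5679))` call sends an ordinary startup packet whose "protocol version" is the SSLRequest magic code. A minimal sketch of the wire bytes (big-endian length, then code), independent of the Construct definitions in this patch:

```python
import struct

# SSLRequest is a startup packet with no parameters: a 4-byte big-endian
# length (8, counting itself) followed by the magic protocol code
# 1234 << 16 | 5679 == 80877103.
SSLREQUEST_CODE = (1234 << 16) | 5679
packet = struct.pack("!II", 8, SSLREQUEST_CODE)

assert SSLREQUEST_CODE == 80877103
assert packet == b"\x00\x00\x00\x08\x04\xd2\x16\x2f"
# The server then answers with a single byte: b"S" to proceed with the TLS
# handshake, or b"N" to refuse, which is exactly what tls_handshake() checks.
```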
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..32f105ea84
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,7 @@
+black
+cryptography~=3.4.6
+construct~=2.10.61
+isort~=5.6
+psycopg2~=2.8.6
+pytest~=6.1
+pytest-asyncio~=0.14.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..ba7342a453
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,45 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+
+import pytest
+
+import pq3
+
+
+@pytest.fixture
+def connect():
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. The calling test will be
+    skipped automatically if a server is not running at PGHOST:PGPORT, so it's
+    best to connect as soon as possible after the test case begins, to avoid
+    doing unnecessary work.
+    """
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            addr = (pq3.pghost(), pq3.pgport())
+
+            try:
+                sock = socket.create_connection(addr, timeout=2)
+            except ConnectionError as e:
+                pytest.skip(f"unable to connect to {addr}: {e}")
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..cb5ca7fa23
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1012 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_TOKEN_SIZE = 4096
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def skip_if_no_postgres():
+    """
+    Used by the oauth_ctx fixture to skip this test module if no Postgres server
+    is running.
+
+    This logic is nearly duplicated with the conn fixture. Ideally oauth_ctx
+    would depend on that, but a module-scope fixture can't depend on a
+    test-scope fixture, and we haven't reached the rule of three yet.
+    """
+    addr = (pq3.pghost(), pq3.pgport())
+
+    try:
+        with socket.create_connection(addr, timeout=2):
+            pass
+    except ConnectionError as e:
+        pytest.skip(f"unable to connect to {addr}: {e}")
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
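A quick illustration of the prepend_file contract, using a throwaway temp file (the file name and contents here are placeholders; the helper is restated so the demo is self-contained):

```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def prepend_file(path, lines):
    # Same contract as the fixture helper above: back up the file, write the
    # new lines followed by the original content, and restore on exit.
    bak = path + ".bak"
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "pg_hba.conf")  # placeholder name
    with open(path, "w") as f:
        f.write("host all all samehost trust\n")

    with prepend_file(path, ["# temporary test entry\n"]):
        with open(path) as f:
            prepended = f.read()

    with open(path) as f:
        restored = f.read()

assert prepended == "# temporary test entry\nhost all all samehost trust\n"
assert restored == "host all all samehost trust\n"
```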
+
+@pytest.fixture(scope="module")
+def oauth_ctx():
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    skip_if_no_postgres()  # don't bother running these tests without a server
+
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = (
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    )
+    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
+
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Make this test script the server's oauth_validator.
+        path = pathlib.Path(__file__).parent / "validate_bearer.py"
+        path = str(path.absolute())
+
+        cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def authn_id_extension(oauth_ctx):
+    """
+    Performs a `CREATE EXTENSION authn_id` in the test database. This fixture is
+    autoused, so tests don't need to request it explicitly.
+    """
+    conn = psycopg2.connect(database=oauth_ctx.dbname)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        c.execute("CREATE EXTENSION authn_id;")
+
+
+@pytest.fixture(scope="session")
+def shared_mem():
+    """
+    Yields a shared memory segment that can be used for communication between
+    the bearer_token fixture and ./validate_bearer.py.
+    """
+    size = MAX_TOKEN_SIZE + 2  # two byte length prefix
+    mem = shared_memory.SharedMemory(SHARED_MEM_NAME, create=True, size=size)
+
+    try:
+        with contextlib.closing(mem):
+            yield mem
+    finally:
+        mem.unlink()
+
+
+@pytest.fixture()
+def bearer_token(shared_mem):
+    """
+    Returns a factory function that, when called, will store a Bearer token in
+    shared_mem. If token is None (the default), a new token will be generated
+    using secrets.token_urlsafe() and returned; otherwise the passed token will
+    be used as-is.
+
+    When token is None, the generated token size in bytes may be specified as an
+    argument; if unset, a small 16-byte token will be generated. The token size
+    may not exceed MAX_TOKEN_SIZE in any case.
+
+    The return value is the token, converted to a bytes object.
+
+    As a special case for testing failure modes, accept_any may be set to True.
+    This signals to the validator command that any bearer token should be
+    accepted. The returned token in this case may be used or discarded as needed
+    by the test.
+    """
+
+    def set_token(token=None, *, size=16, accept_any=False):
+        if token is not None:
+            size = len(token)
+
+        if size > MAX_TOKEN_SIZE:
+            raise ValueError(f"token size {size} exceeds maximum size {MAX_TOKEN_SIZE}")
+
+        if token is None:
+            if size % 4:
+                raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+            token = secrets.token_urlsafe(size // 4 * 3)
+            assert len(token) == size
+
+        try:
+            token = token.encode("ascii")
+        except AttributeError:
+            pass  # already encoded
+
+        if accept_any:
+            # Two-byte magic value.
+            shared_mem.buf[:2] = struct.pack("H", MAX_UINT16)
+        else:
+            # Two-byte length prefix, then the token data.
+            shared_mem.buf[:2] = struct.pack("H", len(token))
+            shared_mem.buf[2 : size + 2] = token
+
+        return token
+
+    return set_token
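The fixture and validate_bearer.py share a tiny wire format: a native-endian two-byte length prefix (with 0xFFFF reserved as the "accept any token" magic) followed by the token bytes. A self-contained sketch of both sides, with a plain bytearray standing in for the SharedMemory buffer:

```python
import struct

MAX_UINT16 = 2 ** 16 - 1  # magic "accept any token" length


def store_token(buf, token, *, accept_any=False):
    # Writer side (the bearer_token fixture).
    if accept_any:
        buf[:2] = struct.pack("H", MAX_UINT16)
    else:
        buf[:2] = struct.pack("H", len(token))
        buf[2 : 2 + len(token)] = token


def load_token(buf):
    # Reader side (the validator). Returns None for the accept-any magic.
    (length,) = struct.unpack("H", bytes(buf[:2]))
    if length == MAX_UINT16:
        return None
    return bytes(buf[2 : 2 + length])


buf = bytearray(4096 + 2)  # MAX_TOKEN_SIZE plus the length prefix

store_token(buf, b"abc123")
assert load_token(buf) == b"abc123"

store_token(buf, b"ignored", accept_any=True)
assert load_token(buf) is None
```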
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
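For reference, the SASL payload built here follows the RFC 7628 initial client response layout: a gs2 header (`n,,` meaning no channel binding and no authzid), then ^A-separated key/value pairs, terminated by a double ^A. A minimal sketch of just the message framing:

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator (^A)


def oauthbearer_initial_response(token):
    # gs2 header "n,," (no channel binding, empty authzid), one kvsep, the
    # auth key/value pair, then the terminating double kvsep.
    return b"n,," + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP


msg = oauthbearer_initial_response(b"abc123")
assert msg == b"n,,\x01auth=Bearer abc123\x01\x01"
```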
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
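The error "challenge" validated above is the RFC 7628 server response: a JSON document carrying the failure status plus discovery hints. A sketch of the shape these assertions expect, with placeholder issuer and scope values:

```python
import json

# Placeholder values; the real tests take these from oauth_ctx.
issuer = "https://example.com/abcd1234"
scope = "openid abcd1234"

# What the server sends in its SASLContinue challenge on auth failure.
challenge = json.dumps(
    {
        "status": "invalid_token",
        "scope": scope,
        "openid-configuration": issuer + "/.well-known/openid-configuration",
    }
)

body = json.loads(challenge)
assert body["status"] == "invalid_token"
assert body["scope"] == scope
assert body["openid-configuration"].endswith("/.well-known/openid-configuration")
```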
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(conn, oauth_ctx, bearer_token, auth_prefix, token_len):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    auth = auth_prefix + token
+
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(conn, oauth_ctx, bearer_token, token_value):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=bearer_token(token_value))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(conn, oauth_ctx, bearer_token, user, authn_id, should_succeed):
+    token = None
+
+    authn_id = authn_id(oauth_ctx)
+    if authn_id is not None:
+        authn_id = authn_id.encode("ascii")
+
+        # As a hack to get the validator to reflect arbitrary output from this
+        # test, encode the desired output as a base64 token. The validator will
+        # key on the leading "output=" to differentiate this from the random
+        # tokens generated by secrets.token_urlsafe().
+        output = b"output=" + authn_id + b"\n"
+        token = base64.urlsafe_b64encode(output)
+
+    token = bearer_token(token)
+    username = user(oauth_ctx)
+
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token)
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [authn_id]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Fails an assertion unless exactly
+        one such field is found.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx, bearer_token):
+    # Generate a new bearer token, which we will proceed not to use.
+    _ = bearer_token()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer me@example.com",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(conn, oauth_ctx, bearer_token, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    _ = bearer_token(accept_any=True)
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + bearer_token() + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+@pytest.fixture()
+def set_validator():
+    """
+    A per-test fixture that allows a test to override the setting of
+    oauth_validator_command for the cluster. The setting will be reverted during
+    teardown.
+
+    Passing None will perform an ALTER SYSTEM RESET.
+    """
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Save the previous value.
+        c.execute("SHOW oauth_validator_command;")
+        prev_cmd = c.fetchone()[0]
+
+        def setter(cmd):
+            if cmd is None:
+                c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+            else:
+                c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous value.
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_oauth_no_validator(oauth_ctx, set_validator, connect, bearer_token):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+def test_oauth_validator_role(oauth_ctx, set_validator, connect):
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    # Log in. Note that the reflection validator ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=oauth_ctx.user)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = oauth_ctx.user.encode("utf-8")
+    assert row.columns == [expected]
+
+
+def test_oauth_role_with_shell_unsafe_characters(oauth_ctx, set_validator, connect):
+    """
+    XXX This test pins undesirable behavior. We should be able to handle any
+    valid Postgres role name.
+    """
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    unsafe_username = "hello'there"
+    begin_oauth_handshake(conn, oauth_ctx, user=unsafe_username)
+
+    # The server should reject the handshake.
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_failure(conn, oauth_ctx)
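
For context (not part of the patch): the malformed messages exercised by test_oauth_bad_initial_response above are all mutations of the well-formed OAUTHBEARER client-first message. A minimal sketch of the expected wire format per RFC 7628, where the helper name `build_client_first` is ours, not something the patch defines:

```python
# RFC 7628 client-first message: a GS2 header ("n,," when no channel binding
# and no authorization identity), then \x01-separated key=value pairs,
# terminated by a double \x01.
KVSEP = b"\x01"


def build_client_first(token: bytes, authzid: bytes = b"") -> bytes:
    """Build an OAUTHBEARER initial response for the given bearer token."""
    # "n" = client does not support channel binding; the optional "a=" field
    # carries an authorization identity (which the tests above expect the
    # server to reject as unsupported).
    gs2 = b"n," + (b"a=" + authzid if authzid else b"") + b","
    return gs2 + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP


assert build_client_first(b"abc123") == b"n,,\x01auth=Bearer abc123\x01\x01"
```

Each "malformed OAUTHBEARER message" case in the parametrize list above breaks exactly one of these pieces: the channel-binding flag, the GS2 commas, the initial key-value separator, the auth value, or the final double terminator.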
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/server/validate_bearer.py b/src/test/python/server/validate_bearer.py
new file mode 100755
index 0000000000..2cc73ff154
--- /dev/null
+++ b/src/test/python/server/validate_bearer.py
@@ -0,0 +1,101 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It doesn't actually validate
+# anything, and it logs the bearer token data, which is sensitive.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. The expected token is passed via a shared memory segment
+# created by that test module's bearer_token() fixture.
+#
+# This script must run under the Postgres server environment; keep its
+# dependencies to the Python standard library.
+
+import base64
+import binascii
+import contextlib
+import struct
+import sys
+from multiprocessing import shared_memory
+
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def remove_shm_from_resource_tracker():
+    """
+    Monkey-patch multiprocessing.resource_tracker so SharedMemory won't be
+    tracked. Pulled from this thread, where there are more details:
+
+        https://bugs.python.org/issue38119
+
+    TL;DR: all clients of shared memory segments automatically destroy them on
+    process exit, which makes shared memory segments much less useful. This
+    monkeypatch removes that behavior so that we can defer to the test to manage
+    the segment lifetime.
+
+    Ideally a future Python patch will pull in this fix and then the entire
+    function can go away.
+    """
+    from multiprocessing import resource_tracker
+
+    def fix_register(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.register(name, rtype)
+
+    resource_tracker.register = fix_register
+
+    def fix_unregister(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.unregister(name, rtype)
+
+    resource_tracker.unregister = fix_unregister
+
+    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
+        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
+
+
+def main(args):
+    remove_shm_from_resource_tracker()  # XXX remove some day
+
+    # Get the expected token from the currently running test.
+    shared_mem_name = args[0]
+
+    mem = shared_memory.SharedMemory(shared_mem_name)
+    with contextlib.closing(mem):
+        # First two bytes are the token length.
+        size = struct.unpack("H", mem.buf[:2])[0]
+
+        if size == MAX_UINT16:
+            # Special case: the test wants us to accept any token.
+            sys.stderr.write("accepting token without validation\n")
+            return
+
+        # The remainder of the buffer contains the expected token.
+        assert size <= (mem.size - 2)
+        expected_token = mem.buf[2 : size + 2].tobytes()
+
+        mem.buf[:] = b"\0" * mem.size  # scribble over the token
+
+    token = sys.stdin.buffer.read()
+    if token != expected_token:
+        sys.exit(f"failed to match Bearer token ({token!r} != {expected_token!r})")
+
+    # See if the test wants us to print anything. If so, it will have encoded
+    # the desired output in the token with an "output=" prefix.
+    try:
+        # altchars="-_" corresponds to the urlsafe alphabet.
+        data = base64.b64decode(token, altchars="-_", validate=True)
+
+        if data.startswith(b"output="):
+            sys.stdout.buffer.write(data[7:])
+
+    except binascii.Error:
+        pass
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
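
For context (not part of the patch): validate_bearer.py reads its expected token from a shared memory segment that the test's bearer_token() fixture is responsible for populating. A hedged sketch of that writer side, assuming only the layout implied by the reader above — a native-endian uint16 length prefix, followed by the token bytes, with a length of 0xFFFF meaning "accept any token" — where `publish_token` is our illustrative name:

```python
import struct
from multiprocessing import shared_memory

# Sentinel length: the validator accepts any token without comparison.
ACCEPT_ANY = 2**16 - 1


def publish_token(token: bytes, size: int = 512) -> shared_memory.SharedMemory:
    """Write a length-prefixed token into a fresh segment for the validator."""
    assert len(token) <= size - 2
    mem = shared_memory.SharedMemory(create=True, size=size)
    mem.buf[:2] = struct.pack("H", len(token))  # native-endian uint16 length
    mem.buf[2 : 2 + len(token)] = token
    return mem  # pass mem.name to the validator as its first argument
```

The caller owns the segment lifetime: it should close() and unlink() the returned object during teardown, which is exactly why the validator monkeypatches the resource tracker out of the way rather than letting Python unlink the segment when the subprocess exits.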
diff --git a/src/test/python/server/validate_reflect.py b/src/test/python/server/validate_reflect.py
new file mode 100755
index 0000000000..24c3a7e715
--- /dev/null
+++ b/src/test/python/server/validate_reflect.py
@@ -0,0 +1,34 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It ignores the bearer token
+# entirely and automatically logs the user in.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. It expects the user's desired role name as an argument; the
+# actual token will be discarded and the user will be logged in with the role
+# name as the authenticated identity.
+#
+# This script must run under the Postgres server environment; keep its
+# dependencies to the Python standard library.
+
+import sys
+
+
+def main(args):
+    # We have to read the entire token as our first action to unblock the
+    # server, but we won't actually use it.
+    _ = sys.stdin.buffer.read()
+
+    if len(args) != 1:
+        sys.exit("usage: ./validate_reflect.py ROLE")
+
+    # Log the user in as the provided role.
+    role = args[0]
+    print(role)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..e0c0e0568d
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,558 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        ("PGUSER", pq3.pguser, getpass.getuser()),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
-- 
2.25.1
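
For readers following along, the build cases in the test suite above all exercise the same v3 framing rule: a one-byte message type, then a four-byte big-endian length that counts itself and the payload but not the type byte. A minimal stdlib-only sketch of that rule (the frame() helper here is illustrative and not part of the patch or of pq3):

```python
import struct

def frame(msg_type: bytes, payload: bytes = b"") -> bytes:
    # The length field covers its own four bytes plus the payload,
    # but not the leading type byte.
    return msg_type + struct.pack("!i", 4 + len(payload)) + payload

# Matches the "implied len/type for Query" case above:
frame(b"Q", b"SELECT 1;\x00")  # b"Q\x00\x00\x00\x0eSELECT 1;\x00"

# Matches the "implied len for Terminate" case (no payload, length 4):
frame(b"X")  # b"X\x00\x00\x00\x04"
```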

#8Zhihong Yu
zyu@yugabyte.com
In reply to: Jacob Champion (#7)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Aug 25, 2021 at 11:42 AM Jacob Champion <pchampion@vmware.com>
wrote:

On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:

On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:

A few small things caught my eye in the backend oauth_exchange function:

+       /* Handle the client's initial message. */
+       p = strdup(input);

this strdup() should be pstrdup().

Thanks, I'll fix that in the next re-roll.

In the same function, there are a bunch of reports like this:

ereport(ERROR,
+                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                           errmsg("malformed OAUTHBEARER message"),
+                           errdetail("Comma expected, but found character \"%s\".",
+                                     sanitize_char(*p))));

I don't think the double quotes are needed here, because sanitize_char
will return quotes if it's a single character. So it would end up
looking like this: ... found character "'x'".

I'll fix this too. Thanks!

v2, attached, incorporates Heikki's suggested fixes and also rebases on
top of latest HEAD, which had the SASL refactoring changes committed
last month.

The biggest change from the last patchset is 0001, an attempt at
enabling jsonapi in the frontend without the use of palloc(), based on
suggestions by Michael and Tom from last commitfest. I've also made
some improvements to the pytest suite. No major changes to the OAuth
implementation yet.

--Jacob

Hi,
For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :

+ /* Clean up. */
+ termJsonLexContext(&lex);

At the end of termJsonLexContext(), empty is copied to lex. For stack
based JsonLexContext, the copy seems unnecessary.
Maybe introduce a boolean parameter for termJsonLexContext() to signal that
the copy can be omitted ?

+#ifdef FRONTEND
+       /* make sure initialization succeeded */
+       if (lex->strval == NULL)
+           return JSON_OUT_OF_MEMORY;

Should PQExpBufferBroken(lex->strval) be used for the check ?

Thanks

#9Zhihong Yu
zyu@yugabyte.com
In reply to: Zhihong Yu (#8)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Aug 25, 2021 at 3:25 PM Zhihong Yu <zyu@yugabyte.com> wrote:

On Wed, Aug 25, 2021 at 11:42 AM Jacob Champion <pchampion@vmware.com>
wrote:

On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:

On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:

A few small things caught my eye in the backend oauth_exchange function:

+       /* Handle the client's initial message. */
+       p = strdup(input);

this strdup() should be pstrdup().

Thanks, I'll fix that in the next re-roll.

In the same function, there are a bunch of reports like this:

ereport(ERROR,
+                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                           errmsg("malformed OAUTHBEARER message"),
+                           errdetail("Comma expected, but found character \"%s\".",
+                                     sanitize_char(*p))));

I don't think the double quotes are needed here, because sanitize_char
will return quotes if it's a single character. So it would end up
looking like this: ... found character "'x'".

I'll fix this too. Thanks!

v2, attached, incorporates Heikki's suggested fixes and also rebases on
top of latest HEAD, which had the SASL refactoring changes committed
last month.

The biggest change from the last patchset is 0001, an attempt at
enabling jsonapi in the frontend without the use of palloc(), based on
suggestions by Michael and Tom from last commitfest. I've also made
some improvements to the pytest suite. No major changes to the OAuth
implementation yet.

--Jacob

Hi,
For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :

+ /* Clean up. */
+ termJsonLexContext(&lex);

At the end of termJsonLexContext(), empty is copied to lex. For stack
based JsonLexContext, the copy seems unnecessary.
Maybe introduce a boolean parameter for termJsonLexContext() to signal
that the copy can be omitted ?

+#ifdef FRONTEND
+       /* make sure initialization succeeded */
+       if (lex->strval == NULL)
+           return JSON_OUT_OF_MEMORY;

Should PQExpBufferBroken(lex->strval) be used for the check ?

Thanks

Hi,
For v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch :

+   i_init_session(&session);
+
+   if (!conn->oauth_client_id)
+   {
+       /* We can't talk to a server without a client identifier. */
+       appendPQExpBufferStr(&conn->errorMessage,
+                            libpq_gettext("no oauth_client_id is set for the connection"));
+       goto cleanup;

Can conn->oauth_client_id check be performed ahead of i_init_session() ?
That way, ```goto cleanup``` can be replaced with return.

+       if (!error_code || (strcmp(error_code, "authorization_pending")
+                           && strcmp(error_code, "slow_down")))

What if, in the future, there is error code different from the above two
which doesn't represent "OAuth token retrieval failed" scenario ?

For client_initial_response(),

+   token_buf = createPQExpBuffer();
+   if (!token_buf)
+       goto cleanup;

If token_buf is NULL, there doesn't seem to be anything to free. We can
return directly.

Cheers

#10Jacob Champion
pchampion@vmware.com
In reply to: Zhihong Yu (#9)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, 2021-08-25 at 15:25 -0700, Zhihong Yu wrote:

Hi,
For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :

+ /* Clean up. */
+ termJsonLexContext(&lex);

At the end of termJsonLexContext(), empty is copied to lex. For stack
based JsonLexContext, the copy seems unnecessary.
Maybe introduce a boolean parameter for termJsonLexContext() to
signal that the copy can be omitted ?

Do you mean heap-based? i.e. destroyJsonLexContext() does an
unnecessary copy before free? Yeah, in that case it's not super useful,
but I think I'd want some evidence that the performance hit matters
before optimizing it.

Are there any other internal APIs that take a boolean parameter like
that? If not, I think we'd probably just want to remove the copy
entirely if it's a problem.

+#ifdef FRONTEND
+       /* make sure initialization succeeded */
+       if (lex->strval == NULL)
+           return JSON_OUT_OF_MEMORY;

Should PQExpBufferBroken(lex->strval) be used for the check ?

It should be okay to continue if the strval is broken but non-NULL,
since it's about to be reset. That has the fringe benefit of allowing
the function to go as far as possible without failing, though that's
probably a pretty weak justification.

In practice, do you think that the probability of success is low enough
that we should just short-circuit and be done with it?

On Wed, 2021-08-25 at 16:24 -0700, Zhihong Yu wrote:

For v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch :

+   i_init_session(&session);
+
+   if (!conn->oauth_client_id)
+   {
+       /* We can't talk to a server without a client identifier. */
+       appendPQExpBufferStr(&conn->errorMessage,
+                            libpq_gettext("no oauth_client_id is set for the connection"));
+       goto cleanup;

Can conn->oauth_client_id check be performed ahead
of i_init_session() ? That way, ```goto cleanup``` can be replaced
with return.

Yeah, I think that makes sense. FYI, this is probably one of the
functions that will be rewritten completely once iddawc is removed.

+       if (!error_code || (strcmp(error_code, "authorization_pending")
+                           && strcmp(error_code, "slow_down")))

What if, in the future, there is error code different from the above
two which doesn't represent "OAuth token retrieval failed" scenario ?

We'd have to update our code; that would be a breaking change to the
Device Authorization spec. Here's what it says today [1]:

The "authorization_pending" and "slow_down" error codes define
particularly unique behavior, as they indicate that the OAuth client
should continue to poll the token endpoint by repeating the token
request (implementing the precise behavior defined above). If the
client receives an error response with any other error code, it MUST
stop polling and SHOULD react accordingly, for example, by displaying
an error to the user.

For client_initial_response(),

+   token_buf = createPQExpBuffer();
+   if (!token_buf)
+       goto cleanup;

If token_buf is NULL, there doesn't seem to be anything to free. We
can return directly.

That's true today, but implementations have a habit of changing. I
personally prefer not to introduce too many exit points from a function
that's already using goto. In my experience, that makes future
maintenance harder.

Thanks for the reviews! Have you been able to give the patchset a try
with an OAuth deployment?

--Jacob

[1]: https://datatracker.ietf.org/doc/html/rfc8628#section-3.5
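
The polling behavior the RFC excerpt above mandates can be sketched as follows. This is a hypothetical illustration, not code from the patchset; request_token stands in for a real POST to the device-flow token endpoint:

```python
import time

def poll_for_token(request_token, interval=5, slow_down_step=5, max_attempts=20):
    """Poll a token endpoint per RFC 8628 section 3.5 (sketch only)."""
    for _ in range(max_attempts):
        resp = request_token()
        if "access_token" in resp:
            return resp["access_token"]
        error = resp.get("error")
        if error == "slow_down":
            interval += slow_down_step  # back off, then keep polling
        elif error != "authorization_pending":
            # Any other error code: the client MUST stop polling.
            raise RuntimeError(error or "malformed token response")
        time.sleep(interval)
    raise TimeoutError("user never completed authorization")
```

Only "authorization_pending" and "slow_down" keep the loop alive; anything else is treated as the terminal failure the spec requires.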

#11Zhihong Yu
zyu@yugabyte.com
In reply to: Jacob Champion (#10)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Aug 26, 2021 at 9:13 AM Jacob Champion <pchampion@vmware.com> wrote:

On Wed, 2021-08-25 at 15:25 -0700, Zhihong Yu wrote:

Hi,
For v2-0001-common-jsonapi-support-FRONTEND-clients.patch :

+ /* Clean up. */
+ termJsonLexContext(&lex);

At the end of termJsonLexContext(), empty is copied to lex. For stack
based JsonLexContext, the copy seems unnecessary.
Maybe introduce a boolean parameter for termJsonLexContext() to
signal that the copy can be omitted ?

Do you mean heap-based? i.e. destroyJsonLexContext() does an
unnecessary copy before free? Yeah, in that case it's not super useful,
but I think I'd want some evidence that the performance hit matters
before optimizing it.

Are there any other internal APIs that take a boolean parameter like
that? If not, I think we'd probably just want to remove the copy
entirely if it's a problem.

+#ifdef FRONTEND
+       /* make sure initialization succeeded */
+       if (lex->strval == NULL)
+           return JSON_OUT_OF_MEMORY;

Should PQExpBufferBroken(lex->strval) be used for the check ?

It should be okay to continue if the strval is broken but non-NULL,
since it's about to be reset. That has the fringe benefit of allowing
the function to go as far as possible without failing, though that's
probably a pretty weak justification.

In practice, do you think that the probability of success is low enough
that we should just short-circuit and be done with it?

On Wed, 2021-08-25 at 16:24 -0700, Zhihong Yu wrote:

For v2-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch :

+   i_init_session(&session);
+
+   if (!conn->oauth_client_id)
+   {
+       /* We can't talk to a server without a client identifier. */
+       appendPQExpBufferStr(&conn->errorMessage,
+                            libpq_gettext("no oauth_client_id is set for the connection"));
+       goto cleanup;

Can conn->oauth_client_id check be performed ahead
of i_init_session() ? That way, ```goto cleanup``` can be replaced
with return.

Yeah, I think that makes sense. FYI, this is probably one of the
functions that will be rewritten completely once iddawc is removed.

+       if (!error_code || (strcmp(error_code, "authorization_pending")
+                           && strcmp(error_code, "slow_down")))

What if, in the future, there is error code different from the above
two which doesn't represent "OAuth token retrieval failed" scenario ?

We'd have to update our code; that would be a breaking change to the
Device Authorization spec. Here's what it says today [1]:

The "authorization_pending" and "slow_down" error codes define
particularly unique behavior, as they indicate that the OAuth client
should continue to poll the token endpoint by repeating the token
request (implementing the precise behavior defined above). If the
client receives an error response with any other error code, it MUST
stop polling and SHOULD react accordingly, for example, by displaying
an error to the user.

For client_initial_response(),

+   token_buf = createPQExpBuffer();
+   if (!token_buf)
+       goto cleanup;

If token_buf is NULL, there doesn't seem to be anything to free. We
can return directly.

That's true today, but implementations have a habit of changing. I
personally prefer not to introduce too many exit points from a function
that's already using goto. In my experience, that makes future
maintenance harder.

Thanks for the reviews! Have you been able to give the patchset a try
with an OAuth deployment?

--Jacob

[1] https://datatracker.ietf.org/doc/html/rfc8628#section-3.5

Hi,
bq. destroyJsonLexContext() does an unnecessary copy before free? Yeah, in
that case it's not super useful,
but I think I'd want some evidence that the performance hit matters before
optimizing it.

Yes I agree.

bq. In practice, do you think that the probability of success is low enough
that we should just short-circuit and be done with it?

Haven't had a chance to try your patches out yet.
I will leave this to people who are more familiar with OAuth
implementation(s).

bq. I personally prefer not to introduce too many exit points from a
function that's already using goto.

Fair enough.

Cheers

#12Michael Paquier
michael@paquier.xyz
In reply to: Jacob Champion (#10)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Aug 26, 2021 at 04:13:08PM +0000, Jacob Champion wrote:

Do you mean heap-based? i.e. destroyJsonLexContext() does an
unnecessary copy before free? Yeah, in that case it's not super useful,
but I think I'd want some evidence that the performance hit matters
before optimizing it.

As an authentication code path, the impact is minimal and my take on
that would be to keep the code simple. Now if you'd really wish to
stress that without relying on the backend, one simple way is to use
pgbench -C -n with a mostly-empty script (one meta-command) coupled
with some profiling.
--
Michael

#13Jacob Champion
pchampion@vmware.com
In reply to: Michael Paquier (#12)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, 2021-08-27 at 11:32 +0900, Michael Paquier wrote:

Now if you'd really wish to
stress that without relying on the backend, one simple way is to use
pgbench -C -n with a mostly-empty script (one meta-command) coupled
with some profiling.

Ah, thanks! I'll add that to the toolbox.

--Jacob
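
For concreteness, Michael's suggestion might look something like this (a sketch; it assumes pgbench is on PATH and a server is reachable through the usual PG* environment variables):

```shell
# A script containing a single meta-command, so each "transaction"
# costs nothing beyond connection setup and teardown.
cat > ping.sql <<'EOF'
\set ping 0
EOF

# -C opens a fresh connection per transaction; -n skips vacuuming the
# pgbench tables. Guarded so the sketch degrades gracefully where
# pgbench (or a server) is unavailable.
if command -v pgbench >/dev/null 2>&1; then
    pgbench -C -n -f ping.sql -t 1000
else
    echo "pgbench not found; skipping run"
fi
```

Profiling that run (e.g. with perf) would then show whether the extra copy in destroyJsonLexContext() registers at all.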

#14Jacob Champion
pchampion@vmware.com
In reply to: Jacob Champion (#7)
9 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi all,

v3 rebases this patchset over the top of Samay's pluggable auth
provider API [1], included here as patches 0001-3. The final patch in
the set ports the server implementation from a core feature to a
contrib module; to switch between the two approaches, simply leave out
that final patch.

There are still some backend changes that must be made to get this
working, as pointed out in 0009, and obviously libpq support still
requires code changes.

--Jacob

[1]: /messages/by-id/CAJxrbyxTRn5P8J-p+wHLwFahK5y56PhK28VOb55jqMO05Y-DJw@mail.gmail.com

Attachments:

v3-0001-Add-support-for-custom-authentication-methods.patchtext/x-patch; name=v3-0001-Add-support-for-custom-authentication-methods.patchDownload
From 206060ed1b31fcec48fb6ee05d61b135ec98cecf Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Tue, 15 Feb 2022 22:23:29 -0800
Subject: [PATCH v3 1/9] Add support for custom authentication methods

Currently, PostgreSQL supports only a set of pre-defined authentication
methods. This patch adds support for 2 hooks which allow users to add
their custom authentication methods by defining a check function and an
error function. Users can then use these methods by using a new "custom"
keyword in pg_hba.conf and specifying the authentication provider they
want to use.
---
 src/backend/libpq/auth.c | 89 ++++++++++++++++++++++++++++++----------
 src/backend/libpq/hba.c  | 44 ++++++++++++++++++++
 src/include/libpq/auth.h | 27 ++++++++++++
 src/include/libpq/hba.h  |  4 ++
 4 files changed, 143 insertions(+), 21 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index efc53f3135..3533b0bc50 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -47,8 +47,6 @@
  *----------------------------------------------------------------
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
-static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -206,23 +204,6 @@ static int	pg_SSPI_make_upn(char *accountname,
 static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
-
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -235,6 +216,16 @@ static int	PerformRadiusTransaction(const char *server, const char *secret, cons
  */
 ClientAuthentication_hook_type ClientAuthentication_hook = NULL;
 
+/*
+ * These hooks allow plugins to get control of the client authentication check
+ * and error reporting logic. This allows users to write extensions to
+ * implement authentication using any protocol of their choice. To acquire these
+ * hooks, plugins need to call the RegisterAuthProvider() function.
+ */
+static CustomAuthenticationCheck_hook_type CustomAuthenticationCheck_hook = NULL;
+static CustomAuthenticationError_hook_type CustomAuthenticationError_hook = NULL;
+char *custom_provider_name = NULL;
+
 /*
  * Tell the user the authentication failed, but not (much about) why.
  *
@@ -311,6 +302,12 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaCustom:
+			if (CustomAuthenticationError_hook)
+				errstr = CustomAuthenticationError_hook(port);
+			else
+				errstr = gettext_noop("Custom authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -345,7 +342,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of the Port, so it is safe to pass a string that is managed by an
  * external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -630,6 +627,10 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaCustom:
+			if (CustomAuthenticationCheck_hook)
+				status = CustomAuthenticationCheck_hook(port);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
@@ -689,7 +690,7 @@ sendAuthRequest(Port *port, AuthRequest areq, const char *extradata, int extrale
  *
  * Returns NULL if couldn't get password, else palloc'd string.
  */
-static char *
+char *
 recv_password_packet(Port *port)
 {
 	StringInfoData buf;
@@ -3343,3 +3344,49 @@ PerformRadiusTransaction(const char *server, const char *secret, const char *por
 		}
 	}							/* while (true) */
 }
+
+/*----------------------------------------------------------------
+ * Custom authentication
+ *----------------------------------------------------------------
+ */
+
+/*
+ * RegisterAuthProvider registers a custom authentication provider to be
+ * used for authentication. Currently, we allow only one authentication
+ * provider to be registered for use at a time.
+ *
+ * This function should be called in _PG_init() by any extension looking to
+ * add a custom authentication method.
+ */
+void RegisterAuthProvider(const char *provider_name,
+		CustomAuthenticationCheck_hook_type AuthenticationCheckFunction,
+		CustomAuthenticationError_hook_type AuthenticationErrorFunction)
+{
+	if (provider_name == NULL)
+	{
+		ereport(ERROR,
+				(errmsg("cannot register authentication provider without name")));
+	}
+
+	if (AuthenticationCheckFunction == NULL)
+	{
+		ereport(ERROR,
+				(errmsg("cannot register authentication provider without a check function")));
+	}
+
+	if (custom_provider_name)
+	{
+		ereport(ERROR,
+				(errmsg("cannot register authentication provider \"%s\"", provider_name),
+				 errdetail("Only one authentication provider is allowed.  Provider \"%s\" is already registered.",
+							custom_provider_name)));
+	}
+
+	/*
+	 * Allocate in TopMemoryContext, since we need to read the provider name
+	 * whenever pg_hba.conf is parsed.
+	 */
+	custom_provider_name = MemoryContextStrdup(TopMemoryContext, provider_name);
+	CustomAuthenticationCheck_hook = AuthenticationCheckFunction;
+	CustomAuthenticationError_hook = AuthenticationErrorFunction;
+}
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index d84a40b726..ebae992964 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -134,6 +134,7 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
+	"custom",
 	"peer"
 };
 
@@ -1399,6 +1400,8 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "custom") == 0)
+		parsedline->auth_method = uaCustom;
 	else
 	{
 		ereport(elevel,
@@ -1691,6 +1694,14 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Ensure that the provider name is specified for the custom authentication method.
+	 */
+	if (parsedline->auth_method == uaCustom)
+	{
+		MANDATORY_AUTH_ARG(parsedline->custom_provider, "provider", "custom");
+	}
+
 	return parsedline;
 }
 
@@ -2102,6 +2113,32 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "provider") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaCustom, "provider", "custom");
+
+		/*
+		 * Verify that the provider mentioned is the same as the one loaded
+		 * via shared_preload_libraries.
+		 */
+
+		if (custom_provider_name == NULL || strcmp(val, custom_provider_name) != 0)
+		{
+			ereport(elevel,
+					(errcode(ERRCODE_CONFIG_FILE_ERROR),
+					 errmsg("cannot use authentication provider \"%s\"", val),
+					 errhint("Load the authentication provider via shared_preload_libraries."),
+					 errcontext("line %d of configuration file \"%s\"",
+							line_num, HbaFileName)));
+			*err_msg = psprintf("cannot use authentication provider \"%s\"", val);
+
+			return false;
+		}
+		else
+		{
+			hbaline->custom_provider = pstrdup(val);
+		}
+	}
 	else
 	{
 		ereport(elevel,
@@ -2442,6 +2479,13 @@ gethba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaCustom)
+	{
+		if (hba->custom_provider)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("provider=%s", hba->custom_provider));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 6d7ee1acb9..1d10cccc1b 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -23,9 +23,36 @@ extern char *pg_krb_realm;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
+extern char *recv_password_packet(Port *port);
 
 /* Hook for plugins to get control in ClientAuthentication() */
+typedef int (*CustomAuthenticationCheck_hook_type) (Port *);
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
 extern PGDLLIMPORT ClientAuthentication_hook_type ClientAuthentication_hook;
 
+/* Hook for plugins to report error messages in auth_failed() */
+typedef const char * (*CustomAuthenticationError_hook_type) (Port *);
+
+extern void RegisterAuthProvider
+		(const char *provider_name,
+		 CustomAuthenticationCheck_hook_type CustomAuthenticationCheck_hook,
+		 CustomAuthenticationError_hook_type CustomAuthenticationError_hook);
+
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxTokenSize Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 #endif							/* AUTH_H */
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8d9f3821b1..c5aef6994c 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -38,6 +38,7 @@ typedef enum UserAuth
 	uaLDAP,
 	uaCert,
 	uaRADIUS,
+	uaCustom,
 	uaPeer
 #define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
 } UserAuth;
@@ -120,6 +121,7 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *custom_provider;
 } HbaLine;
 
 typedef struct IdentLine
@@ -144,4 +146,6 @@ extern int	check_usermap(const char *usermap_name,
 						  bool case_sensitive);
 extern bool pg_isblank(const char c);
 
+extern char *custom_provider_name;
+
 #endif							/* HBA_H */
-- 
2.25.1

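As a quick reference for the hba syntax introduced in 0001: the `custom` method takes a mandatory `provider` option, which must match the name that the loaded extension passed to RegisterAuthProvider(). With the sample provider from 0002 (registered as "test"), the pg_hba.conf entry exercised by the TAP tests looks like this:

```
# TYPE  DATABASE  USER  METHOD  OPTIONS
local   all       all   custom  provider=test
```

If the named provider has not been loaded via shared_preload_libraries, reloading pg_hba.conf reports an error for that line.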
v3-0002-Add-sample-extension-to-test-custom-auth-provider.patchtext/x-patch; name=v3-0002-Add-sample-extension-to-test-custom-auth-provider.patchDownload
From 93b108334fa9fcc2d5f68d75fc320cd2714eb9da Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Tue, 15 Feb 2022 22:28:40 -0800
Subject: [PATCH v3 2/9] Add sample extension to test custom auth provider
 hooks

This change adds a new extension to src/test/modules to
test the custom authentication provider hooks. In this
extension, we use an array to define which users to
authenticate and what passwords to use.
---
 src/test/modules/test_auth_provider/Makefile  | 16 ++++
 .../test_auth_provider/test_auth_provider.c   | 90 +++++++++++++++++++
 2 files changed, 106 insertions(+)
 create mode 100644 src/test/modules/test_auth_provider/Makefile
 create mode 100644 src/test/modules/test_auth_provider/test_auth_provider.c

diff --git a/src/test/modules/test_auth_provider/Makefile b/src/test/modules/test_auth_provider/Makefile
new file mode 100644
index 0000000000..17971a5c7a
--- /dev/null
+++ b/src/test/modules/test_auth_provider/Makefile
@@ -0,0 +1,16 @@
+# src/test/modules/test_auth_provider/Makefile
+
+MODULE_big = test_auth_provider
+OBJS = test_auth_provider.o
+PGFILEDESC = "test_auth_provider - provider to test auth hooks"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_auth_provider
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_auth_provider/test_auth_provider.c b/src/test/modules/test_auth_provider/test_auth_provider.c
new file mode 100644
index 0000000000..477ef8b2c3
--- /dev/null
+++ b/src/test/modules/test_auth_provider/test_auth_provider.c
@@ -0,0 +1,90 @@
+/* -------------------------------------------------------------------------
+ *
+ * test_auth_provider.c
+ *			example authentication provider plugin
+ *
+ * Copyright (c) 2022, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *		src/test/modules/test_auth_provider/test_auth_provider.c
+ *
+ * -------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "fmgr.h"
+#include "libpq/auth.h"
+#include "libpq/libpq.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static char *get_password_for_user(char *user_name);
+
+/*
+ * List of user names and passwords to approve.  Passwords are validated
+ * against this hard-coded list rather than against anything stored in
+ * Postgres.  A real extension would instead fetch valid credentials or
+ * authentication tokens from an external authentication provider.
+ */
+static char credentials[2][3][50] = {
+	{"bob", "alice", "carol"},
+	{"bob123", "alice123", "carol123"}
+};
+
+static int TestAuthenticationCheck(Port *port)
+{
+	char *passwd;
+	int result = STATUS_ERROR;
+	char *real_pass;
+
+	sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
+
+	passwd = recv_password_packet(port);
+	if (passwd == NULL)
+		return STATUS_EOF;
+
+	real_pass = get_password_for_user(port->user_name);
+	if (real_pass)
+	{
+		if (strcmp(passwd, real_pass) == 0)
+		{
+			result = STATUS_OK;
+		}
+		pfree(real_pass);
+	}
+
+	pfree(passwd);
+
+	return result;
+}
+
+static char *
+get_password_for_user(char *user_name)
+{
+	char *password = NULL;
+	int i;
+	for (i = 0; i < 3; i++)
+	{
+		if (strcmp(user_name, credentials[0][i]) == 0)
+		{
+			password = pstrdup(credentials[1][i]);
+		}
+	}
+
+	return password;
+}
+
+static const char *TestAuthenticationError(Port *port)
+{
+	char *error_message = psprintf("Test authentication failed for user %s",
+								   port->user_name);
+	return error_message;
+}
+
+void
+_PG_init(void)
+{
+	RegisterAuthProvider("test", TestAuthenticationCheck, TestAuthenticationError);
+}
-- 
2.25.1

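Stripped of the server plumbing (sendAuthRequest(), recv_password_packet(), palloc), the sample module above is just a table lookup plus a string comparison. A standalone sketch of that logic in plain C, for readers who want to test it outside a server:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hard-coded credential table, mirroring the sample provider: row 0 holds
 * user names, row 1 the corresponding passwords.
 */
static const char credentials[2][3][50] = {
	{"bob", "alice", "carol"},
	{"bob123", "alice123", "carol123"},
};

/* Return the expected password for user_name, or NULL if unknown. */
static const char *
get_password_for_user(const char *user_name)
{
	for (int i = 0; i < 3; i++)
	{
		if (strcmp(user_name, credentials[0][i]) == 0)
			return credentials[1][i];
	}
	return NULL;
}

/* Mimics TestAuthenticationCheck's decision: nonzero on success, 0 on failure. */
static int
check_password(const char *user_name, const char *password)
{
	const char *expected = get_password_for_user(user_name);

	return expected != NULL && strcmp(password, expected) == 0;
}
```

This matches the TAP tests' expectations: bob/bob123 and alice/alice123 succeed, bad passwords and unknown users (like `test`) fail.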
v3-0003-Add-tests-for-test_auth_provider-extension.patchtext/x-patch; name=v3-0003-Add-tests-for-test_auth_provider-extension.patchDownload
From 3aa2535fa42b142beabdcef234d1939738f36b9a Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Wed, 16 Feb 2022 12:28:36 -0800
Subject: [PATCH v3 3/9] Add tests for test_auth_provider extension

Add TAP tests for the test_auth_provider extension, and allow "make check"
in src/test/modules to run them.
---
 src/test/modules/Makefile                     |   1 +
 src/test/modules/test_auth_provider/Makefile  |   2 +
 .../test_auth_provider/t/001_custom_auth.pl   | 125 ++++++++++++++++++
 3 files changed, 128 insertions(+)
 create mode 100644 src/test/modules/test_auth_provider/t/001_custom_auth.pl

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index dffc79b2d9..f56533ea13 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -14,6 +14,7 @@ SUBDIRS = \
 		  plsample \
 		  snapshot_too_old \
 		  spgist_name_ops \
+		  test_auth_provider \
 		  test_bloomfilter \
 		  test_ddl_deparse \
 		  test_extensions \
diff --git a/src/test/modules/test_auth_provider/Makefile b/src/test/modules/test_auth_provider/Makefile
index 17971a5c7a..7d601cf7d5 100644
--- a/src/test/modules/test_auth_provider/Makefile
+++ b/src/test/modules/test_auth_provider/Makefile
@@ -4,6 +4,8 @@ MODULE_big = test_auth_provider
 OBJS = test_auth_provider.o
 PGFILEDESC = "test_auth_provider - provider to test auth hooks"
 
+TAP_TESTS = 1
+
 ifdef USE_PGXS
 PG_CONFIG = pg_config
 PGXS := $(shell $(PG_CONFIG) --pgxs)
diff --git a/src/test/modules/test_auth_provider/t/001_custom_auth.pl b/src/test/modules/test_auth_provider/t/001_custom_auth.pl
new file mode 100644
index 0000000000..3b7472dc7f
--- /dev/null
+++ b/src/test/modules/test_auth_provider/t/001_custom_auth.pl
@@ -0,0 +1,125 @@
+
+# Copyright (c) 2021-2022, PostgreSQL Global Development Group
+
+# Set of tests for testing custom authentication hooks.
+
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Delete pg_hba.conf from the given node, add a new entry to it
+# and then execute a reload to refresh it.
+sub reset_pg_hba
+{
+	my $node       = shift;
+	my $hba_method = shift;
+
+	unlink($node->data_dir . '/pg_hba.conf');
+	# just for testing purposes, use a continuation line
+	$node->append_conf('pg_hba.conf', "local all all\\\n $hba_method");
+	$node->reload;
+	return;
+}
+
+# Test if you get expected results in pg_hba_file_rules error column after
+# changing pg_hba.conf and reloading it.
+sub test_hba_reload
+{
+	my ($node, $method, $expected_res) = @_;
+	my $status_string = 'failed';
+	$status_string = 'success' if ($expected_res eq 0);
+	my $testname = "pg_hba.conf reload $status_string for method $method";
+
+	reset_pg_hba($node, $method);
+
+	my ($cmdret, $stdout, $stderr) = $node->psql("postgres",
+		"select count(*) from pg_hba_file_rules where error is not null", extra_params => ['-U', 'bob']);
+
+	is($stdout, $expected_res, $testname);
+}
+
+# Test access for a single role, useful to wrap all tests into one.  Extra
+# named parameters are passed to connect_ok/fails as-is.
+sub test_role
+{
+	local $Test::Builder::Level = $Test::Builder::Level + 1;
+
+	my ($node, $role, $method, $expected_res, %params) = @_;
+	my $status_string = 'failed';
+	$status_string = 'success' if ($expected_res eq 0);
+
+	my $connstr = "user=$role";
+	my $testname =
+	  "authentication $status_string for method $method, role $role";
+
+	if ($expected_res eq 0)
+	{
+		$node->connect_ok($connstr, $testname, %params);
+	}
+	else
+	{
+		# No checks of the error message, only the status code.
+		$node->connect_fails($connstr, $testname, %params);
+	}
+}
+
+# Initialize server node
+my $node = PostgreSQL::Test::Cluster->new('server');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'test_auth_provider.so'\n");
+$node->start;
+
+$node->safe_psql('postgres', "CREATE ROLE bob SUPERUSER LOGIN;");
+$node->safe_psql('postgres', "CREATE ROLE alice LOGIN;");
+$node->safe_psql('postgres', "CREATE ROLE test LOGIN;");
+
+# Add custom auth method to pg_hba.conf
+reset_pg_hba($node, 'custom provider=test');
+
+# Test that users are able to login with correct passwords.
+$ENV{"PGPASSWORD"} = 'bob123';
+test_role($node, 'bob', 'custom', 0, log_like => [qr/connection authorized: user=bob/]);
+$ENV{"PGPASSWORD"} = 'alice123';
+test_role($node, 'alice', 'custom', 0, log_like => [qr/connection authorized: user=alice/]);
+
+# Test that bad passwords are rejected.
+$ENV{"PGPASSWORD"} = 'badpassword';
+test_role($node, 'bob', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+test_role($node, 'alice', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+
+# Test that users not in authentication list are rejected.
+test_role($node, 'test', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+
+$ENV{"PGPASSWORD"} = 'bob123';
+
+# Tests for invalid auth options
+
+# Test that an incorrect provider name is not accepted.
+test_hba_reload($node, 'custom provider=wrong', 1);
+
+# Test that specifying provider option with different auth method is not allowed.
+test_hba_reload($node, 'trust provider=test', 1);
+
+# Test that provider name is a mandatory option for custom auth.
+test_hba_reload($node, 'custom', 1);
+
+# Test that correct provider name allows reload to succeed.
+test_hba_reload($node, 'custom provider=test', 0);
+
+# Custom auth modules require mentioning extension in shared_preload_libraries.
+
+# Remove extension from shared_preload_libraries and try to restart.
+$node->adjust_conf('postgresql.conf', 'shared_preload_libraries', "''");
+command_fails(['pg_ctl', '-w', '-D', $node->data_dir, '-l', $node->logfile, 'restart'], 'restart with empty shared_preload_libraries fails');
+
+# Fix shared_preload_libraries and confirm that you can now restart.
+$node->adjust_conf('postgresql.conf', 'shared_preload_libraries', "'test_auth_provider.so'");
+command_ok(['pg_ctl', '-w', '-D', $node->data_dir, '-l', $node->logfile, 'start'], 'start with correct shared_preload_libraries succeeds');
+
+# Test that we can connect again
+test_role($node, 'bob', 'custom', 0, log_like => [qr/connection authorized: user=bob/]);
+
+done_testing();
-- 
2.25.1

v3-0004-common-jsonapi-support-FRONTEND-clients.patchtext/x-patch; name=v3-0004-common-jsonapi-support-FRONTEND-clients.patchDownload
From 13bee49d8c674e921804b4e6edc363dcf33211ce Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v3 4/9] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

For convenience, the backend now has destroyJsonLexContext() to mirror
other create/destroy APIs. The frontend has init/term versions of the
API to handle stack-allocated JsonLexContexts.

We can now partially revert b44669b2ca, now that json_errdetail() works
correctly.
---
 src/backend/utils/adt/jsonfuncs.c             |   4 +-
 src/bin/pg_verifybackup/parse_manifest.c      |  13 +-
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 290 +++++++++++++-----
 src/include/common/jsonapi.h                  |  47 ++-
 6 files changed, 270 insertions(+), 88 deletions(-)

diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 2457061f97..f58233cda9 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -723,9 +723,7 @@ json_object_keys(PG_FUNCTION_ARGS)
 		pg_parse_json_or_ereport(lex, sem);
 		/* keys are now in state->result */
 
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-		pfree(lex);
+		destroyJsonLexContext(lex);
 		pfree(sem);
 
 		MemoryContextSwitchTo(oldcontext);
diff --git a/src/bin/pg_verifybackup/parse_manifest.c b/src/bin/pg_verifybackup/parse_manifest.c
index 6364b01282..4b38fd3963 100644
--- a/src/bin/pg_verifybackup/parse_manifest.c
+++ b/src/bin/pg_verifybackup/parse_manifest.c
@@ -119,7 +119,7 @@ void
 json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 					size_t size)
 {
-	JsonLexContext *lex;
+	JsonLexContext lex = {0};
 	JsonParseErrorType json_error;
 	JsonSemAction sem;
 	JsonManifestParseState parse;
@@ -129,8 +129,8 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	parse.state = JM_EXPECT_TOPLEVEL_START;
 	parse.saw_version_field = false;
 
-	/* Create a JSON lexing context. */
-	lex = makeJsonLexContextCstringLen(buffer, size, PG_UTF8, true);
+	/* Initialize a JSON lexing context. */
+	initJsonLexContextCstringLen(&lex, buffer, size, PG_UTF8, true);
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
@@ -145,14 +145,17 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	sem.scalar = json_manifest_scalar;
 
 	/* Run the actual JSON parser. */
-	json_error = pg_parse_json(lex, &sem);
+	json_error = pg_parse_json(&lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, &lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
 	/* Verify the manifest checksum. */
 	verify_manifest_checksum(&parse, buffer, size);
+
+	/* Clean up. */
+	termJsonLexContext(&lex);
 }
 
 /*
diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index 118beb53d7..f2692972fe 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -16,7 +16,7 @@ my $tempdir = PostgreSQL::Test::Utils::tempdir;
 
 test_bad_manifest(
 	'input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/,
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
 	<<EOM);
 {
 EOM
diff --git a/src/common/Makefile b/src/common/Makefile
index 31c0dd366d..8e8b27546e 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 # If you add objects here, see also src/tools/msvc/Mkvcbuild.pm
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 6666077a93..7fc5eaf460 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -20,10 +20,39 @@
 #include "common/jsonapi.h"
 #include "mb/pg_wchar.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In the backend, use palloc/pfree along with StringInfo.  In the frontend,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+
+#else /* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -132,10 +161,12 @@ IsValidJsonNumber(const char *str, int len)
 	return (!numeric_error) && (total_len == dummy_lex.input_length);
 }
 
+#ifndef FRONTEND
+
 /*
  * makeJsonLexContextCstringLen
  *
- * lex constructor, with or without StringInfo object for de-escaped lexemes.
+ * lex constructor, with or without a string object for de-escaped lexemes.
  *
  * Without is better as it makes the processing faster, so only make one
  * if really required.
@@ -145,13 +176,66 @@ makeJsonLexContextCstringLen(char *json, int len, int encoding, bool need_escape
 {
 	JsonLexContext *lex = palloc0(sizeof(JsonLexContext));
 
+	initJsonLexContextCstringLen(lex, json, len, encoding, need_escapes);
+
+	return lex;
+}
+
+void
+destroyJsonLexContext(JsonLexContext *lex)
+{
+	termJsonLexContext(lex);
+	pfree(lex);
+}
+
+#endif /* !FRONTEND */
+
+void
+initJsonLexContextCstringLen(JsonLexContext *lex, char *json, int len, int encoding, bool need_escapes)
+{
 	lex->input = lex->token_terminator = lex->line_start = json;
 	lex->line_number = 1;
 	lex->input_length = len;
 	lex->input_encoding = encoding;
-	if (need_escapes)
-		lex->strval = makeStringInfo();
-	return lex;
+	lex->parse_strval = need_escapes;
+	if (lex->parse_strval)
+	{
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to time
+		 * of use (json_lex_string()) since there's no way to signal failure
+		 * here, and we might not need to parse any strings anyway.
+		 */
+		lex->strval = createStrVal();
+	}
+	lex->errormsg = NULL;
+}
+
+void
+termJsonLexContext(JsonLexContext *lex)
+{
+	static const JsonLexContext empty = {0};
+
+	if (lex->strval)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->strval);
+#else
+		pfree(lex->strval->data);
+		pfree(lex->strval);
+#endif
+	}
+
+	if (lex->errormsg)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->errormsg);
+#else
+		pfree(lex->errormsg->data);
+		pfree(lex->errormsg);
+#endif
+	}
+
+	*lex = empty;
 }
 
 /*
@@ -217,7 +301,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;		/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -279,14 +363,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -320,8 +411,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -368,6 +463,10 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -676,8 +775,15 @@ json_lex_string(JsonLexContext *lex)
 	int			len;
 	int			hi_surrogate = -1;
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -737,7 +843,7 @@ json_lex_string(JsonLexContext *lex)
 						return JSON_UNICODE_ESCAPE_FORMAT;
 					}
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -797,19 +903,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						return JSON_UNICODE_HIGH_ESCAPE;
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					return JSON_UNICODE_LOW_SURROGATE;
@@ -819,22 +925,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 						/* Not a valid string escape, so signal error. */
@@ -858,12 +964,12 @@ json_lex_string(JsonLexContext *lex)
 			}
 
 		}
-		else if (lex->strval != NULL)
+		else if (lex->parse_strval)
 		{
 			if (hi_surrogate != -1)
 				return JSON_UNICODE_LOW_SURROGATE;
 
-			appendStringInfoChar(lex->strval, *s);
+			appendStrValChar(lex->strval, *s);
 		}
 
 	}
@@ -871,6 +977,11 @@ json_lex_string(JsonLexContext *lex)
 	if (hi_surrogate != -1)
 		return JSON_UNICODE_LOW_SURROGATE;
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1043,72 +1154,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct a detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safery pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int		toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1122,12 +1254,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			return _("Unicode low surrogate must follow a high surrogate.");
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover the
+		 * possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 52cb4a9339..d7cafc84fe 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -55,6 +54,17 @@ typedef enum
 	JSON_UNICODE_LOW_SURROGATE
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -81,7 +91,9 @@ typedef struct JsonLexContext
 	int			lex_level;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef void (*json_struct_action) (void *state);
@@ -141,9 +153,10 @@ extern JsonSemAction nullSemAction;
  */
 extern JsonParseErrorType json_count_array_elements(JsonLexContext *lex,
 													int *elements);
+#ifndef FRONTEND
 
 /*
- * constructor for JsonLexContext, with or without strval element.
+ * allocating constructor for JsonLexContext, with or without strval element.
  * If supplied, the strval element will contain a de-escaped version of
  * the lexeme. However, doing this imposes a performance penalty, so
  * it should be avoided if the de-escaped lexeme is not required.
@@ -153,6 +166,32 @@ extern JsonLexContext *makeJsonLexContextCstringLen(char *json,
 													int encoding,
 													bool need_escapes);
 
+/*
+ * Counterpart to makeJsonLexContextCstringLen(): clears and deallocates lex.
+ * The context pointer should not be used after this call.
+ */
+extern void destroyJsonLexContext(JsonLexContext *lex);
+
+#endif /* !FRONTEND */
+
+/*
+ * stack constructor for JsonLexContext, with or without strval element.
+ * If supplied, the strval element will contain a de-escaped version of
+ * the lexeme. However, doing this imposes a performance penalty, so
+ * it should be avoided if the de-escaped lexeme is not required.
+ */
+extern void initJsonLexContextCstringLen(JsonLexContext *lex,
+										 char *json,
+										 int len,
+										 int encoding,
+										 bool need_escapes);
+
+/*
+ * Counterpart to initJsonLexContextCstringLen(): clears the contents of lex,
+ * but does not deallocate lex itself.
+ */
+extern void termJsonLexContext(JsonLexContext *lex);
+
 /* lex one token */
 extern JsonParseErrorType json_lex(JsonLexContext *lex);
 
-- 
2.25.1

Attachment: v3-0005-libpq-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From 7f6b02652cd771a93ce4269607d498f4ac574e7f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v3 5/9] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented.

The client implementation requires libiddawc and its development
headers. Configure --with-oauth (and --with-includes/--with-libraries to
point at the iddawc installation, if it's in a custom location).

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- ...and more.
---
 configure                            | 100 ++++
 configure.ac                         |  19 +
 src/Makefile.global.in               |   1 +
 src/include/common/oauth-common.h    |  19 +
 src/include/pg_config.h.in           |   6 +
 src/interfaces/libpq/Makefile        |   7 +-
 src/interfaces/libpq/fe-auth-oauth.c | 744 +++++++++++++++++++++++++++
 src/interfaces/libpq/fe-auth-sasl.h  |   5 +-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       |  42 +-
 src/interfaces/libpq/fe-auth.h       |   3 +
 src/interfaces/libpq/fe-connect.c    |  38 ++
 src/interfaces/libpq/libpq-int.h     |   8 +
 13 files changed, 979 insertions(+), 19 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c

diff --git a/configure b/configure
index f3cb5c2b51..cd0c50a951 100755
--- a/configure
+++ b/configure
@@ -718,6 +718,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -861,6 +862,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1570,6 +1572,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth            build with OAuth 2.0 support
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8377,6 +8380,42 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-oauth option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_oauth=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13500,6 +13539,56 @@ fi
 
 
 
+if test "$with_oauth" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-liddawc  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char i_init_session ();
+int
+main ()
+{
+return i_init_session ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_iddawc_i_init_session=yes
+else
+  ac_cv_lib_iddawc_i_init_session=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBIDDAWC 1
+_ACEOF
+
+  LIBS="-liddawc $LIBS"
+
+else
+  as_fn_error $? "library 'iddawc' is required for OAuth support" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14513,6 +14602,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" != no; then
+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
+if test "x$ac_cv_header_iddawc_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 19d1a80367..922608065f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -887,6 +887,17 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_BOOL(with, oauth, no,
+              [build with OAuth 2.0 support],
+              [AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])])
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1385,6 +1396,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = yes ; then
+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for OAuth support])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1603,6 +1618,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" != no; then
+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbdc1c4bda..c9c61a9c99 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..3fa95ac7e8
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif /* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 635fbb2181..1b3332601e 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -319,6 +319,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `iddawc' library (-liddawc). */
+#undef HAVE_LIBIDDAWC
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -922,6 +925,9 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 3c53393fa4..727305c578 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -62,6 +62,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_oauth),yes)
+OBJS += \
+	fe-auth-oauth.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -83,7 +88,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..383c9d4bdb
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,744 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include <iddawc.h>
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static void oauth_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
+						   char **output, int *outputlen,
+						   bool *done, bool *success);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+} fe_oauth_state;
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
+
+	state = malloc(sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+static const char *
+iddawc_error_string(int errcode)
+{
+	switch (errcode)
+	{
+		case I_OK:
+			return "I_OK";
+
+		case I_ERROR:
+			return "I_ERROR";
+
+		case I_ERROR_PARAM:
+			return "I_ERROR_PARAM";
+
+		case I_ERROR_MEMORY:
+			return "I_ERROR_MEMORY";
+
+		case I_ERROR_UNAUTHORIZED:
+			return "I_ERROR_UNAUTHORIZED";
+
+		case I_ERROR_SERVER:
+			return "I_ERROR_SERVER";
+	}
+
+	return "<unknown>";
+}
+
+static void
+iddawc_error(PGconn *conn, int errcode, const char *msg)
+{
+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
+	appendPQExpBuffer(&conn->errorMessage,
+					  libpq_gettext(" (iddawc error %s)\n"),
+					  iddawc_error_string(errcode));
+}
+
+static void
+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
+{
+	const char *error_code;
+	const char *desc;
+
+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
+
+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
+	if (!error_code)
+	{
+		/*
+		 * The server didn't give us any useful information, so just print the
+		 * error code.
+		 */
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("(iddawc error %s)\n"),
+						  iddawc_error_string(err));
+		return;
+	}
+
+	/* If the server gave a string description, print that too. */
+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
+	if (desc)
+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
+
+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
+}
+
+static char *
+get_auth_token(PGconn *conn)
+{
+	PQExpBuffer	token_buf = NULL;
+	struct _i_session session;
+	int			err;
+	int			auth_method;
+	bool		user_prompted = false;
+	const char *verification_uri;
+	const char *user_code;
+	const char *access_token;
+	const char *token_type;
+	char	   *token = NULL;
+
+	if (!conn->oauth_discovery_uri)
+		return strdup(""); /* ask the server for one */
+
+	if (!conn->oauth_client_id)
+	{
+		/* We can't talk to a server without a client identifier. */
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("no oauth_client_id is set for the connection"));
+		return NULL;
+	}
+
+	i_init_session(&session);
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, conn->oauth_discovery_uri);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
+		goto cleanup;
+	}
+
+	err = i_get_openid_config(&session);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer has no token endpoint"));
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer does not support device authorization"));
+		goto cleanup;
+	}
+
+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set device code response type");
+		goto cleanup;
+	}
+
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+
+	err = i_set_parameter_list(&session,
+		I_OPT_CLIENT_ID, conn->oauth_client_id,
+		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+		I_OPT_TOKEN_METHOD, auth_method,
+		I_OPT_SCOPE, conn->oauth_scope,
+		I_OPT_NONE
+	);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set client identifier");
+		goto cleanup;
+	}
+
+	err = i_run_device_auth_request(&session);
+	if (err)
+	{
+		iddawc_request_error(conn, &session, err,
+							"failed to obtain device authorization");
+		goto cleanup;
+	}
+
+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
+	if (!verification_uri)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a verification URI"));
+		goto cleanup;
+	}
+
+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
+	if (!user_code)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a user code"));
+		goto cleanup;
+	}
+
+	/*
+	 * Poll the token endpoint until either the user logs in and authorizes the
+	 * use of a token, or a hard failure occurs. We perform one ping _before_
+	 * prompting the user, so that we don't make them do the work of logging in
+	 * only to find that the token endpoint is completely unreachable.
+	 */
+	err = i_run_token_request(&session);
+	while (err)
+	{
+		const char *error_code;
+		uint		interval;
+
+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
+
+		/*
+		 * authorization_pending and slow_down are the only acceptable errors;
+		 * anything else and we bail.
+		 */
+		if (!error_code || (strcmp(error_code, "authorization_pending")
+							&& strcmp(error_code, "slow_down")))
+		{
+			iddawc_request_error(conn, &session, err,
+								"OAuth token retrieval failed");
+			goto cleanup;
+		}
+
+		if (!user_prompted)
+		{
+			/*
+			 * Now that we know the token endpoint isn't broken, give the user
+			 * the login instructions.
+			 */
+			pqInternalNotice(&conn->noticeHooks,
+							 "Visit %s and enter the code: %s",
+							 verification_uri, user_code);
+
+			user_prompted = true;
+		}
+
+		/*
+		 * We are required to wait between polls; the server tells us how long.
+		 * TODO: if interval's not set, we need to default to five seconds
+		 * TODO: sanity check the interval
+		 */
+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
+
+		/*
+		 * A slow_down error requires us to permanently increase our retry
+		 * interval by five seconds. RFC 8628, Sec. 3.5.
+		 */
+		if (!strcmp(error_code, "slow_down"))
+		{
+			interval += 5;
+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
+		}
+
+		sleep(interval);
+
+		/*
+		 * XXX Reset the error code before every call, because iddawc won't do
+		 * that for us. This matters if the server first sends a "pending" error
+		 * code, then later hard-fails without sending an error code to
+		 * overwrite the first one.
+		 *
+		 * That we have to do this at all seems like a bug in iddawc.
+		 */
+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
+
+		err = i_run_token_request(&session);
+	}
+
+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+
+	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a bearer token"));
+		goto cleanup;
+	}
+
+	appendPQExpBufferStr(token_buf, "Bearer ");
+	appendPQExpBufferStr(token_buf, access_token);
+
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	token = strdup(token_buf->data);
+
+cleanup:
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+	i_clean_session(&session);
+
+	return token;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn)
+{
+	static const char * const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBuffer	token_buf;
+	PQExpBuffer	discovery_buf = NULL;
+	char	   *token = NULL;
+	char	   *response = NULL;
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	if (!conn->oauth_discovery_uri && conn->oauth_issuer)
+	{
+		discovery_buf = createPQExpBuffer();
+		if (!discovery_buf)
+			goto cleanup;
+
+		appendPQExpBufferStr(discovery_buf, conn->oauth_issuer);
+		appendPQExpBufferStr(discovery_buf, "/.well-known/openid-configuration");
+
+		if (PQExpBufferBroken(discovery_buf))
+			goto cleanup;
+
+		conn->oauth_discovery_uri = strdup(discovery_buf->data);
+	}
+
+	token = get_auth_token(conn);
+	if (!token)
+		goto cleanup;
+
+	appendPQExpBuffer(token_buf, resp_format, token);
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	response = strdup(token_buf->data);
+
+cleanup:
+	if (token)
+		free(token);
+	if (discovery_buf)
+		destroyPQExpBuffer(discovery_buf);
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char		   *errmsg; /* any non-NULL value stops all processing */
+	PQExpBufferData errbuf; /* backing memory for errmsg */
+	int				nested; /* nesting level (zero is the top) */
+
+	const char	   *target_field_name; /* points to a static allocation */
+	char		  **target_field;      /* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char		   *status;
+	char		   *scope;
+	char		   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static void
+oauth_json_object_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+}
+
+static void
+oauth_json_object_end(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	--ctx->nested;
+}
+
+static void
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+	{
+		/* short-circuit */
+		free(name);
+		return;
+	}
+
+	if (ctx->nested == 1)
+	{
+		if (!strcmp(name, ERROR_STATUS_FIELD))
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (!strcmp(name, ERROR_SCOPE_FIELD))
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+}
+
+static void
+oauth_json_array_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+}
+
+static void
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+	{
+		/* short-circuit */
+		free(token);
+		return;
+	}
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return; /* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext		lex = {0};
+	JsonSemAction		sem = {0};
+	JsonParseErrorType	err;
+	struct json_ctx		ctx = {0};
+	char			   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL"));
+		return false;
+	}
+
+	initJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		errmsg = json_errdetail(err, &lex);
+	}
+	else if (PQExpBufferDataBroken(ctx.errbuf))
+	{
+		errmsg = libpq_gettext("out of memory");
+	}
+	else if (ctx.errmsg)
+	{
+		errmsg = ctx.errmsg;
+	}
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	termJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (!strcmp(ctx.status, "invalid_token"))
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen,
+			   bool *done, bool *success)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*done = false;
+	*success = false;
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn);
+			if (!*output)
+				goto error;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			break;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				*done = true;
+				*success = true;
+
+				break;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				goto error;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+				goto error;
+
+			*outputlen = strlen(*output); /* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			break;
+
+		case FE_OAUTH_SERVER_ERROR:
+			/*
+			 * After an error, the server should send an error response to fail
+			 * the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge which
+			 * isn't defined in the RFC, or completed the handshake successfully
+			 * after telling us it was going to fail. Neither is acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			goto error;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			goto error;
+	}
+
+	return;
+
+error:
+	*done = true;
+	*success = false;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index da3c30b87b..b1bb382f70 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -65,6 +65,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -92,7 +94,8 @@ typedef struct pg_fe_sasl_mech
 	 *			   Ignored if *done is false.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
+	void		(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen,
 							 bool *done, bool *success);
 
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index e616200704..681b76adbe 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
+static void scram_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
 						   char **output, int *outputlen,
 						   bool *done, bool *success);
 static bool scram_channel_bound(void *opaq);
@@ -206,7 +207,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static void
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen,
 			   bool *done, bool *success)
 {
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 6fceff561b..2567a34023 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -38,6 +38,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -422,7 +423,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -444,8 +445,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support.  Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -485,6 +485,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,7 +523,17 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				!selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -547,18 +558,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
@@ -576,7 +588,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, false,
 						 NULL, -1,
 						 &initialresponse, &initialresponselen,
 						 &done, &success);
@@ -657,7 +669,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, final,
 						 challenge, payloadlen,
 						 &output, &outputlen,
 						 &done, &success);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 049a8bb1a1..2a56774019 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -28,4 +28,7 @@ extern const pg_fe_sasl_mech pg_scram_mech;
 extern char *pg_fe_scram_build_secret(const char *password,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 1c5a2b43e9..5f78439586 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -344,6 +344,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Target-Session-Attrs", "", 15, /* sizeof("prefer-standby") = 15 */
 	offsetof(struct pg_conn, target_session_attrs)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -606,6 +623,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -3381,6 +3399,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -4161,6 +4189,16 @@ freePGconn(PGconn *conn)
 		free(conn->rowBuf);
 	if (conn->target_session_attrs)
 		free(conn->target_session_attrs);
+	if (conn->oauth_issuer)
+		free(conn->oauth_issuer);
+	if (conn->oauth_discovery_uri)
+		free(conn->oauth_discovery_uri);
+	if (conn->oauth_client_id)
+		free(conn->oauth_client_id);
+	if (conn->oauth_client_secret)
+		free(conn->oauth_client_secret);
+	if (conn->oauth_scope)
+		free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0cee4b142..0dff13505a 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,14 @@ struct pg_conn
 	char	   *ssl_max_protocol_version;	/* maximum TLS protocol version */
 	char	   *target_session_attrs;	/* desired session properties */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;			/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery document */
+	char	   *oauth_client_id;		/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;			/* access token scope */
+	bool		oauth_want_retry;		/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
-- 
2.25.1

Attachment: v3-0006-backend-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From 43ab0310ce8ee26a469167a2e4eae4c4bc295518 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v3 6/9] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On success, the command may then exit with a zero status. By
      default, the server will then check that the identity string
      matches the role being used (or matches a usermap entry, if one
      is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
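
The validator contract above can be sketched as a short standalone program.
This is a hypothetical example, not part of the patch: the command path,
argument order, and fixed identity string are assumptions, and the actual
token check is stubbed out. A real validator must verify the token
cryptographically or against the issuer (e.g. via token introspection).

```python
#!/usr/bin/env python3
"""Hypothetical oauth_validator_command sketch (illustrative only)."""

import os
import sys


def validate(token):
    """Return an identity string for a valid token, or None.

    Stub: accepts any nonempty token. A real implementation would
    verify the token's signature and claims, or call the issuer's
    introspection endpoint, and derive the identity from a trusted
    claim.
    """
    if not token:
        return None
    return "alice@example.org"  # hypothetical identity


def main(argv):
    # Invoked by the server as, e.g.:
    #   oauth_validator_command = '/usr/local/bin/validate-token %f %r'
    # where %f is the token file descriptor and %r the requested role.
    fd = int(argv[1])
    role = argv[2]

    # Step 1: read the bearer token from the inherited descriptor FIRST,
    # before writing anything to stdout, to avoid deadlocking the server.
    with os.fdopen(fd) as token_pipe:
        token = token_pipe.read()

    # Steps 2/3: validate the token and authenticate the user.
    identity = validate(token)
    if identity is None:
        # Step 4: stderr is copied verbatim into the server logs.
        print("token rejected for role %s" % role, file=sys.stderr)
        return 1  # non-zero exit: authentication/validation failure

    # Step 3a: print the authenticated identity, newline-terminated.
    print(identity)
    return 0


# Guarded so the sketch can be imported or tested without arguments.
if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(main(sys.argv))
```

Note that, per step 1, the token is consumed in full before anything is
written to stdout.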

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
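
For illustration, a hypothetical configuration combining these options
might look like the following (the issuer URL, scope, map name, and
command path are placeholders, not defaults):

```
# pg_hba.conf: require OAuth for TCP connections, with a user map
host  all  all  0.0.0.0/0  oauth  issuer="https://accounts.example.com" scope="openid email" map=oauthmap

# postgresql.conf: external validator; %f is the token fd, %r the role
oauth_validator_command = '/usr/local/bin/validate-token %f %r'
```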

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.
---
 src/backend/libpq/Makefile     |   1 +
 src/backend/libpq/auth-oauth.c | 797 +++++++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c  |  10 +-
 src/backend/libpq/auth-scram.c |   4 +-
 src/backend/libpq/auth.c       |   7 +
 src/backend/libpq/hba.c        |  29 +-
 src/backend/utils/misc/guc.c   |  12 +
 src/include/libpq/hba.h        |   8 +-
 src/include/libpq/oauth.h      |  24 +
 src/include/libpq/sasl.h       |  11 +
 10 files changed, 889 insertions(+), 14 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..c1232a31a0
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,797 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char *oauth_validator_command;
+
+static void  oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int   oauth_exchange(void *opaq, const char *input, int inputlen,
+							char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state	state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool unset_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char   *p;
+	char	cbind_flag;
+	char   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a 'y'
+	 * specifier purely for the remote chance that a future specification could
+	 * define one; then future clients can still interoperate with this server
+	 * implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y': /* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character %s.",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char   *pos = *input;
+	char   *auth = NULL;
+
+	/*
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char   *end;
+		char   *sep;
+		char   *key;
+		char   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per
+			 * Sec. 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL; /* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData	buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's not
+	 * really a way to hide this from the user, either, because we can't choose
+	 * a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+		"{ "
+			"\"status\": \"invalid_token\", "
+			"\"openid-configuration\": \"%s/.well-known/openid-configuration\","
+			"\"scope\": \"%s\" "
+		"}",
+		ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but it's
+	 * pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information about
+	 * the sensitive Bearer token back to the client; log at COMMERROR instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end with
+	 * any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the problematic
+		 * character(s), but that'd be a bit like printing a piece of someone's
+		 * password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator says
+		 * the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!port->authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name, port->authn_id,
+						false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = { 0 };
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*
+	 * Since popen() is unidirectional, open up a pipe for the other direction.
+	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
+	 * into child processes, which would prevent us from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we introduce the potential for process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe2(pipefd, O_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	/* Allow the read end of the pipe to be passed to the child. */
+	if (!unset_cloexec(rfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+					/*
+					 * TODO: decide how this string should be escaped. The role
+					 * is controlled by the client, so if we don't escape it,
+					 * command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some other
+					 * way. For this proof of concept, just be incredibly strict
+					 * about the characters that are allowed in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "re");
+	if (fh == NULL)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not execute command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+unset_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not unset FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-_./:";
+	size_t	span;
+
+	Assert(username && username[0]); /* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index a1d7dbb6d5..0f461a6696 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index ee7f52218a..4049ace470 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -118,7 +118,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 3533b0bc50..5c30904e2b 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -30,6 +30,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -302,6 +303,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		case uaCustom:
 			if (CustomAuthenticationError_hook)
 				errstr = CustomAuthenticationError_hook(port);
@@ -627,6 +631,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 		case uaCustom:
 			if (CustomAuthenticationCheck_hook)
 				status = CustomAuthenticationCheck_hook(port);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index ebae992964..f7f3059927 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -135,7 +135,8 @@ static const char *const UserAuthName[] =
 	"cert",
 	"radius",
 	"custom",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 
@@ -1400,6 +1401,8 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else if (strcmp(token->string, "custom") == 0)
 		parsedline->auth_method = uaCustom;
 	else
@@ -1728,8 +1731,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
+			hbaline->auth_method != uaOAuth &&
 			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, oauth, and cert"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2113,6 +2117,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else if (strcmp(name, "provider") == 0)
 	{
 		REQUIRE_AUTH_OPTION(uaCustom, "provider", "custom");
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 1e3650184b..791c7c83df 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -58,6 +58,7 @@
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "miscadmin.h"
 #include "optimizer/cost.h"
 #include "optimizer/geqo.h"
@@ -4662,6 +4663,17 @@ static struct config_string ConfigureNamesString[] =
 		check_backtrace_functions, assign_backtrace_functions, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index c5aef6994c..d46c2108eb 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,8 +39,9 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaCustom,
-	uaPeer
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaPeer,
+	uaOAuth
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -121,6 +122,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 	char	   *custom_provider;
 } HbaLine;
 
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..870e426af1
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif /* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 71cc0dc251..3d481cc807 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
-- 
2.25.1
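As a side note for reviewers, the server-side token sanity check in validate() above can be mirrored in a few lines of Python. This is only an illustrative sketch of the same strspn()-based rule from RFC 6750 Sec. 2.1, not part of the patch:

```python
import string

# Character set from RFC 6750 Sec. 2.1's b64token rule, matching the
# b64_set used by the server-side check in the patch above.
B64_SET = set(string.ascii_letters + string.digits + "-._~+/")


def token_format_ok(token: str) -> bool:
    """Mirror the patch's check: body characters, then optional '=' padding."""
    if not token:
        return False  # tokens must not be empty
    span = 0
    while span < len(token) and token[span] in B64_SET:
        span += 1  # like strspn(token, b64_set)
    while span < len(token) and token[span] == "=":
        span += 1  # tokens may end with any number of '=' characters
    return span == len(token)
```

Anything left over after the body and the trailing padding is rejected before the token ever reaches the validator command.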

Attachment: v3-0007-Add-a-very-simple-authn_id-extension.patch (text/x-patch)
From 667fbd709f67232155cbaa3e09d1a4b4c02eeb22 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 18 May 2021 15:01:29 -0700
Subject: [PATCH v3 7/9] Add a very simple authn_id extension

...for retrieving the authn_id from the server in tests.
---
 contrib/authn_id/Makefile          | 19 +++++++++++++++++++
 contrib/authn_id/authn_id--1.0.sql |  8 ++++++++
 contrib/authn_id/authn_id.c        | 28 ++++++++++++++++++++++++++++
 contrib/authn_id/authn_id.control  |  5 +++++
 4 files changed, 60 insertions(+)
 create mode 100644 contrib/authn_id/Makefile
 create mode 100644 contrib/authn_id/authn_id--1.0.sql
 create mode 100644 contrib/authn_id/authn_id.c
 create mode 100644 contrib/authn_id/authn_id.control

diff --git a/contrib/authn_id/Makefile b/contrib/authn_id/Makefile
new file mode 100644
index 0000000000..46026358e0
--- /dev/null
+++ b/contrib/authn_id/Makefile
@@ -0,0 +1,19 @@
+# contrib/authn_id/Makefile
+
+MODULE_big = authn_id
+OBJS = authn_id.o
+
+EXTENSION = authn_id
+DATA = authn_id--1.0.sql
+PGFILEDESC = "authn_id - information about the authenticated user"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/authn_id
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/authn_id/authn_id--1.0.sql b/contrib/authn_id/authn_id--1.0.sql
new file mode 100644
index 0000000000..af2a4d3991
--- /dev/null
+++ b/contrib/authn_id/authn_id--1.0.sql
@@ -0,0 +1,8 @@
+/* contrib/authn_id/authn_id--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION authn_id" to load this file. \quit
+
+CREATE FUNCTION authn_id() RETURNS text
+AS 'MODULE_PATHNAME', 'authn_id'
+LANGUAGE C IMMUTABLE;
diff --git a/contrib/authn_id/authn_id.c b/contrib/authn_id/authn_id.c
new file mode 100644
index 0000000000..0fecac36a8
--- /dev/null
+++ b/contrib/authn_id/authn_id.c
@@ -0,0 +1,28 @@
+/*
+ * Extension to expose the current user's authn_id.
+ *
+ * contrib/authn_id/authn_id.c
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/libpq-be.h"
+#include "miscadmin.h"
+#include "utils/builtins.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(authn_id);
+
+/*
+ * Returns the current user's authenticated identity.
+ */
+Datum
+authn_id(PG_FUNCTION_ARGS)
+{
+	if (!MyProcPort->authn_id)
+		PG_RETURN_NULL();
+
+	PG_RETURN_TEXT_P(cstring_to_text(MyProcPort->authn_id));
+}
diff --git a/contrib/authn_id/authn_id.control b/contrib/authn_id/authn_id.control
new file mode 100644
index 0000000000..e0f9e06bed
--- /dev/null
+++ b/contrib/authn_id/authn_id.control
@@ -0,0 +1,5 @@
+# authn_id extension
+comment = 'current user identity'
+default_version = '1.0'
+module_pathname = '$libdir/authn_id'
+relocatable = true
-- 
2.25.1
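The validator scripts added by the next patch follow the contract implemented by run_validator_command() in patch 0006: the server substitutes the read end of a pipe for %f, writes the bearer token into it, and then reads a newline-terminated authn_id from the command's stdout, treating a nonzero exit status as rejection. A hypothetical minimal validator might look like this; the hard-coded token value and identity are placeholders for illustration only:

```python
#!/usr/bin/env python3
# Hypothetical minimal oauth_validator_command script, e.g. configured as
#   oauth_validator_command = '/path/to/validator.py %f'
import os
import sys


def validate(token_fd: int) -> str:
    """Read the bearer token from the server's pipe and return an authn_id.

    Raises if the token is not acceptable. The token and identity here are
    placeholders; a real validator would introspect the token with its issuer.
    """
    with os.fdopen(token_fd, "r") as pipe:
        token = pipe.read()  # the server closes its end after writing
    if token != "valid-token":
        raise ValueError("unrecognized bearer token")
    return "alice@example.org"


if __name__ == "__main__" and len(sys.argv) > 1:
    try:
        identity = validate(int(sys.argv[1]))
    except Exception:
        sys.exit(1)  # nonzero exit rejects the connection
    print(identity)  # the server reads this line as the authn_id
```

Per the XXX in run_validator_command(), the script reads the entire token off the pipe before writing anything to stdout, which avoids the documented deadlock hazard.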

Attachment: v3-0008-Add-pytest-suite-for-OAuth.patch (text/x-patch)
From 1c24881d4b1e8777ce176d2c276fe8120bd6e648 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v3 8/9] Add pytest suite for OAuth

Requires Python 3; on the first run of `make installcheck` the
dependencies will be installed into ./venv for you. See the README for
more details.
---
 src/test/python/.gitignore                 |    2 +
 src/test/python/Makefile                   |   38 +
 src/test/python/README                     |   54 ++
 src/test/python/client/__init__.py         |    0
 src/test/python/client/conftest.py         |  126 +++
 src/test/python/client/test_client.py      |  180 ++++
 src/test/python/client/test_oauth.py       |  936 ++++++++++++++++++
 src/test/python/pq3.py                     |  727 ++++++++++++++
 src/test/python/pytest.ini                 |    4 +
 src/test/python/requirements.txt           |    7 +
 src/test/python/server/__init__.py         |    0
 src/test/python/server/conftest.py         |   45 +
 src/test/python/server/test_oauth.py       | 1012 ++++++++++++++++++++
 src/test/python/server/test_server.py      |   21 +
 src/test/python/server/validate_bearer.py  |  101 ++
 src/test/python/server/validate_reflect.py |   34 +
 src/test/python/test_internals.py          |  138 +++
 src/test/python/test_pq3.py                |  558 +++++++++++
 src/test/python/tls.py                     |  195 ++++
 19 files changed, 4178 insertions(+)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100755 src/test/python/server/validate_bearer.py
 create mode 100755 src/test/python/server/validate_reflect.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py

diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..0bda582c4b
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,54 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts PATH et al. back the way they were before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..f38da7a138
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,126 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+    client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq sends an empty SCRAM username; the role comes from the startup packet
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..a754a9c0b6
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,936 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import http.server
+import json
+import secrets
+import sys
+import threading
+import time
+import urllib.parse
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
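The value parsed here comes from a message the client frames per RFC 7628, Sec. 3.1: a GS2 header ("n,," -- no channel binding, no authzid), then ^A-separated key/value pairs, terminated by a double ^A. A sketch of the framing (`make_initial_response` is a hypothetical helper, not part of this patch):

```python
def make_initial_response(token: str) -> bytes:
    # OAUTHBEARER initial client response (RFC 7628, Sec. 3.1):
    #   gs2-header ^A auth=Bearer <token> ^A ^A
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"


# Splitting on ^A mirrors get_auth_value() above: four parts, the last
# two empty.
resp = make_initial_response("t0ken")
assert resp.split(b"\x01") == [b"n,,", b"auth=Bearer t0ken", b"", b""]
```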
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            self.server.serve_forever()
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+            self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _discovery_handler(self, headers, params):
+            oauth = self.server.oauth
+
+            doc = {
+                "issuer": oauth.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+            }
+
+            for name, path in oauth.endpoint_paths.items():
+                doc[name] = oauth.issuer + path
+
+            return 200, doc
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            code, resp = handler(self.headers, params)
+
+            self.send_response(code)
+            self.send_header("Content-Type", "application/json")
+            self.end_headers()
+
+            resp = json.dumps(resp)
+            resp = resp.encode("utf-8")
+            self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            if self.path == "/.well-known/openid-configuration":
+                self._handle(handler=self._discovery_handler)
+                return
+
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_with_explicit_issuer(
+    capfd, accept, openid_provider, retries, scope, secret
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response until the configured number of
+            # retries has been consumed.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user with the expected
+        # authorization URL and user code.
+        expected = f"Visit {verification_url} and enter the code: {user_code}"
+        _, stderr = capfd.readouterr()
+        assert expected in stderr
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
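The interval bookkeeping this test verifies corresponds to the polling loop RFC 8628, Sec. 3.5 requires of device-flow clients: wait at least `interval` seconds between token requests, add five seconds on `slow_down`, and keep polling on `authorization_pending`. A minimal sketch (`poll_for_token` and its `request_token` callable are hypothetical stand-ins, not the libpq implementation):

```python
import time


def poll_for_token(request_token, interval, max_attempts=10, sleep=time.sleep):
    """
    Minimal RFC 8628 (Sec. 3.5) polling loop. request_token is a callable
    returning an (http_status, json_body) pair from the token endpoint.
    """
    for _ in range(max_attempts):
        status, body = request_token()
        if status == 200:
            return body["access_token"]

        error = body.get("error")
        if error == "slow_down":
            # The server requires the client to add five seconds to its
            # polling interval before trying again.
            interval += 5
        elif error != "authorization_pending":
            raise RuntimeError(f"token request failed: {error}")

        sleep(interval)

    raise TimeoutError("device authorization did not complete in time")
```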
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "invalid_client",
+                "error_description": "client authentication failed",
+            },
+            r"client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            {"error": "invalid_request"},
+            r"\(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            {},
+            r"failed to obtain device authorization",
+            id="broken error response",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "expired_token",
+                "error_description": "the device code has expired",
+            },
+            r"the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            {"error": "access_denied"},
+            r"\(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            {},
+            r"OAuth token retrieval failed",
+            id="broken error response",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
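For context (not part of the patch): the failure flow these tests exercise comes from RFC 7628, where a failing server sends a JSON "error" challenge and the client must answer with a single 0x01 byte before the exchange terminates. A minimal sketch, with illustrative helper names:

```python
import json

# RFC 7628 failure flow: the server's error challenge is a JSON object
# (here carrying only "status"); the client's only legal reply is a
# single kvsep (0x01) byte. Helper names are illustrative.
def make_error_challenge(status: str) -> bytes:
    return json.dumps({"status": status}).encode("utf-8")

def dummy_client_response() -> bytes:
    return b"\x01"

challenge = make_error_challenge("invalid_token")
assert json.loads(challenge)["status"] == "invalid_token"
assert dummy_client_response() == b"\x01"
```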
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..3a22dad0b6
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,727 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
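As a standalone illustration of the version arithmetic (not part of the patch): the major version occupies the high 16 bits and the minor the low 16, which is also how the special SSLRequest code used later in this file is formed.

```python
# Wire encoding of a protocol version: major in the high 16 bits,
# minor in the low 16 bits.
def protocol(major, minor):
    return (major << 16) | minor

assert protocol(3, 0) == 196608       # the v3 startup version
assert protocol(1234, 5679) == 80877103  # the SSLRequest magic number
```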
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
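A quick standalone demonstration (not part of the patch) of the translation table built above: every unprintable or non-ASCII byte becomes `.` in the hexdump's ASCII column.

```python
# Rebuild the same translation table with a comprehension and apply it:
# bytes >127 or unprintable ASCII all map to '.'.
unprintable = bytes(i for i in range(256) if i > 127 or not chr(i).isprintable())
table = bytes.maketrans(unprintable, b"." * len(unprintable))

assert b"SELECT 1\x00\xff".translate(table) == b"SELECT 1.."
```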
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(16)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += 16
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. type is the pq3.types member
+    that should be assigned to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request TLS by sending an SSLRequest packet (protocol 1234,5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
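For readers unfamiliar with the framing that the `Pq3` Construct above implements, a hand-rolled sketch using only the stdlib (helper names are illustrative, not from the patch): a v3 packet is a one-byte type, a four-byte big-endian length that counts itself plus the payload, and then the payload.

```python
import struct

# Hand-encoded pq3 framing, mirroring the Pq3 struct: the length field
# includes its own 4 bytes but not the type byte.
def build_pq3(msg_type: bytes, payload: bytes) -> bytes:
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

def parse_pq3(data: bytes):
    msg_type = data[:1]
    (length,) = struct.unpack("!I", data[1:5])
    payload = data[5 : 1 + length]
    return msg_type, payload

pkt = build_pq3(b"Q", b"SELECT 1\x00")  # a Query message
assert parse_pq3(pkt) == (b"Q", b"SELECT 1\x00")
```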
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..32f105ea84
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,7 @@
+black
+cryptography~=3.4.6
+construct~=2.10.61
+isort~=5.6
+psycopg2~=2.8.6
+pytest~=6.1
+pytest-asyncio~=0.14.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..ba7342a453
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,45 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+
+import pytest
+
+import pq3
+
+
+@pytest.fixture
+def connect():
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. The calling test will be
+    skipped automatically if a server is not running at PGHOST:PGPORT, so it's
+    best to connect as soon as possible after the test case begins, to avoid
+    doing unnecessary work.
+    """
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            addr = (pq3.pghost(), pq3.pgport())
+
+            try:
+                sock = socket.create_connection(addr, timeout=2)
+            except ConnectionError as e:
+                pytest.skip(f"unable to connect to {addr}: {e}")
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..cb5ca7fa23
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1012 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_TOKEN_SIZE = 4096
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def skip_if_no_postgres():
+    """
+    Used by the oauth_ctx fixture to skip this test module if no Postgres server
+    is running.
+
+    This logic is nearly a duplicate of the connect fixture's. Ideally oauth_ctx
+    would depend on that, but a module-scope fixture can't depend on a
+    test-scope fixture, and we haven't reached the rule of three yet.
+    """
+    addr = (pq3.pghost(), pq3.pgport())
+
+    try:
+        with socket.create_connection(addr, timeout=2):
+            pass
+    except ConnectionError as e:
+        pytest.skip(f"unable to connect to {addr}: {e}")
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx():
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    skip_if_no_postgres()  # don't bother running these tests without a server
+
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = (
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    )
+    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
+
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Make this test script the server's oauth_validator.
+        path = pathlib.Path(__file__).parent / "validate_bearer.py"
+        path = str(path.absolute())
+
+        cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def authn_id_extension(oauth_ctx):
+    """
+    Performs a `CREATE EXTENSION authn_id` in the test database. This fixture is
+    autoused, so tests don't need to rely on it.
+    """
+    conn = psycopg2.connect(database=oauth_ctx.dbname)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        c.execute("CREATE EXTENSION authn_id;")
+
+
+@pytest.fixture(scope="session")
+def shared_mem():
+    """
+    Yields a shared memory segment that can be used for communication between
+    the bearer_token fixture and ./validate_bearer.py.
+    """
+    size = MAX_TOKEN_SIZE + 2  # two byte length prefix
+    mem = shared_memory.SharedMemory(SHARED_MEM_NAME, create=True, size=size)
+
+    try:
+        with contextlib.closing(mem):
+            yield mem
+    finally:
+        mem.unlink()
+
+
+@pytest.fixture()
+def bearer_token(shared_mem):
+    """
+    Returns a factory function that, when called, will store a Bearer token in
+    shared_mem. If token is None (the default), a new token will be generated
+    using secrets.token_urlsafe() and returned; otherwise the passed token will
+    be used as-is.
+
+    When token is None, the generated token size in bytes may be specified as an
+    argument; if unset, a small 16-byte token will be generated. The token size
+    may not exceed MAX_TOKEN_SIZE in any case.
+
+    The return value is the token, converted to a bytes object.
+
+    As a special case for testing failure modes, accept_any may be set to True.
+    This signals to the validator command that any bearer token should be
+    accepted. The returned token in this case may be used or discarded as needed
+    by the test.
+    """
+
+    def set_token(token=None, *, size=16, accept_any=False):
+        if token is not None:
+            size = len(token)
+
+        if size > MAX_TOKEN_SIZE:
+            raise ValueError(f"token size {size} exceeds maximum size {MAX_TOKEN_SIZE}")
+
+        if token is None:
+            if size % 4:
+                raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+            token = secrets.token_urlsafe(size // 4 * 3)
+            assert len(token) == size
+
+        try:
+            token = token.encode("ascii")
+        except AttributeError:
+            pass  # already encoded
+
+        if accept_any:
+            # Two-byte magic value.
+            shared_mem.buf[:2] = struct.pack("H", MAX_UINT16)
+        else:
+            # Two-byte length prefix, then the token data.
+            shared_mem.buf[:2] = struct.pack("H", len(token))
+            shared_mem.buf[2 : size + 2] = token
+
+        return token
+
+    return set_token
+
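The size arithmetic in the fixture above relies on a property of `secrets.token_urlsafe()` worth spelling out (standalone sketch, not part of the patch): encoding `n` random bytes as unpadded base64url yields `ceil(4n/3)` characters, so requesting `size // 4 * 3` bytes produces exactly `size` characters whenever `size` is a multiple of 4.

```python
import secrets

# token_urlsafe(n) base64url-encodes n bytes without padding; for
# n = 3k that is exactly 4k characters. Hence size // 4 * 3 input
# bytes -> size output characters when size % 4 == 0.
for size in (16, 32, 4096):
    token = secrets.token_urlsafe(size // 4 * 3)
    assert len(token) == size
```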
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. As an alternative, the initial response's auth field may be
+    specified explicitly to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(conn, oauth_ctx, bearer_token, auth_prefix, token_len):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    auth = auth_prefix + token
+
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(conn, oauth_ctx, bearer_token, token_value):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=bearer_token(token_value))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(conn, oauth_ctx, bearer_token, user, authn_id, should_succeed):
+    token = None
+
+    authn_id = authn_id(oauth_ctx)
+    if authn_id is not None:
+        authn_id = authn_id.encode("ascii")
+
+        # As a hack to get the validator to reflect arbitrary output from this
+        # test, encode the desired output as a base64 token. The validator will
+        # key on the leading "output=" to differentiate this from the random
+        # tokens generated by secrets.token_urlsafe().
+        output = b"output=" + authn_id + b"\n"
+        token = base64.urlsafe_b64encode(output)
+
+    token = bearer_token(token)
+    username = user(oauth_ctx)
+
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token)
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [authn_id]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx, bearer_token):
+    # Generate a new bearer token, which we will proceed not to use.
+    _ = bearer_token()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer me@example.com",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(conn, oauth_ctx, bearer_token, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    _ = bearer_token(accept_any=True)
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + bearer_token() + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+@pytest.fixture()
+def set_validator():
+    """
+    A per-test fixture that allows a test to override the setting of
+    oauth_validator_command for the cluster. The setting will be reverted during
+    teardown.
+
+    Passing None will perform an ALTER SYSTEM RESET.
+    """
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Save the previous value.
+        c.execute("SHOW oauth_validator_command;")
+        prev_cmd = c.fetchone()[0]
+
+        def setter(cmd):
+            if cmd is None:
+                c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+            else:
+                c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous value.
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_oauth_no_validator(oauth_ctx, set_validator, connect, bearer_token):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+def test_oauth_validator_role(oauth_ctx, set_validator, connect):
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    # Log in. Note that the reflection validator ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=oauth_ctx.user)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = oauth_ctx.user.encode("utf-8")
+    assert row.columns == [expected]
+
+
+def test_oauth_role_with_shell_unsafe_characters(oauth_ctx, set_validator, connect):
+    """
+    XXX This test pins undesirable behavior. We should be able to handle any
+    valid Postgres role name.
+    """
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    unsafe_username = "hello'there"
+    begin_oauth_handshake(conn, oauth_ctx, user=unsafe_username)
+
+    # The server should reject the handshake.
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_failure(conn, oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/server/validate_bearer.py b/src/test/python/server/validate_bearer.py
new file mode 100755
index 0000000000..2cc73ff154
--- /dev/null
+++ b/src/test/python/server/validate_bearer.py
@@ -0,0 +1,101 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It doesn't actually validate
+# anything, and it logs the bearer token data, which is sensitive.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. The expected token is handed to it via a shared memory
+# segment managed by that test module's bearer_token() fixture.
+#
+# This script must run under the Postgres server environment; keep the
+# dependency list fairly standard.
+
+import base64
+import binascii
+import contextlib
+import struct
+import sys
+from multiprocessing import shared_memory
+
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def remove_shm_from_resource_tracker():
+    """
+    Monkey-patch multiprocessing.resource_tracker so SharedMemory won't be
+    tracked. Pulled from this thread, where there are more details:
+
+        https://bugs.python.org/issue38119
+
+    TL;DR: all clients of shared memory segments automatically destroy them on
+    process exit, which makes shared memory segments much less useful. This
+    monkeypatch removes that behavior so that we can defer to the test to manage
+    the segment lifetime.
+
+    Ideally a future Python patch will pull in this fix and then the entire
+    function can go away.
+    """
+    from multiprocessing import resource_tracker
+
+    def fix_register(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.register(name, rtype)
+
+    resource_tracker.register = fix_register
+
+    def fix_unregister(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.unregister(name, rtype)
+
+    resource_tracker.unregister = fix_unregister
+
+    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
+        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
+
+
+def main(args):
+    remove_shm_from_resource_tracker()  # XXX remove some day
+
+    # Get the expected token from the currently running test.
+    shared_mem_name = args[0]
+
+    mem = shared_memory.SharedMemory(shared_mem_name)
+    with contextlib.closing(mem):
+        # First two bytes are the token length.
+        size = struct.unpack("H", mem.buf[:2])[0]
+
+        if size == MAX_UINT16:
+            # Special case: the test wants us to accept any token.
+            sys.stderr.write("accepting token without validation\n")
+            return
+
+        # The remainder of the buffer contains the expected token.
+        assert size <= (mem.size - 2)
+        expected_token = mem.buf[2 : size + 2].tobytes()
+
+        mem.buf[:] = b"\0" * mem.size  # scribble over the token
+
+    token = sys.stdin.buffer.read()
+    if token != expected_token:
+        sys.exit(f"failed to match Bearer token ({token!r} != {expected_token!r})")
+
+    # See if the test wants us to print anything. If so, it will have encoded
+    # the desired output in the token with an "output=" prefix.
+    try:
+        # altchars="-_" corresponds to the urlsafe alphabet.
+        data = base64.b64decode(token, altchars="-_", validate=True)
+
+        if data.startswith(b"output="):
+            sys.stdout.buffer.write(data[7:])
+
+    except binascii.Error:
+        pass
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/src/test/python/server/validate_reflect.py b/src/test/python/server/validate_reflect.py
new file mode 100755
index 0000000000..24c3a7e715
--- /dev/null
+++ b/src/test/python/server/validate_reflect.py
@@ -0,0 +1,34 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It ignores the bearer token
+# entirely and automatically logs the user in.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. It expects the user's desired role name as an argument; the
+# actual token will be discarded and the user will be logged in with the role
+# name as the authenticated identity.
+#
+# This script must run under the Postgres server environment; keep the
+# dependency list fairly standard.
+
+import sys
+
+
+def main(args):
+    # We have to read the entire token as our first action to unblock the
+    # server, but we won't actually use it.
+    _ = sys.stdin.buffer.read()
+
+    if len(args) != 1:
+        sys.exit("usage: ./validate_reflect.py ROLE")
+
+    # Log the user in as the provided role.
+    role = args[0]
+    print(role)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..e0c0e0568d
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,558 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        ("PGUSER", pq3.pguser, getpass.getuser()),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
-- 
2.25.1
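As an aside for reviewers reading the test_pq3.py vectors above: the startup packets follow the v3 wire framing (a self-inclusive int32 length, the int32 protocol version 0x00030000, NUL-terminated key/value parameters, and a trailing NUL). A minimal stdlib-only sketch of that framing, independent of the pq3 module, reproduces the "auto-serialization of dict parameters" vector from the tests (`build_startup` here is illustrative, not part of pq3):

```python
import struct

def build_startup(params):
    # v3 startup message: int32 length (counting itself), int32 version
    # 0x00030000, then NUL-terminated key/value pairs and a final NUL.
    body = b""
    for key, value in params.items():
        body += key.encode("ascii") + b"\x00" + value.encode("ascii") + b"\x00"
    body += b"\x00"
    return struct.pack("!ii", 8 + len(body), 0x00030000) + body

pkt = build_startup({"user": "jsmith", "database": "postgres"})
assert pkt == (
    b"\x00\x00\x00\x27\x00\x03\x00\x00"
    b"user\x00jsmith\x00database\x00postgres\x00\x00"
)
```

The length field is why the test cases can leave `len` implied: it is always derivable as 8 plus the serialized parameter list.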

Attachment: v3-0009-contrib-oauth-switch-to-pluggable-auth-API.patch (text/x-patch)
From d275b329aaed6c6ef2403e4c313725a1ae88fa40 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Mar 2022 08:48:47 -0800
Subject: [PATCH v3 9/9] contrib/oauth: switch to pluggable auth API

Move the core server implementation to contrib/oauth as a pluggable
provider, using the RegisterAuthProvider() API. oauth_validator_command
has been moved from core to a custom GUC. Tests have been updated to
handle the new implementations.

Some server modifications remain:
- Adding new HBA options for custom providers
- Registering support for usermaps
- Allowing custom SASL mechanisms to declare their own maximum message
  length

This patch is optional; you can apply/revert it to compare the two
approaches.
---
 contrib/oauth/Makefile                        | 16 +++++++
 .../auth-oauth.c => contrib/oauth/oauth.c     | 47 +++++++++++++++----
 src/backend/libpq/Makefile                    |  1 -
 src/backend/libpq/auth.c                      |  7 ---
 src/backend/libpq/hba.c                       | 21 +++++----
 src/backend/utils/misc/guc.c                  | 12 -----
 src/include/libpq/hba.h                       |  3 +-
 src/include/libpq/oauth.h                     | 24 ----------
 src/test/python/server/test_oauth.py          | 20 ++++----
 9 files changed, 78 insertions(+), 73 deletions(-)
 create mode 100644 contrib/oauth/Makefile
 rename src/backend/libpq/auth-oauth.c => contrib/oauth/oauth.c (95%)
 delete mode 100644 src/include/libpq/oauth.h

diff --git a/contrib/oauth/Makefile b/contrib/oauth/Makefile
new file mode 100644
index 0000000000..880bc1fef3
--- /dev/null
+++ b/contrib/oauth/Makefile
@@ -0,0 +1,16 @@
+# contrib/oauth/Makefile
+
+MODULE_big = oauth
+OBJS = oauth.o
+PGFILEDESC = "oauth - auth provider supporting OAuth 2.0/OIDC"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/oauth
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/backend/libpq/auth-oauth.c b/contrib/oauth/oauth.c
similarity index 95%
rename from src/backend/libpq/auth-oauth.c
rename to contrib/oauth/oauth.c
index c1232a31a0..3a6dab19d9 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/contrib/oauth/oauth.c
@@ -1,33 +1,39 @@
-/*-------------------------------------------------------------------------
+/* -------------------------------------------------------------------------
  *
- * auth-oauth.c
+ * oauth.c
  *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
  *
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * src/backend/libpq/auth-oauth.c
+ * contrib/oauth/oauth.c
  *
- *-------------------------------------------------------------------------
+ * -------------------------------------------------------------------------
  */
+
 #include "postgres.h"
 
 #include <unistd.h>
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
-#include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "utils/guc.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
 
 /* GUC */
-char *oauth_validator_command;
+static char *oauth_validator_command;
 
 static void  oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
@@ -35,7 +41,7 @@ static int   oauth_exchange(void *opaq, const char *input, int inputlen,
 							char **output, int *outputlen, const char **logdetail);
 
 /* Mechanism declaration */
-const pg_be_sasl_mech pg_be_oauth_mech = {
+static const pg_be_sasl_mech oauth_mech = {
 	oauth_get_mechanisms,
 	oauth_init,
 	oauth_exchange,
@@ -795,3 +801,28 @@ username_ok_for_shell(const char *username)
 
 	return true;
 }
+
+static int CheckOAuth(Port *port)
+{
+	return CheckSASLAuth(&oauth_mech, port, NULL, NULL);
+}
+
+static const char *OAuthError(Port *port)
+{
+	return psprintf("OAuth bearer authentication failed for user \"%s\"",
+					port->user_name);
+}
+
+void
+_PG_init(void)
+{
+	RegisterAuthProvider("oauth", CheckOAuth, OAuthError);
+
+	DefineCustomStringVariable("oauth.validator_command",
+							   gettext_noop("Command to validate OAuth v2 bearer tokens."),
+							   NULL,
+							   &oauth_validator_command,
+							   "",
+							   PGC_SIGHUP, GUC_SUPERUSER_ONLY,
+							   NULL, NULL, NULL);
+}
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 98eb2a8242..6d385fd6a4 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,7 +15,6 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
-	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 5c30904e2b..3533b0bc50 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -30,7 +30,6 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
-#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -303,9 +302,6 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
-		case uaOAuth:
-			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
-			break;
 		case uaCustom:
 			if (CustomAuthenticationError_hook)
 				errstr = CustomAuthenticationError_hook(port);
@@ -631,9 +627,6 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
-		case uaOAuth:
-			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
-			break;
 		case uaCustom:
 			if (CustomAuthenticationCheck_hook)
 				status = CustomAuthenticationCheck_hook(port);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index f7f3059927..fb51c53cc0 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -136,7 +136,6 @@ static const char *const UserAuthName[] =
 	"radius",
 	"custom",
 	"peer",
-	"oauth",
 };
 
 
@@ -1401,8 +1400,6 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
-	else if (strcmp(token->string, "oauth") == 0)
-		parsedline->auth_method = uaOAuth;
 	else if (strcmp(token->string, "custom") == 0)
 		parsedline->auth_method = uaCustom;
 	else
@@ -1731,9 +1728,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaOAuth &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, oauth, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaCustom)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and custom"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2119,19 +2116,25 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 	}
 	else if (strcmp(name, "issuer") == 0)
 	{
-		if (hbaline->auth_method != uaOAuth)
+		if (hbaline->auth_method != uaCustom
+			&& (custom_provider_name != NULL
+				&& strcmp(custom_provider_name, "oauth")))
 			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
 		hbaline->oauth_issuer = pstrdup(val);
 	}
 	else if (strcmp(name, "scope") == 0)
 	{
-		if (hbaline->auth_method != uaOAuth)
+		if (hbaline->auth_method != uaCustom
+			&& (custom_provider_name != NULL
+				&& strcmp(custom_provider_name, "oauth")))
 			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
 		hbaline->oauth_scope = pstrdup(val);
 	}
 	else if (strcmp(name, "trust_validator_authz") == 0)
 	{
-		if (hbaline->auth_method != uaOAuth)
+		if (hbaline->auth_method != uaCustom
+			&& (custom_provider_name != NULL
+				&& strcmp(custom_provider_name, "oauth")))
 			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
 		if (strcmp(val, "1") == 0)
 			hbaline->oauth_skip_usermap = true;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 791c7c83df..1e3650184b 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -58,7 +58,6 @@
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
-#include "libpq/oauth.h"
 #include "miscadmin.h"
 #include "optimizer/cost.h"
 #include "optimizer/geqo.h"
@@ -4663,17 +4662,6 @@ static struct config_string ConfigureNamesString[] =
 		check_backtrace_functions, assign_backtrace_functions, NULL
 	},
 
-	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
-			NULL,
-			GUC_SUPERUSER_ONLY
-		},
-		&oauth_validator_command,
-		"",
-		NULL, NULL, NULL
-	},
-
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index d46c2108eb..0c6a7dd823 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -40,8 +40,7 @@ typedef enum UserAuth
 	uaRADIUS,
 	uaCustom,
 	uaPeer,
-	uaOAuth
-#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
+#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
 } UserAuth;
 
 /*
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
deleted file mode 100644
index 870e426af1..0000000000
--- a/src/include/libpq/oauth.h
+++ /dev/null
@@ -1,24 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * oauth.h
- *	  Interface to libpq/auth-oauth.c
- *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
- * Portions Copyright (c) 1994, Regents of the University of California
- *
- * src/include/libpq/oauth.h
- *
- *-------------------------------------------------------------------------
- */
-#ifndef PG_OAUTH_H
-#define PG_OAUTH_H
-
-#include "libpq/libpq-be.h"
-#include "libpq/sasl.h"
-
-extern char *oauth_validator_command;
-
-/* Implementation */
-extern const pg_be_sasl_mech pg_be_oauth_mech;
-
-#endif /* PG_OAUTH_H */
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
index cb5ca7fa23..07fc25edc2 100644
--- a/src/test/python/server/test_oauth.py
+++ b/src/test/python/server/test_oauth.py
@@ -103,9 +103,9 @@ def oauth_ctx():
 
     ctx = Context()
     hba_lines = (
-        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
-        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
-        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+        f'host {ctx.dbname} {ctx.map_user}   samehost custom provider=oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost custom provider=oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost custom provider=oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
     )
     ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
 
@@ -126,12 +126,12 @@ def oauth_ctx():
         c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
         c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
 
-        # Make this test script the server's oauth_validator.
+        # Make this test script the server's oauth validator.
         path = pathlib.Path(__file__).parent / "validate_bearer.py"
         path = str(path.absolute())
 
         cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
-        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+        c.execute("ALTER SYSTEM SET oauth.validator_command TO %s;", (cmd,))
 
         # Replace pg_hba and pg_ident.
         c.execute("SHOW hba_file;")
@@ -149,7 +149,7 @@ def oauth_ctx():
         # Put things back the way they were.
         c.execute("SELECT pg_reload_conf();")
 
-        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+        c.execute("ALTER SYSTEM RESET oauth.validator_command;")
         c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
         c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
         c.execute(sql.SQL("DROP ROLE {};").format(map_user))
@@ -930,7 +930,7 @@ def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
 def set_validator():
     """
     A per-test fixture that allows a test to override the setting of
-    oauth_validator_command for the cluster. The setting will be reverted during
+    oauth.validator_command for the cluster. The setting will be reverted during
     teardown.
 
     Passing None will perform an ALTER SYSTEM RESET.
@@ -942,17 +942,17 @@ def set_validator():
         c = conn.cursor()
 
         # Save the previous value.
-        c.execute("SHOW oauth_validator_command;")
+        c.execute("SHOW oauth.validator_command;")
         prev_cmd = c.fetchone()[0]
 
         def setter(cmd):
-            c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+            c.execute("ALTER SYSTEM SET oauth.validator_command TO %s;", (cmd,))
             c.execute("SELECT pg_reload_conf();")
 
         yield setter
 
         # Restore the previous value.
-        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
+        c.execute("ALTER SYSTEM SET oauth.validator_command TO %s;", (prev_cmd,))
         c.execute("SELECT pg_reload_conf();")
 
 
-- 
2.25.1

#15 samay sharma
smilingsamay@gmail.com
In reply to: Jacob Champion (#14)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Jacob,

Thank you for porting this on top of the pluggable auth methods API. I've
addressed the feedback around other backend changes in my latest patch, but
the client side changes still remain. I had a few questions to understand
them better.

(a) What specifically do the client side changes in the patch implement?
(b) Are the changes you made on the client side specific to OAuth, or are
they about making SASL more generic? As an additional question, if someone
wanted to implement something similar on top of your patch, would they
still have to make client-side changes?

Regards,
Samay

On Fri, Mar 4, 2022 at 11:13 AM Jacob Champion <pchampion@vmware.com> wrote:


Hi all,

v3 rebases this patchset over the top of Samay's pluggable auth
provider API [1], included here as patches 0001-3. The final patch in
the set ports the server implementation from a core feature to a
contrib module; to switch between the two approaches, simply leave out
that final patch.

There are still some backend changes that must be made to get this
working, as pointed out in 0009, and obviously libpq support still
requires code changes.

--Jacob

[1]
/messages/by-id/CAJxrbyxTRn5P8J-p+wHLwFahK5y56PhK28VOb55jqMO05Y-DJw@mail.gmail.com

#16 Jacob Champion
pchampion@vmware.com
In reply to: samay sharma (#15)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, 2022-03-22 at 14:48 -0700, samay sharma wrote:

Thank you for porting this on top of the pluggable auth methods API.
I've addressed the feedback around other backend changes in my latest
patch, but the client side changes still remain. I had a few
questions to understand them better.

(a) What specifically do the client side changes in the patch implement?

Hi Samay,

The client-side changes are an implementation of the OAuth 2.0 Device
Authorization Grant [1] in libpq. The majority of the OAuth logic is
handled by the third-party iddawc library.

The server tells the client what OIDC provider to contact, and then
libpq prompts you to log into that provider on your
smartphone/browser/etc. using a one-time code. After you give libpq
permission to act on your behalf, the Bearer token gets sent to libpq
via a direct connection, and libpq forwards it to the server so that
the server can determine whether you're allowed in.
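
The flow described above maps onto the two endpoints defined by RFC 8628. Here is a minimal sketch of that grant with the HTTP transport injected as a callable, so the logic can be followed (and exercised) without a live issuer. The endpoint paths and field names follow the RFC; everything else — the `post` callable, the issuer URL — is illustrative, not the patch's actual client code:

```python
import time

def device_flow(post, issuer, client_id, scope):
    """Run the RFC 8628 Device Authorization Grant against `issuer`.

    `post(url, data)` performs an HTTP POST and returns the decoded
    JSON response; injecting it keeps this sketch network-free.
    """
    # Step 1: ask the issuer for a device code and a one-time user code.
    resp = post(issuer + "/device_authorization",
                {"client_id": client_id, "scope": scope})
    print(f"Visit {resp['verification_uri']} and enter the code: "
          f"{resp['user_code']}")

    # Step 2: poll the token endpoint until the user approves (or the
    # issuer reports a terminal error).
    interval = resp.get("interval", 5)
    while True:
        token = post(issuer + "/token",
                     {"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
                      "device_code": resp["device_code"],
                      "client_id": client_id})
        if "access_token" in token:
            # This is the Bearer token that libpq forwards to the server.
            return token["access_token"]
        if token.get("error") != "authorization_pending":
            raise RuntimeError(token.get("error", "unknown error"))
        time.sleep(interval)
```

The "Visit ... and enter the code" prompt is the same interaction shown in the psql example at the top of the thread.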

(b) Are the changes you made on the client side specific to OAUTH or
are they about making SASL more generic?

The original patchset included changes to make SASL more generic. Many
of those changes have since been merged, and the remaining code is
mostly OAuth-specific, but there are still improvements to be made.
(And there's some JSON crud to sift through in the first couple of
patches. I'm still mad that the OAUTHBEARER spec requires clients to
parse JSON in the first place.)

As an additional question,
if someone wanted to implement something similar on top of your
patch, would they still have to make client side changes?

Any new SASL mechanisms require changes to libpq at this point. You
need to implement a new pg_sasl_mech, modify pg_SASL_init() to select
the mechanism correctly, and add whatever connection string options you
need, along with the associated state in pg_conn. Patch 0004 has all
the client-side magic for OAUTHBEARER.

--Jacob

[1]: https://datatracker.ietf.org/doc/html/rfc8628

#17 Jacob Champion
pchampion@vmware.com
In reply to: Jacob Champion (#14)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, 2022-03-04 at 19:13 +0000, Jacob Champion wrote:

v3 rebases this patchset over the top of Samay's pluggable auth
provider API [1], included here as patches 0001-3.

v4 rebases over the latest version of the pluggable auth patchset
(included as 0001-4). Note that there's a recent conflict as
of d4781d887; use an older commit as the base (or wait for the other
thread to be updated).

--Jacob

Attachments:

v4-0007-backend-add-OAUTHBEARER-SASL-mechanism.patch
From b3ceda62e9cc6cbbc24c63c05c5ce072ae771c1b Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v4 07/10] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On a success, the command may then exit with a zero success code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
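
Put together, a toy validator following the contract above might look like the sketch below. This is purely illustrative: the token allowlist is a placeholder for real signature verification or token introspection, and passing the descriptor number as argv[1] is just one way to use the %f specifier.

```python
import os
import sys

# Hypothetical token table; a real validator would verify a JWT
# signature or present the token to the issuer for introspection.
KNOWN_TOKENS = {"abcd1234": "alice@example.com"}

def validate(token):
    """Return the authenticated identity for `token`, or None."""
    return KNOWN_TOKENS.get(token)

if __name__ == "__main__" and len(sys.argv) > 1:
    # Step 1: read the bearer token FIRST, from the descriptor passed
    # via %f, before writing anything to stdout (see the deadlock
    # warning above).
    fd = int(sys.argv[1])
    with os.fdopen(fd) as f:
        token = f.read().strip()

    identity = validate(token)
    if identity is None:
        # Step 2: unvalidated token; stderr goes to the server log.
        print("token rejected", file=sys.stderr)
        sys.exit(1)

    # Step 3a: print the authenticated identity, then exit zero.
    print(identity)
    sys.exit(0)
```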

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
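
For instance, an HBA entry combining these options might read as follows (the database name and issuer are hypothetical):

    # TYPE  DATABASE  USER  ADDRESS   METHOD
    host    mydb      all   samehost  oauth issuer="https://accounts.google.com" scope="openid email" map=oauth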

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.
---
 src/backend/libpq/Makefile     |   1 +
 src/backend/libpq/auth-oauth.c | 797 +++++++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c  |  10 +-
 src/backend/libpq/auth-scram.c |   4 +-
 src/backend/libpq/auth.c       |   7 +
 src/backend/libpq/hba.c        |  29 +-
 src/backend/utils/misc/guc.c   |  12 +
 src/include/libpq/hba.h        |   8 +-
 src/include/libpq/oauth.h      |  24 +
 src/include/libpq/sasl.h       |  11 +
 10 files changed, 889 insertions(+), 14 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..c1232a31a0
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,797 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char *oauth_validator_command;
+
+static void  oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int   oauth_exchange(void *opaq, const char *input, int inputlen,
+							char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state	state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool unset_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char   *p;
+	char	cbind_flag;
+	char   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a 'y'
+	 * specifier purely for the remote chance that a future specification could
+	 * define one; then future clients can still interoperate with this server
+	 * implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y': /* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character %s.",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char   *pos = *input;
+	char   *auth = NULL;
+
+	/*
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char   *end;
+		char   *sep;
+		char   *key;
+		char   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per
+			 * Sec. 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL; /* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData	buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's not
+	 * really a way to hide this from the user, either, because we can't choose
+	 * a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+		"{ "
+			"\"status\": \"invalid_token\", "
+			"\"openid-configuration\": \"%s/.well-known/openid-configuration\","
+			"\"scope\": \"%s\" "
+		"}",
+		ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but it's
+	 * pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information about
+	 * the sensitive Bearer token back to the client; log at COMMERROR instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end with
+	 * any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the problematic
+		 * character(s), but that'd be a bit like printing a piece of someone's
+		 * password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator says
+		 * the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!port->authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name, port->authn_id,
+						false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = { 0 };
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*
+	 * Since popen() is unidirectional, open up a pipe for the other direction.
+	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
+	 * into child processes, which would prevent us from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open up the potential for process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe2(pipefd, O_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	/* Allow the read pipe to be passed to the child. */
+	if (!unset_cloexec(rfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+					/*
+					 * TODO: decide how this string should be escaped. The role
+					 * is controlled by the client, so if we don't escape it,
+					 * command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some other
+					 * way. For this proof of concept, just be incredibly strict
+					 * about the characters that are allowed in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "re");
+	if (fh == NULL)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not execute command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* Strip the trailing newline. TODO: fail if it's missing. */
+		if (len > 0 && line[len - 1] == '\n')
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+unset_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not unset FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+										"0123456789-_./:";
+	size_t	span;
+
+	Assert(username && username[0]); /* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index a1d7dbb6d5..0f461a6696 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index ee7f52218a..4049ace470 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -118,7 +118,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 4a8a63922a..17042d84ad 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -30,6 +30,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -298,6 +299,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		case uaCustom:
 			{
 				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
@@ -626,6 +630,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 		case uaCustom:
 			{
 				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 42cb1ce51d..cd3b1cc140 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -136,7 +136,8 @@ static const char *const UserAuthName[] =
 	"cert",
 	"radius",
 	"custom",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 
@@ -1401,6 +1402,8 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else if (strcmp(token->string, "custom") == 0)
 		parsedline->auth_method = uaCustom;
 	else
@@ -1730,8 +1733,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
 			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth &&
 			hbaline->auth_method != uaCustom)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert and custom"));
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, oauth, and custom"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2115,6 +2119,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else if (strcmp(name, "provider") == 0)
 	{
 		REQUIRE_AUTH_OPTION(uaCustom, "provider", "custom");
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index f70f7f5c01..9a5b2aa496 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -59,6 +59,7 @@
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
+#include "libpq/oauth.h"
 #include "miscadmin.h"
 #include "optimizer/cost.h"
 #include "optimizer/geqo.h"
@@ -4666,6 +4667,17 @@ static struct config_string ConfigureNamesString[] =
 		check_backtrace_functions, assign_backtrace_functions, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 31a00c4b71..e405103a2e 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,8 +39,9 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaCustom,
-	uaPeer
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaPeer,
+	uaOAuth
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -128,6 +129,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 	char	   *custom_provider;
 	List	   *custom_auth_options;
 } HbaLine;
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..870e426af1
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif /* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 39ccf8f0e3..f7d905591a 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
-- 
2.25.1
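For reference, the pieces in this patch combine into a configuration sketch like the following. The option names (`oauth`, `issuer`, `scope`, `trust_validator_authz`, `map`) and the GUC `oauth_validator_command` with its `%f`/`%r` specifiers are the ones parsed by the patch; the validator path, issuer URL, and map name are placeholders:

```ini
# postgresql.conf
oauth_validator_command = '/usr/local/bin/validate-token %f %r'

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS  METHOD  OPTIONS
host    all       all   samenet  oauth   issuer="https://oauth.example.org" scope="openid" map=oauthmap

# Or let the validator make the authorization decision and bypass pg_ident:
#host   all       all   samenet  oauth   issuer="https://oauth.example.org" scope="openid" trust_validator_authz=1
```

The `map=oauthmap` variant additionally needs a matching pg_ident.conf entry mapping the validator-reported identity to a role.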

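The child-process protocol implemented by run_validator_command() above can be sketched as a small script: the server substitutes %f with the read end of the token pipe and %r with the requested role, the child must drain the pipe before writing anything (to avoid the deadlock noted in the patch), and a line on stdout becomes the authn_id. The token value, identity, and role below are purely illustrative:

```shell
#!/bin/sh
# Hypothetical validator logic for:
#   oauth_validator_command = '/usr/local/bin/validate-token %f %r'
# (Linux-only, like the PoC: the fd is read back via /dev/fd.)

validate_token() {
    token_fd="$1"
    role="$2"     # unused in this sketch; a real validator would check it

    # Drain the entire bearer token off the pipe first, before writing
    # anything back, so the server and child cannot deadlock.
    token=$(cat "/dev/fd/$token_fd")

    # A real validator would introspect the token with the issuer here.
    # This sketch accepts one fixed test token and reports an identity
    # on stdout, which the server reads as the authn_id.
    if [ "$token" = "test-token" ]; then
        echo "alice@example.org"
        return 0
    fi

    # Nonzero status makes check_exit() reject the login.
    return 1
}

# Example: feed a token on stdin (fd 0) for the role "alice".
printf 'test-token' | validate_token 0 alice   # prints alice@example.org
```

A real deployment would replace the fixed-string comparison with a call to the issuer's token introspection endpoint.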
v4-0008-Add-a-very-simple-authn_id-extension.patchtext/x-patch; name=v4-0008-Add-a-very-simple-authn_id-extension.patchDownload
From 9dd8e024fde29239829e822b8f2b82028044cd8b Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 18 May 2021 15:01:29 -0700
Subject: [PATCH v4 08/10] Add a very simple authn_id extension

...for retrieving the authn_id from the server in tests.
---
 contrib/authn_id/Makefile          | 19 +++++++++++++++++++
 contrib/authn_id/authn_id--1.0.sql |  8 ++++++++
 contrib/authn_id/authn_id.c        | 28 ++++++++++++++++++++++++++++
 contrib/authn_id/authn_id.control  |  5 +++++
 4 files changed, 60 insertions(+)
 create mode 100644 contrib/authn_id/Makefile
 create mode 100644 contrib/authn_id/authn_id--1.0.sql
 create mode 100644 contrib/authn_id/authn_id.c
 create mode 100644 contrib/authn_id/authn_id.control

diff --git a/contrib/authn_id/Makefile b/contrib/authn_id/Makefile
new file mode 100644
index 0000000000..46026358e0
--- /dev/null
+++ b/contrib/authn_id/Makefile
@@ -0,0 +1,19 @@
+# contrib/authn_id/Makefile
+
+MODULE_big = authn_id
+OBJS = authn_id.o
+
+EXTENSION = authn_id
+DATA = authn_id--1.0.sql
+PGFILEDESC = "authn_id - information about the authenticated user"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/authn_id
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/contrib/authn_id/authn_id--1.0.sql b/contrib/authn_id/authn_id--1.0.sql
new file mode 100644
index 0000000000..af2a4d3991
--- /dev/null
+++ b/contrib/authn_id/authn_id--1.0.sql
@@ -0,0 +1,8 @@
+/* contrib/authn_id/authn_id--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION authn_id" to load this file. \quit
+
+CREATE FUNCTION authn_id() RETURNS text
+AS 'MODULE_PATHNAME', 'authn_id'
+LANGUAGE C IMMUTABLE;
diff --git a/contrib/authn_id/authn_id.c b/contrib/authn_id/authn_id.c
new file mode 100644
index 0000000000..0fecac36a8
--- /dev/null
+++ b/contrib/authn_id/authn_id.c
@@ -0,0 +1,28 @@
+/*
+ * Extension to expose the current user's authn_id.
+ *
+ * contrib/authn_id/authn_id.c
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/libpq-be.h"
+#include "miscadmin.h"
+#include "utils/builtins.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(authn_id);
+
+/*
+ * Returns the current user's authenticated identity.
+ */
+Datum
+authn_id(PG_FUNCTION_ARGS)
+{
+	if (!MyProcPort->authn_id)
+		PG_RETURN_NULL();
+
+	PG_RETURN_TEXT_P(cstring_to_text(MyProcPort->authn_id));
+}
diff --git a/contrib/authn_id/authn_id.control b/contrib/authn_id/authn_id.control
new file mode 100644
index 0000000000..e0f9e06bed
--- /dev/null
+++ b/contrib/authn_id/authn_id.control
@@ -0,0 +1,5 @@
+# authn_id extension
+comment = 'current user identity'
+default_version = '1.0'
+module_pathname = '$libdir/authn_id'
+relocatable = true
-- 
2.25.1

v4-0001-Add-support-for-custom-authentication-methods.patchtext/x-patch; name=v4-0001-Add-support-for-custom-authentication-methods.patchDownload
From 575431b4e035c266b55a25414f802fbf8ba16b97 Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Tue, 15 Feb 2022 22:23:29 -0800
Subject: [PATCH v4 01/10] Add support for custom authentication methods

Currently, PostgreSQL supports only a set of pre-defined authentication
methods. This patch adds support for two hooks that allow users to add
their custom authentication methods by defining a check function and an
error function. Users can then use these methods by using a new "custom"
keyword in pg_hba.conf and specifying the authentication provider they
want to use.
---
 src/backend/libpq/auth.c | 108 ++++++++++++++++++++++++++++++++-------
 src/backend/libpq/hba.c  |  44 ++++++++++++++++
 src/include/libpq/auth.h |  37 ++++++++++++++
 src/include/libpq/hba.h  |   2 +
 4 files changed, 172 insertions(+), 19 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index efc53f3135..375ee33892 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -47,8 +47,6 @@
  *----------------------------------------------------------------
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
-static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -206,22 +204,11 @@ static int	pg_SSPI_make_upn(char *accountname,
 static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
-
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
+/*----------------------------------------------------------------
+ * Custom Authentication
+ *----------------------------------------------------------------
  */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+static List *custom_auth_providers = NIL;
 
 /*----------------------------------------------------------------
  * Global authentication functions
@@ -311,6 +298,15 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaCustom:
+			{
+				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
+				if (provider->auth_error_hook)
+					errstr = provider->auth_error_hook(port);
+				else
+					errstr = gettext_noop("Custom authentication failed for user \"%s\"");
+				break;
+			}
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -345,7 +341,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of the Port, so it is safe to pass a string that is managed by an
  * external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -630,6 +626,13 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaCustom:
+			{
+				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
+				if (provider->auth_check_hook)
+					status = provider->auth_check_hook(port);
+				break;
+			}
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
@@ -689,7 +692,7 @@ sendAuthRequest(Port *port, AuthRequest areq, const char *extradata, int extrale
  *
  * Returns NULL if couldn't get password, else palloc'd string.
  */
-static char *
+char *
 recv_password_packet(Port *port)
 {
 	StringInfoData buf;
@@ -3343,3 +3346,70 @@ PerformRadiusTransaction(const char *server, const char *secret, const char *por
 		}
 	}							/* while (true) */
 }
+
+/*----------------------------------------------------------------
+ * Custom authentication
+ *----------------------------------------------------------------
+ */
+
+/*
+ * RegisterAuthProvider registers a custom authentication provider to be
+ * used for authentication. It validates the inputs and adds the provider
+ * name and it's hooks to a list of loaded providers. The right provider's
+ * hooks can then be called based on the provider name specified in
+ * pg_hba.conf.
+ *
+ * This function should be called in _PG_init() by any extension looking to
+ * add a custom authentication method.
+ */
+void
+RegisterAuthProvider(const char *provider_name,
+		CustomAuthenticationCheck_hook_type AuthenticationCheckFunction,
+		CustomAuthenticationError_hook_type AuthenticationErrorFunction)
+{
+	CustomAuthProvider *provider = NULL;
+	MemoryContext old_context;
+
+	if (provider_name == NULL)
+	{
+		ereport(ERROR,
+				(errmsg("cannot register authentication provider without name")));
+	}
+
+	if (AuthenticationCheckFunction == NULL)
+	{
+		ereport(ERROR,
+				(errmsg("cannot register authentication provider without a check function")));
+	}
+
+	/*
+	 * Allocate in top memory context as we need to read this whenever
+	 * we parse pg_hba.conf
+	 */
+	old_context = MemoryContextSwitchTo(TopMemoryContext);
+	provider = palloc(sizeof(CustomAuthProvider));
+	provider->name = MemoryContextStrdup(TopMemoryContext, provider_name);
+	provider->auth_check_hook = AuthenticationCheckFunction;
+	provider->auth_error_hook = AuthenticationErrorFunction;
+	custom_auth_providers = lappend(custom_auth_providers, provider);
+	MemoryContextSwitchTo(old_context);
+}
+
+/*
+ * Returns the authentication provider (which includes its
+ * callback functions) based on name specified.
+ */
+CustomAuthProvider *
+get_provider_by_name(const char *name)
+{
+	ListCell *lc;
+
+	foreach(lc, custom_auth_providers)
+	{
+		CustomAuthProvider *provider = (CustomAuthProvider *) lfirst(lc);
+		if (strcmp(provider->name,name) == 0)
+		{
+			return provider;
+		}
+	}
+
+	return NULL;
+}
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 90953c38f3..9f15252789 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -31,6 +31,7 @@
 #include "common/ip.h"
 #include "common/string.h"
 #include "funcapi.h"
+#include "libpq/auth.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq.h"
 #include "miscadmin.h"
@@ -134,6 +135,7 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
+	"custom",
 	"peer"
 };
 
@@ -1399,6 +1401,8 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "custom") == 0)
+		parsedline->auth_method = uaCustom;
 	else
 	{
 		ereport(elevel,
@@ -1691,6 +1695,14 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Ensure that the provider name is specified for custom authentication method.
+	 */
+	if (parsedline->auth_method == uaCustom)
+	{
+		MANDATORY_AUTH_ARG(parsedline->custom_provider, "provider", "custom");
+	}
+
 	return parsedline;
 }
 
@@ -2102,6 +2114,31 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "provider") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaCustom, "provider", "custom");
+
+		/*
+		 * Verify that the provider mentioned is loaded via shared_preload_libraries.
+		 */
+
+		if (get_provider_by_name(val) == NULL)
+		{
+			ereport(elevel,
+					(errcode(ERRCODE_CONFIG_FILE_ERROR),
+					 errmsg("cannot use authentication provider %s", val),
+					 errhint("Load authentication provider via shared_preload_libraries."),
+					 errcontext("line %d of configuration file \"%s\"",
+							line_num, HbaFileName)));
+			*err_msg = psprintf("cannot use authentication provider %s", val);
+
+			return false;
+		}
+		else
+		{
+			hbaline->custom_provider = pstrdup(val);
+		}
+	}
 	else
 	{
 		ereport(elevel,
@@ -2442,6 +2479,13 @@ gethba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaCustom)
+	{
+		if (hba->custom_provider)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("provider=%s", hba->custom_provider));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 6d7ee1acb9..7aff98d919 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -23,9 +23,46 @@ extern char *pg_krb_realm;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
+extern char *recv_password_packet(Port *port);
 
 /* Hook for plugins to get control in ClientAuthentication() */
+typedef int (*CustomAuthenticationCheck_hook_type) (Port *);
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
 extern PGDLLIMPORT ClientAuthentication_hook_type ClientAuthentication_hook;
 
+/* Hook for plugins to report error messages in auth_failed() */
+typedef const char * (*CustomAuthenticationError_hook_type) (Port *);
+
+extern void RegisterAuthProvider
+		(const char *provider_name,
+		 CustomAuthenticationCheck_hook_type CustomAuthenticationCheck_hook,
+		 CustomAuthenticationError_hook_type CustomAuthenticationError_hook);
+
+/* Declarations for custom authentication providers */
+typedef struct CustomAuthProvider
+{
+	const char *name;
+	CustomAuthenticationCheck_hook_type auth_check_hook;
+	CustomAuthenticationError_hook_type auth_error_hook;
+} CustomAuthProvider;
+
+extern CustomAuthProvider *get_provider_by_name(const char *name);
+
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 #endif							/* AUTH_H */
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8d9f3821b1..48490c44ed 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -38,6 +38,7 @@ typedef enum UserAuth
 	uaLDAP,
 	uaCert,
 	uaRADIUS,
+	uaCustom,
 	uaPeer
 #define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
 } UserAuth;
@@ -120,6 +121,7 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *custom_provider;
 } HbaLine;
 
 typedef struct IdentLine
-- 
2.25.1

v4-0002-Add-sample-extension-to-test-custom-auth-provider.patchtext/x-patch; name=v4-0002-Add-sample-extension-to-test-custom-auth-provider.patchDownload
From cb4131d8424861826e443708bc0f4e6baa76c871 Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Tue, 15 Feb 2022 22:28:40 -0800
Subject: [PATCH v4 02/10] Add sample extension to test custom auth provider
 hooks

This change adds a new extension to src/test/modules to
test the custom authentication provider hooks. In this
extension, we use an array to define which users to
authenticate and what passwords to use. We then get
encrypted passwords from the client and match them with
the encrypted version of the password in the array.
---
 src/include/libpq/scram.h                     |  2 +-
 src/test/modules/test_auth_provider/Makefile  | 16 ++++
 .../test_auth_provider/test_auth_provider.c   | 86 +++++++++++++++++++
 3 files changed, 103 insertions(+), 1 deletion(-)
 create mode 100644 src/test/modules/test_auth_provider/Makefile
 create mode 100644 src/test/modules/test_auth_provider/test_auth_provider.c

diff --git a/src/include/libpq/scram.h b/src/include/libpq/scram.h
index e60992a0d2..c51e848c24 100644
--- a/src/include/libpq/scram.h
+++ b/src/include/libpq/scram.h
@@ -18,7 +18,7 @@
 #include "libpq/sasl.h"
 
 /* SASL implementation callbacks */
-extern const pg_be_sasl_mech pg_be_scram_mech;
+extern PGDLLIMPORT const pg_be_sasl_mech pg_be_scram_mech;
 
 /* Routines to handle and check SCRAM-SHA-256 secret */
 extern char *pg_be_scram_build_secret(const char *password);
diff --git a/src/test/modules/test_auth_provider/Makefile b/src/test/modules/test_auth_provider/Makefile
new file mode 100644
index 0000000000..17971a5c7a
--- /dev/null
+++ b/src/test/modules/test_auth_provider/Makefile
@@ -0,0 +1,16 @@
+# src/test/modules/test_auth_provider/Makefile
+
+MODULE_big = test_auth_provider
+OBJS = test_auth_provider.o
+PGFILEDESC = "test_auth_provider - provider to test auth hooks"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_auth_provider
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_auth_provider/test_auth_provider.c b/src/test/modules/test_auth_provider/test_auth_provider.c
new file mode 100644
index 0000000000..7c4b1f3500
--- /dev/null
+++ b/src/test/modules/test_auth_provider/test_auth_provider.c
@@ -0,0 +1,86 @@
+/* -------------------------------------------------------------------------
+ *
+ * test_auth_provider.c
+ *			example authentication provider plugin
+ *
+ * Copyright (c) 2022, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *		contrib/test_auth_provider/test_auth_provider.c
+ *
+ * -------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "fmgr.h"
+#include "libpq/auth.h"
+#include "libpq/libpq.h"
+#include "libpq/scram.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
+
+static char *get_encrypted_password_for_user(char *user_name);
+
+/*
+ * List of usernames and passwords to approve. Passwords come from this
+ * hard-coded list rather than from Postgres. A real extension would
+ * fetch valid credentials or authentication tokens from an external
+ * authentication provider.
+ */
+char credentials[2][3][50] = {
+	{"bob","alice","carol"},
+	{"bob123","alice123","carol123"}
+};
+
+static int TestAuthenticationCheck(Port *port)
+{
+	int result = STATUS_ERROR;
+	char *real_pass;
+	const char *logdetail = NULL;
+
+	real_pass = get_encrypted_password_for_user(port->user_name);
+	if (real_pass)
+	{
+		result = CheckSASLAuth(&pg_be_scram_mech, port, real_pass, &logdetail);
+		pfree(real_pass);
+	}
+
+	if (result == STATUS_OK)
+		set_authn_id(port, port->user_name);
+
+	return result;
+}
+
+/*
+ * Get SCRAM encrypted version of the password for user.
+ */
+static char *
+get_encrypted_password_for_user(char *user_name)
+{
+	char *password = NULL;
+	int i;
+	for (i = 0; i < 3; i++)
+	{
+		if (strcmp(user_name, credentials[0][i]) == 0)
+		{
+			password = pg_be_scram_build_secret(credentials[1][i]);
+		}
+	}
+
+	return password;
+}
+
+static const char *TestAuthenticationError(Port *port)
+{
+	char *error_message = psprintf("Test authentication failed for user %s",
+								   port->user_name);
+	return error_message;
+}
+
+void
+_PG_init(void)
+{
+	RegisterAuthProvider("test", TestAuthenticationCheck, TestAuthenticationError);
+}
-- 
2.25.1

v4-0003-Add-tests-for-test_auth_provider-extension.patch
From 24702486bfaca691d6ca9388544fb23e6d765055 Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Wed, 16 Feb 2022 12:28:36 -0800
Subject: [PATCH v4 03/10] Add tests for test_auth_provider extension

Add TAP tests for the test_auth_provider extension, and allow make
check in src/test/modules to run them.
---
 src/test/modules/Makefile                     |   1 +
 src/test/modules/test_auth_provider/Makefile  |   2 +
 .../test_auth_provider/t/001_custom_auth.pl   | 125 ++++++++++++++++++
 3 files changed, 128 insertions(+)
 create mode 100644 src/test/modules/test_auth_provider/t/001_custom_auth.pl

diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 9090226daa..d0d461ef9e 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -14,6 +14,7 @@ SUBDIRS = \
 		  plsample \
 		  snapshot_too_old \
 		  spgist_name_ops \
+		  test_auth_provider \
 		  test_bloomfilter \
 		  test_ddl_deparse \
 		  test_extensions \
diff --git a/src/test/modules/test_auth_provider/Makefile b/src/test/modules/test_auth_provider/Makefile
index 17971a5c7a..7d601cf7d5 100644
--- a/src/test/modules/test_auth_provider/Makefile
+++ b/src/test/modules/test_auth_provider/Makefile
@@ -4,6 +4,8 @@ MODULE_big = test_auth_provider
 OBJS = test_auth_provider.o
 PGFILEDESC = "test_auth_provider - provider to test auth hooks"
 
+TAP_TESTS = 1
+
 ifdef USE_PGXS
 PG_CONFIG = pg_config
 PGXS := $(shell $(PG_CONFIG) --pgxs)
diff --git a/src/test/modules/test_auth_provider/t/001_custom_auth.pl b/src/test/modules/test_auth_provider/t/001_custom_auth.pl
new file mode 100644
index 0000000000..3b7472dc7f
--- /dev/null
+++ b/src/test/modules/test_auth_provider/t/001_custom_auth.pl
@@ -0,0 +1,125 @@
+
+# Copyright (c) 2021-2022, PostgreSQL Global Development Group
+
+# Set of tests for testing custom authentication hooks.
+
+use strict;
+use warnings;
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+# Delete pg_hba.conf from the given node, add a new entry to it
+# and then execute a reload to refresh it.
+sub reset_pg_hba
+{
+	my $node       = shift;
+	my $hba_method = shift;
+
+	unlink($node->data_dir . '/pg_hba.conf');
+	# just for testing purposes, use a continuation line
+	$node->append_conf('pg_hba.conf', "local all all\\\n $hba_method");
+	$node->reload;
+	return;
+}
+
+# Test if you get expected results in pg_hba_file_rules error column after
+# changing pg_hba.conf and reloading it.
+sub test_hba_reload
+{
+	my ($node, $method, $expected_res) = @_;
+	my $status_string = 'failed';
+	$status_string = 'success' if ($expected_res eq 0);
+	my $testname = "pg_hba.conf reload $status_string for method $method";
+
+	reset_pg_hba($node, $method);
+
+	my ($cmdret, $stdout, $stderr) = $node->psql("postgres",
+		"select count(*) from pg_hba_file_rules where error is not null",extra_params => ['-U','bob']);
+
+	is($stdout, $expected_res, $testname);
+}
+
+# Test access for a single role, useful to wrap all tests into one.  Extra
+# named parameters are passed to connect_ok/fails as-is.
+sub test_role
+{
+	local $Test::Builder::Level = $Test::Builder::Level + 1;
+
+	my ($node, $role, $method, $expected_res, %params) = @_;
+	my $status_string = 'failed';
+	$status_string = 'success' if ($expected_res eq 0);
+
+	my $connstr = "user=$role";
+	my $testname =
+	  "authentication $status_string for method $method, role $role";
+
+	if ($expected_res eq 0)
+	{
+		$node->connect_ok($connstr, $testname, %params);
+	}
+	else
+	{
+		# No checks of the error message, only the status code.
+		$node->connect_fails($connstr, $testname, %params);
+	}
+}
+
+# Initialize server node
+my $node = PostgreSQL::Test::Cluster->new('server');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'test_auth_provider.so'\n");
+$node->start;
+
+$node->safe_psql('postgres', "CREATE ROLE bob SUPERUSER LOGIN;");
+$node->safe_psql('postgres', "CREATE ROLE alice LOGIN;");
+$node->safe_psql('postgres', "CREATE ROLE test LOGIN;");
+
+# Add custom auth method to pg_hba.conf
+reset_pg_hba($node, 'custom provider=test');
+
+# Test that users are able to login with correct passwords.
+$ENV{"PGPASSWORD"} = 'bob123';
+test_role($node, 'bob', 'custom', 0, log_like => [qr/connection authorized: user=bob/]);
+$ENV{"PGPASSWORD"} = 'alice123';
+test_role($node, 'alice', 'custom', 0, log_like => [qr/connection authorized: user=alice/]);
+
+# Test that bad passwords are rejected.
+$ENV{"PGPASSWORD"} = 'badpassword';
+test_role($node, 'bob', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+test_role($node, 'alice', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+
+# Test that users not in authentication list are rejected.
+test_role($node, 'test', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+
+$ENV{"PGPASSWORD"} = 'bob123';
+
+# Tests for invalid auth options
+
+# Test that an incorrect provider name is not accepted.
+test_hba_reload($node, 'custom provider=wrong', 1);
+
+# Test that specifying provider option with different auth method is not allowed.
+test_hba_reload($node, 'trust provider=test', 1);
+
+# Test that provider name is a mandatory option for custom auth.
+test_hba_reload($node, 'custom', 1);
+
+# Test that correct provider name allows reload to succeed.
+test_hba_reload($node, 'custom provider=test', 0);
+
+# Custom auth modules must be listed in shared_preload_libraries.
+
+# Remove extension from shared_preload_libraries and try to restart.
+$node->adjust_conf('postgresql.conf', 'shared_preload_libraries', "''");
+command_fails(['pg_ctl', '-w', '-D', $node->data_dir, '-l', $node->logfile, 'restart'],'restart with empty shared_preload_libraries failed');
+
+# Fix shared_preload_libraries and confirm that you can now restart.
+$node->adjust_conf('postgresql.conf', 'shared_preload_libraries', "'test_auth_provider.so'");
+command_ok(['pg_ctl', '-w', '-D', $node->data_dir, '-l', $node->logfile,'start'],'restart with correct shared_preload_libraries succeeded');
+
+# Test that we can connect again
+test_role($node, 'bob', 'custom', 0, log_like => [qr/connection authorized: user=bob/]);
+
+done_testing();
-- 
2.25.1

v4-0004-Add-support-for-map-and-custom-auth-options.patch
From c30970a354b23f26eaf3e1db7c7d7759f2f828b3 Mon Sep 17 00:00:00 2001
From: Samay Sharma <smilingsamay@gmail.com>
Date: Mon, 14 Mar 2022 14:54:08 -0700
Subject: [PATCH v4 04/10] Add support for "map" and custom auth options

This commit allows extensions to specify, validate, and use
custom options for their custom auth methods. This is done by
exposing a validation hook that extensions can define. Valid
options are stored as key/value pairs and can be consulted
while checking authentication. Custom auth providers can also
use the "map" option to apply usermaps.

The test module was updated to use custom options, and new
tests were added.
---
 src/backend/libpq/auth.c                      |  4 +-
 src/backend/libpq/hba.c                       | 76 +++++++++++++++----
 src/include/libpq/auth.h                      | 17 +++--
 src/include/libpq/hba.h                       |  8 ++
 .../test_auth_provider/t/001_custom_auth.pl   | 22 ++++++
 .../test_auth_provider/test_auth_provider.c   | 50 +++++++++++-
 6 files changed, 157 insertions(+), 20 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 375ee33892..4a8a63922a 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -3364,7 +3364,8 @@ PerformRadiusTransaction(const char *server, const char *secret, const char *por
  */
 void RegisterAuthProvider(const char *provider_name,
 		CustomAuthenticationCheck_hook_type AuthenticationCheckFunction,
-		CustomAuthenticationError_hook_type AuthenticationErrorFunction)
+		CustomAuthenticationError_hook_type AuthenticationErrorFunction,
+		CustomAuthenticationValidateOptions_hook_type AuthenticationOptionsFunction)
 {
 	CustomAuthProvider *provider = NULL;
 	MemoryContext old_context;
@@ -3390,6 +3391,7 @@ void RegisterAuthProvider(const char *provider_name,
 	provider->name = MemoryContextStrdup(TopMemoryContext,provider_name);
 	provider->auth_check_hook = AuthenticationCheckFunction;
 	provider->auth_error_hook = AuthenticationErrorFunction;
+	provider->auth_options_hook = AuthenticationOptionsFunction;
 	custom_auth_providers = lappend(custom_auth_providers, provider);
 	MemoryContextSwitchTo(old_context);
 }
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 9f15252789..42cb1ce51d 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -1729,8 +1729,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaCustom)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and custom"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2121,7 +2122,6 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		/*
 		 * Verify that the provider mentioned is loaded via shared_preload_libraries.
 		 */
-
 		if (get_provider_by_name(val) == NULL)
 		{
 			ereport(elevel,
@@ -2129,7 +2129,7 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 					 errmsg("cannot use authentication provider %s",val),
 					 errhint("Load authentication provider via shared_preload_libraries."),
 					 errcontext("line %d of configuration file \"%s\"",
-							line_num, HbaFileName)));
+								line_num, HbaFileName)));
 			*err_msg = psprintf("cannot use authentication provider %s", val);
 
 			return false;
@@ -2141,15 +2141,55 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 	}
 	else
 	{
-		ereport(elevel,
-				(errcode(ERRCODE_CONFIG_FILE_ERROR),
-				 errmsg("unrecognized authentication option name: \"%s\"",
-						name),
-				 errcontext("line %d of configuration file \"%s\"",
-							line_num, HbaFileName)));
-		*err_msg = psprintf("unrecognized authentication option name: \"%s\"",
-							name);
-		return false;
+		/*
+		 * Allow custom providers to validate their options if they have an
+		 * option validation function defined.
+		 */
+		if (hbaline->auth_method == uaCustom && (hbaline->custom_provider != NULL))
+		{
+			bool valid_option = false;
+			CustomAuthProvider *provider = get_provider_by_name(hbaline->custom_provider);
+			if (provider->auth_options_hook)
+			{
+				valid_option = provider->auth_options_hook(name, val, hbaline, err_msg);
+				if (valid_option)
+				{
+					CustomOption *option = palloc(sizeof(CustomOption));
+					option->name = pstrdup(name);
+					option->value = pstrdup(val);
+					hbaline->custom_auth_options = lappend(hbaline->custom_auth_options,
+														   option);
+				}
+			}
+			else
+			{
+				*err_msg = psprintf("unrecognized authentication option name: \"%s\"",
+									name);
+			}
+
+			/* Report the error returned by the provider as it is */
+			if (!valid_option)
+			{
+				ereport(elevel,
+						(errcode(ERRCODE_CONFIG_FILE_ERROR),
+						 errmsg("%s", *err_msg),
+						 errcontext("line %d of configuration file \"%s\"",
+									line_num, HbaFileName)));
+				return false;
+			}
+		}
+		else
+		{
+			ereport(elevel,
+					(errcode(ERRCODE_CONFIG_FILE_ERROR),
+					 errmsg("unrecognized authentication option name: \"%s\"",
+							name),
+					 errcontext("line %d of configuration file \"%s\"",
+								line_num, HbaFileName)));
+			*err_msg = psprintf("unrecognized authentication option name: \"%s\"",
+								name);
+			return false;
+		}
 	}
 	return true;
 }
@@ -2484,6 +2524,16 @@ gethba_options(HbaLine *hba)
 		if (hba->custom_provider)
 			options[noptions++] =
 				CStringGetTextDatum(psprintf("provider=%s",hba->custom_provider));
+		if (hba->custom_auth_options)
+		{
+			ListCell *lc;
+			foreach(lc, hba->custom_auth_options)
+			{
+				CustomOption *option = (CustomOption *)lfirst(lc);
+				options[noptions++] =
+					CStringGetTextDatum(psprintf("%s=%s",option->name, option->value));
+			}
+		}
 	}
 
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 7aff98d919..cbdc63b4df 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -31,22 +31,29 @@ typedef int (*CustomAuthenticationCheck_hook_type) (Port *);
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
 extern PGDLLIMPORT ClientAuthentication_hook_type ClientAuthentication_hook;
 
+/* Declarations for custom authentication providers */
+
 /* Hook for plugins to report error messages in auth_failed() */
 typedef const char * (*CustomAuthenticationError_hook_type) (Port *);
 
-extern void RegisterAuthProvider
-		(const char *provider_name,
-		 CustomAuthenticationCheck_hook_type CustomAuthenticationCheck_hook,
-		 CustomAuthenticationError_hook_type CustomAuthenticationError_hook);
+/* Hook for plugins to validate custom authentication options */
+typedef bool (*CustomAuthenticationValidateOptions_hook_type)
+			 (char *, char *, HbaLine *, char **);
 
-/* Declarations for custom authentication providers */
 typedef struct CustomAuthProvider
 {
 	const char *name;
 	CustomAuthenticationCheck_hook_type auth_check_hook;
 	CustomAuthenticationError_hook_type auth_error_hook;
+	CustomAuthenticationValidateOptions_hook_type auth_options_hook;
 } CustomAuthProvider;
 
+extern void RegisterAuthProvider
+		(const char *provider_name,
+		 CustomAuthenticationCheck_hook_type CustomAuthenticationCheck_hook,
+		 CustomAuthenticationError_hook_type CustomAuthenticationError_hook,
+		 CustomAuthenticationValidateOptions_hook_type CustomAuthenticationOptions_hook);
+
 extern CustomAuthProvider *get_provider_by_name(const char *name);
 
 /*
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 48490c44ed..31a00c4b71 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -78,6 +78,13 @@ typedef enum ClientCertName
 	clientCertDN
 } ClientCertName;
 
+/* Struct for custom options defined by custom auth plugins */
+typedef struct CustomOption
+{
+	char	   *name;
+	char	   *value;
+} CustomOption;
+
 typedef struct HbaLine
 {
 	int			linenumber;
@@ -122,6 +129,7 @@ typedef struct HbaLine
 	List	   *radiusports;
 	char	   *radiusports_s;
 	char	   *custom_provider;
+	List	   *custom_auth_options;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/test/modules/test_auth_provider/t/001_custom_auth.pl b/src/test/modules/test_auth_provider/t/001_custom_auth.pl
index 3b7472dc7f..e964c2f723 100644
--- a/src/test/modules/test_auth_provider/t/001_custom_auth.pl
+++ b/src/test/modules/test_auth_provider/t/001_custom_auth.pl
@@ -109,6 +109,28 @@ test_hba_reload($node, 'custom', 1);
 # Test that correct provider name allows reload to succeed.
 test_hba_reload($node, 'custom provider=test', 0);
 
+# Tests for custom auth options
+
+# Test that a custom option doesn't work without a provider.
+test_hba_reload($node, 'custom allow=bob', 1);
+
+# Test that options other than allowed ones are not accepted.
+test_hba_reload($node, 'custom provider=test wrong=true', 1);
+
+# Test that only valid values are accepted for allowed options.
+test_hba_reload($node, 'custom provider=test allow=wrong', 1);
+
+# Test that setting allow option for a user doesn't look at the password.
+test_hba_reload($node, 'custom provider=test allow=bob', 0);
+$ENV{"PGPASSWORD"} = 'bad123';
+test_role($node, 'bob', 'custom', 0, log_like => [qr/connection authorized: user=bob/]);
+
+# Password is still checked for other users.
+test_role($node, 'alice', 'custom', 2, log_unlike => [qr/connection authorized:/]);
+
+# Reset the password for future tests.
+$ENV{"PGPASSWORD"} = 'bob123';
+
 # Custom auth modules must be listed in shared_preload_libraries.
 
 # Remove extension from shared_preload_libraries and try to restart.
diff --git a/src/test/modules/test_auth_provider/test_auth_provider.c b/src/test/modules/test_auth_provider/test_auth_provider.c
index 7c4b1f3500..5ac425f5b6 100644
--- a/src/test/modules/test_auth_provider/test_auth_provider.c
+++ b/src/test/modules/test_auth_provider/test_auth_provider.c
@@ -39,7 +39,27 @@ static int TestAuthenticationCheck(Port *port)
 	int result = STATUS_ERROR;
 	char *real_pass;
 	const char *logdetail = NULL;
+	ListCell *lc;
 
+	/*
+	 * If the user's name is in the "allow" list, skip the password
+	 * exchange and let them authenticate.
+	 */
+	foreach(lc, port->hba->custom_auth_options)
+	{
+		CustomOption *option = (CustomOption *) lfirst(lc);
+		if (strcmp(option->name, "allow") == 0 &&
+			strcmp(option->value, port->user_name) == 0)
+		{
+			set_authn_id(port, port->user_name);
+			return STATUS_OK;
+		}
+	}
+
+	/*
+	 * Encrypt the password and validate that it's the same as the one
+	 * returned by the client.
+	 */
 	real_pass = get_encrypted_password_for_user(port->user_name);
 	if (real_pass)
 	{
@@ -79,8 +99,36 @@ static const char *TestAuthenticationError(Port *port)
 	return error_message;
 }
 
+/*
+ * Check whether the given option is supported by the extension and has
+ * a valid value. Currently only "allow" is supported.
+ */
+static bool TestAuthenticationOptions(char *name, char *val, HbaLine *hbaline, char **err_msg)
+{
+	/* Validate that an actual user is in the "allow" list. */
+	if (strcmp(name, "allow") == 0)
+	{
+		for (int i = 0; i < 3; i++)
+		{
+			if (strcmp(val, credentials[0][i]) == 0)
+			{
+				return true;
+			}
+		}
+
+		*err_msg = psprintf("\"%s\" is not a valid value for option \"%s\"", val, name);
+		return false;
+	}
+	else
+	{
+		*err_msg = psprintf("option \"%s\" not recognized by \"%s\" provider", name, hbaline->custom_provider);
+		return false;
+	}
+}
+
 void
 _PG_init(void)
 {
-	RegisterAuthProvider("test", TestAuthenticationCheck, TestAuthenticationError);
+	RegisterAuthProvider("test", TestAuthenticationCheck,
+						 TestAuthenticationError, TestAuthenticationOptions);
 }
-- 
2.25.1

v4-0005-common-jsonapi-support-FRONTEND-clients.patch
From 0ca324b52c94760e799974e5661fe29c3912d2d8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v4 05/10] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

For convenience, the backend now has destroyJsonLexContext() to mirror
other create/destroy APIs. The frontend has init/term versions of the
API to handle stack-allocated JsonLexContexts.

We can now partially revert b44669b2ca, now that json_errdetail() works
correctly.
---
 src/backend/utils/adt/jsonfuncs.c             |   4 +-
 src/bin/pg_verifybackup/parse_manifest.c      |  13 +-
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 290 +++++++++++++-----
 src/include/common/jsonapi.h                  |  47 ++-
 6 files changed, 270 insertions(+), 88 deletions(-)

diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 29664aa6e4..7d32a99d8c 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -723,9 +723,7 @@ json_object_keys(PG_FUNCTION_ARGS)
 		pg_parse_json_or_ereport(lex, sem);
 		/* keys are now in state->result */
 
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-		pfree(lex);
+		destroyJsonLexContext(lex);
 		pfree(sem);
 
 		MemoryContextSwitchTo(oldcontext);
diff --git a/src/bin/pg_verifybackup/parse_manifest.c b/src/bin/pg_verifybackup/parse_manifest.c
index 6364b01282..4b38fd3963 100644
--- a/src/bin/pg_verifybackup/parse_manifest.c
+++ b/src/bin/pg_verifybackup/parse_manifest.c
@@ -119,7 +119,7 @@ void
 json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 					size_t size)
 {
-	JsonLexContext *lex;
+	JsonLexContext lex = {0};
 	JsonParseErrorType json_error;
 	JsonSemAction sem;
 	JsonManifestParseState parse;
@@ -129,8 +129,8 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	parse.state = JM_EXPECT_TOPLEVEL_START;
 	parse.saw_version_field = false;
 
-	/* Create a JSON lexing context. */
-	lex = makeJsonLexContextCstringLen(buffer, size, PG_UTF8, true);
+	/* Initialize a JSON lexing context. */
+	initJsonLexContextCstringLen(&lex, buffer, size, PG_UTF8, true);
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
@@ -145,14 +145,17 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	sem.scalar = json_manifest_scalar;
 
 	/* Run the actual JSON parser. */
-	json_error = pg_parse_json(lex, &sem);
+	json_error = pg_parse_json(&lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, &lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
 	/* Verify the manifest checksum. */
 	verify_manifest_checksum(&parse, buffer, size);
+
+	/* Clean up. */
+	termJsonLexContext(&lex);
 }
 
 /*
diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index 118beb53d7..f2692972fe 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -16,7 +16,7 @@ my $tempdir = PostgreSQL::Test::Utils::tempdir;
 
 test_bad_manifest(
 	'input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/,
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
 	<<EOM);
 {
 EOM
diff --git a/src/common/Makefile b/src/common/Makefile
index f627349835..694da03658 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 # If you add objects here, see also src/tools/msvc/Mkvcbuild.pm
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 6666077a93..7fc5eaf460 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -20,10 +20,39 @@
 #include "common/jsonapi.h"
 #include "mb/pg_wchar.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+
+#else /* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -132,10 +161,12 @@ IsValidJsonNumber(const char *str, int len)
 	return (!numeric_error) && (total_len == dummy_lex.input_length);
 }
 
+#ifndef FRONTEND
+
 /*
  * makeJsonLexContextCstringLen
  *
- * lex constructor, with or without StringInfo object for de-escaped lexemes.
+ * lex constructor, with or without a string object for de-escaped lexemes.
  *
  * Without is better as it makes the processing faster, so only make one
  * if really required.
@@ -145,13 +176,66 @@ makeJsonLexContextCstringLen(char *json, int len, int encoding, bool need_escape
 {
 	JsonLexContext *lex = palloc0(sizeof(JsonLexContext));
 
+	initJsonLexContextCstringLen(lex, json, len, encoding, need_escapes);
+
+	return lex;
+}
+
+void
+destroyJsonLexContext(JsonLexContext *lex)
+{
+	termJsonLexContext(lex);
+	pfree(lex);
+}
+
+#endif /* !FRONTEND */
+
+void
+initJsonLexContextCstringLen(JsonLexContext *lex, char *json, int len, int encoding, bool need_escapes)
+{
 	lex->input = lex->token_terminator = lex->line_start = json;
 	lex->line_number = 1;
 	lex->input_length = len;
 	lex->input_encoding = encoding;
-	if (need_escapes)
-		lex->strval = makeStringInfo();
-	return lex;
+	lex->parse_strval = need_escapes;
+	if (lex->parse_strval)
+	{
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to time
+		 * of use (json_lex_string()) since there's no way to signal failure
+		 * here, and we might not need to parse any strings anyway.
+		 */
+		lex->strval = createStrVal();
+	}
+	lex->errormsg = NULL;
+}
+
+void
+termJsonLexContext(JsonLexContext *lex)
+{
+	static const JsonLexContext empty = {0};
+
+	if (lex->strval)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->strval);
+#else
+		pfree(lex->strval->data);
+		pfree(lex->strval);
+#endif
+	}
+
+	if (lex->errormsg)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->errormsg);
+#else
+		pfree(lex->errormsg->data);
+		pfree(lex->errormsg);
+#endif
+	}
+
+	*lex = empty;
 }
 
 /*
@@ -217,7 +301,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;		/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -279,14 +363,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -320,8 +411,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -368,6 +463,10 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -676,8 +775,15 @@ json_lex_string(JsonLexContext *lex)
 	int			len;
 	int			hi_surrogate = -1;
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -737,7 +843,7 @@ json_lex_string(JsonLexContext *lex)
 						return JSON_UNICODE_ESCAPE_FORMAT;
 					}
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -797,19 +903,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						return JSON_UNICODE_HIGH_ESCAPE;
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					return JSON_UNICODE_LOW_SURROGATE;
@@ -819,22 +925,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 						/* Not a valid string escape, so signal error. */
@@ -858,12 +964,12 @@ json_lex_string(JsonLexContext *lex)
 			}
 
 		}
-		else if (lex->strval != NULL)
+		else if (lex->parse_strval)
 		{
 			if (hi_surrogate != -1)
 				return JSON_UNICODE_LOW_SURROGATE;
 
-			appendStringInfoChar(lex->strval, *s);
+			appendStrValChar(lex->strval, *s);
 		}
 
 	}
@@ -871,6 +977,11 @@ json_lex_string(JsonLexContext *lex)
 	if (hi_surrogate != -1)
 		return JSON_UNICODE_LOW_SURROGATE;
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1043,72 +1154,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct a detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safery pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int		toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1122,12 +1254,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			return _("Unicode low surrogate must follow a high surrogate.");
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover the
+		 * possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 52cb4a9339..d7cafc84fe 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -55,6 +54,17 @@ typedef enum
 	JSON_UNICODE_LOW_SURROGATE
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -81,7 +91,9 @@ typedef struct JsonLexContext
 	int			lex_level;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef void (*json_struct_action) (void *state);
@@ -141,9 +153,10 @@ extern JsonSemAction nullSemAction;
  */
 extern JsonParseErrorType json_count_array_elements(JsonLexContext *lex,
 													int *elements);
+#ifndef FRONTEND
 
 /*
- * constructor for JsonLexContext, with or without strval element.
+ * allocating constructor for JsonLexContext, with or without strval element.
  * If supplied, the strval element will contain a de-escaped version of
  * the lexeme. However, doing this imposes a performance penalty, so
  * it should be avoided if the de-escaped lexeme is not required.
@@ -153,6 +166,32 @@ extern JsonLexContext *makeJsonLexContextCstringLen(char *json,
 													int encoding,
 													bool need_escapes);
 
+/*
+ * Counterpart to makeJsonLexContextCstringLen(): clears and deallocates lex.
+ * The context pointer should not be used after this call.
+ */
+extern void destroyJsonLexContext(JsonLexContext *lex);
+
+#endif /* !FRONTEND */
+
+/*
+ * stack constructor for JsonLexContext, with or without strval element.
+ * If supplied, the strval element will contain a de-escaped version of
+ * the lexeme. However, doing this imposes a performance penalty, so
+ * it should be avoided if the de-escaped lexeme is not required.
+ */
+extern void initJsonLexContextCstringLen(JsonLexContext *lex,
+										 char *json,
+										 int len,
+										 int encoding,
+										 bool need_escapes);
+
+/*
+ * Counterpart to initJsonLexContextCstringLen(): clears the contents of lex,
+ * but does not deallocate lex itself.
+ */
+extern void termJsonLexContext(JsonLexContext *lex);
+
 /* lex one token */
 extern JsonParseErrorType json_lex(JsonLexContext *lex);
 
-- 
2.25.1

Attachment: v4-0006-libpq-add-OAUTHBEARER-SASL-mechanism.patch (text/x-patch)
From f4752b6daea9b519ff3482094a1f445a4731cc15 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v4 06/10] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented.

The client implementation requires libiddawc and its development
headers. Configure --with-oauth (and --with-includes/--with-libraries to
point at the iddawc installation, if it's in a custom location).

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- ...and more.
---
 configure                            | 100 ++++
 configure.ac                         |  19 +
 src/Makefile.global.in               |   1 +
 src/include/common/oauth-common.h    |  19 +
 src/include/pg_config.h.in           |   6 +
 src/interfaces/libpq/Makefile        |   7 +-
 src/interfaces/libpq/fe-auth-oauth.c | 744 +++++++++++++++++++++++++++
 src/interfaces/libpq/fe-auth-sasl.h  |   5 +-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       |  42 +-
 src/interfaces/libpq/fe-auth.h       |   3 +
 src/interfaces/libpq/fe-connect.c    |  38 ++
 src/interfaces/libpq/libpq-int.h     |   8 +
 13 files changed, 979 insertions(+), 19 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c

diff --git a/configure b/configure
index e066cbe2c8..42a3304681 100755
--- a/configure
+++ b/configure
@@ -718,6 +718,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -861,6 +862,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1570,6 +1572,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth            build with OAuth 2.0 support
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8377,6 +8380,42 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-oauth option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_oauth=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13503,6 +13542,56 @@ fi
 
 
 
+if test "$with_oauth" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-liddawc  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char i_init_session ();
+int
+main ()
+{
+return i_init_session ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_iddawc_i_init_session=yes
+else
+  ac_cv_lib_iddawc_i_init_session=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBIDDAWC 1
+_ACEOF
+
+  LIBS="-liddawc $LIBS"
+
+else
+  as_fn_error $? "library 'iddawc' is required for OAuth support" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14516,6 +14605,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" != no; then
+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
+if test "x$ac_cv_header_iddawc_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 078381e568..4050f91dbd 100644
--- a/configure.ac
+++ b/configure.ac
@@ -887,6 +887,17 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_BOOL(with, oauth, no,
+              [build with OAuth 2.0 support],
+              [AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])])
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1388,6 +1399,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = yes ; then
+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for OAuth support])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1606,6 +1621,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" != no; then
+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbdc1c4bda..c9c61a9c99 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..3fa95ac7e8
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif /* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 635fbb2181..1b3332601e 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -319,6 +319,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `iddawc' library (-liddawc). */
+#undef HAVE_LIBIDDAWC
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -922,6 +925,9 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 3c53393fa4..727305c578 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -62,6 +62,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_oauth),yes)
+OBJS += \
+	fe-auth-oauth.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -83,7 +88,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..383c9d4bdb
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,744 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include <iddawc.h>
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static void oauth_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
+						   char **output, int *outputlen,
+						   bool *done, bool *success);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+} fe_oauth_state;
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
+
+	state = malloc(sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+static const char *
+iddawc_error_string(int errcode)
+{
+	switch (errcode)
+	{
+		case I_OK:
+			return "I_OK";
+
+		case I_ERROR:
+			return "I_ERROR";
+
+		case I_ERROR_PARAM:
+			return "I_ERROR_PARAM";
+
+		case I_ERROR_MEMORY:
+			return "I_ERROR_MEMORY";
+
+		case I_ERROR_UNAUTHORIZED:
+			return "I_ERROR_UNAUTHORIZED";
+
+		case I_ERROR_SERVER:
+			return "I_ERROR_SERVER";
+	}
+
+	return "<unknown>";
+}
+
+static void
+iddawc_error(PGconn *conn, int errcode, const char *msg)
+{
+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
+	appendPQExpBuffer(&conn->errorMessage,
+					  libpq_gettext(" (iddawc error %s)\n"),
+					  iddawc_error_string(errcode));
+}
+
+static void
+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
+{
+	const char *error_code;
+	const char *desc;
+
+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
+
+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
+	if (!error_code)
+	{
+		/*
+		 * The server didn't give us any useful information, so just print the
+		 * error code.
+		 */
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("(iddawc error %s)\n"),
+						  iddawc_error_string(err));
+		return;
+	}
+
+	/* If the server gave a string description, print that too. */
+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
+	if (desc)
+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
+
+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
+}
+
+static char *
+get_auth_token(PGconn *conn)
+{
+	PQExpBuffer	token_buf = NULL;
+	struct _i_session session;
+	int			err;
+	int			auth_method;
+	bool		user_prompted = false;
+	const char *verification_uri;
+	const char *user_code;
+	const char *access_token;
+	const char *token_type;
+	char	   *token = NULL;
+
+	if (!conn->oauth_discovery_uri)
+		return strdup(""); /* ask the server for one */
+
+	if (!conn->oauth_client_id)
+	{
+		/* We can't talk to a server without a client identifier. */
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("no oauth_client_id is set for the connection"));
+		return NULL;
+	}
+
+	i_init_session(&session);
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, conn->oauth_discovery_uri);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
+		goto cleanup;
+	}
+
+	err = i_get_openid_config(&session);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer has no token endpoint"));
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer does not support device authorization"));
+		goto cleanup;
+	}
+
+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set device code response type");
+		goto cleanup;
+	}
+
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+
+	err = i_set_parameter_list(&session,
+		I_OPT_CLIENT_ID, conn->oauth_client_id,
+		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+		I_OPT_TOKEN_METHOD, auth_method,
+		I_OPT_SCOPE, conn->oauth_scope,
+		I_OPT_NONE
+	);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set client identifier");
+		goto cleanup;
+	}
+
+	err = i_run_device_auth_request(&session);
+	if (err)
+	{
+		iddawc_request_error(conn, &session, err,
+							"failed to obtain device authorization");
+		goto cleanup;
+	}
+
+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
+	if (!verification_uri)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a verification URI"));
+		goto cleanup;
+	}
+
+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
+	if (!user_code)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a user code"));
+		goto cleanup;
+	}
+
+	/*
+	 * Poll the token endpoint until either the user logs in and authorizes the
+	 * use of a token, or a hard failure occurs. We perform one ping _before_
+	 * prompting the user, so that we don't make them do the work of logging in
+	 * only to find that the token endpoint is completely unreachable.
+	 */
+	err = i_run_token_request(&session);
+	while (err)
+	{
+		const char *error_code;
+		uint		interval;
+
+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
+
+		/*
+		 * authorization_pending and slow_down are the only acceptable errors;
+		 * anything else and we bail.
+		 */
+		if (!error_code || (strcmp(error_code, "authorization_pending")
+							&& strcmp(error_code, "slow_down")))
+		{
+			iddawc_request_error(conn, &session, err,
+								"OAuth token retrieval failed");
+			goto cleanup;
+		}
+
+		if (!user_prompted)
+		{
+			/*
+			 * Now that we know the token endpoint isn't broken, give the user
+			 * the login instructions.
+			 */
+			pqInternalNotice(&conn->noticeHooks,
+							 "Visit %s and enter the code: %s",
+							 verification_uri, user_code);
+
+			user_prompted = true;
+		}
+
+		/*
+		 * We are required to wait between polls; the server tells us how long.
+		 * TODO: if interval's not set, we need to default to five seconds
+		 * TODO: sanity check the interval
+		 */
+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
+
+		/*
+		 * A slow_down error requires us to permanently increase our retry
+		 * interval by five seconds. RFC 8628, Sec. 3.5.
+		 */
+		if (!strcmp(error_code, "slow_down"))
+		{
+			interval += 5;
+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
+		}
+
+		sleep(interval);
+
+		/*
+		 * XXX Reset the error code before every call, because iddawc won't do
+		 * that for us. This matters if the server first sends a "pending" error
+		 * code, then later hard-fails without sending an error code to
+		 * overwrite the first one.
+		 *
+		 * That we have to do this at all seems like a bug in iddawc.
+		 */
+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
+
+		err = i_run_token_request(&session);
+	}
+
+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+
+	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a bearer token"));
+		goto cleanup;
+	}
+
+	appendPQExpBufferStr(token_buf, "Bearer ");
+	appendPQExpBufferStr(token_buf, access_token);
+
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	token = strdup(token_buf->data);
+
+cleanup:
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+	i_clean_session(&session);
+
+	return token;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn)
+{
+	static const char * const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBuffer	token_buf;
+	PQExpBuffer	discovery_buf = NULL;
+	char	   *token = NULL;
+	char	   *response = NULL;
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	if (!conn->oauth_discovery_uri && conn->oauth_issuer)
+	{
+		discovery_buf = createPQExpBuffer();
+		if (!discovery_buf)
+			goto cleanup;
+
+		appendPQExpBufferStr(discovery_buf, conn->oauth_issuer);
+		appendPQExpBufferStr(discovery_buf, "/.well-known/openid-configuration");
+
+		if (PQExpBufferBroken(discovery_buf))
+			goto cleanup;
+
+		conn->oauth_discovery_uri = strdup(discovery_buf->data);
+	}
+
+	token = get_auth_token(conn);
+	if (!token)
+		goto cleanup;
+
+	appendPQExpBuffer(token_buf, resp_format, token);
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	response = strdup(token_buf->data);
+
+cleanup:
+	if (token)
+		free(token);
+	if (discovery_buf)
+		destroyPQExpBuffer(discovery_buf);
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char		   *errmsg; /* any non-NULL value stops all processing */
+	PQExpBufferData errbuf; /* backing memory for errmsg */
+	int				nested; /* nesting level (zero is the top) */
+
+	const char	   *target_field_name; /* points to a static allocation */
+	char		  **target_field;      /* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char		   *status;
+	char		   *scope;
+	char		   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static void
+oauth_json_object_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+}
+
+static void
+oauth_json_object_end(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	--ctx->nested;
+}
+
+static void
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+	{
+		/* short-circuit */
+		free(name);
+		return;
+	}
+
+	if (ctx->nested == 1)
+	{
+		if (!strcmp(name, ERROR_STATUS_FIELD))
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (!strcmp(name, ERROR_SCOPE_FIELD))
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+}
+
+static void
+oauth_json_array_start(void *state)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+		return; /* short-circuit */
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+}
+
+static void
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx	   *ctx = state;
+
+	if (oauth_json_has_error(ctx))
+	{
+		/* short-circuit */
+		free(token);
+		return;
+	}
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return; /* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext		lex = {0};
+	JsonSemAction		sem = {0};
+	JsonParseErrorType	err;
+	struct json_ctx		ctx = {0};
+	char			   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL\n"));
+		return false;
+	}
+
+	initJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		errmsg = json_errdetail(err, &lex);
+	}
+	else if (PQExpBufferDataBroken(ctx.errbuf))
+	{
+		errmsg = libpq_gettext("out of memory");
+	}
+	else if (ctx.errmsg)
+	{
+		errmsg = ctx.errmsg;
+	}
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	termJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (!strcmp(ctx.status, "invalid_token"))
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen,
+			   bool *done, bool *success)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*done = false;
+	*success = false;
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn);
+			if (!*output)
+				goto error;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			break;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				*done = true;
+				*success = true;
+
+				break;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				goto error;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+				goto error;
+
+			*outputlen = strlen(*output); /* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			break;
+
+		case FE_OAUTH_SERVER_ERROR:
+			/*
+			 * After an error, the server should send an error response to fail
+			 * the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge which
+			 * isn't defined in the RFC, or completed the handshake successfully
+			 * after telling us it was going to fail. Neither is acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			goto error;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			goto error;
+	}
+
+	return;
+
+error:
+	*done = true;
+	*success = false;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state);
+}
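As a reference point for the file above: the message assembled by client_initial_response() uses the framing from RFC 7628, sec. 3.1 — a gs2 header, then ^A-separated key/value pairs, terminated by an empty pair. A minimal Python sketch of just that framing (the token value is a placeholder, not a real bearer token):

```python
KVSEP = "\x01"  # the ^A separator between OAUTHBEARER kvpairs


def oauthbearer_initial_response(token: str) -> str:
    """Assemble an OAUTHBEARER client-initial-response (RFC 7628, sec. 3.1).

    "n,," is the gs2 header (no channel binding, no authzid); a single
    kvpair carries the bearer token, and an empty kvpair ends the message.
    """
    return "n,," + KVSEP + "auth=Bearer " + token + KVSEP + KVSEP


resp = oauthbearer_initial_response("sometoken")
assert resp.split(KVSEP) == ["n,,", "auth=Bearer sometoken", "", ""]
```

This mirrors the C resp_format string, `"n,," kvsep "auth=%s" kvsep kvsep`, and is the same layout the pytest suite's get_auth_value() helper decodes on the server side.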
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index da3c30b87b..b1bb382f70 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -65,6 +65,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -92,7 +94,8 @@ typedef struct pg_fe_sasl_mech
 	 *			   Ignored if *done is false.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
+	void		(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen,
 							 bool *done, bool *success);
 
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index e616200704..681b76adbe 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
+static void scram_exchange(void *opaq, bool final,
+						   char *input, int inputlen,
 						   char **output, int *outputlen,
 						   bool *done, bool *success);
 static bool scram_channel_bound(void *opaq);
@@ -206,7 +207,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static void
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen,
 			   bool *done, bool *success)
 {
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 6fceff561b..2567a34023 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -38,6 +38,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -422,7 +423,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -444,8 +445,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support.  Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -485,6 +485,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,7 +523,17 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				!selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -547,18 +558,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
@@ -576,7 +588,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, false,
 						 NULL, -1,
 						 &initialresponse, &initialresponselen,
 						 &done, &success);
@@ -657,7 +669,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
+	conn->sasl->exchange(conn->sasl_state, final,
 						 challenge, payloadlen,
 						 &output, &outputlen,
 						 &done, &success);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 049a8bb1a1..2a56774019 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -28,4 +28,7 @@ extern const pg_fe_sasl_mech pg_scram_mech;
 extern char *pg_fe_scram_build_secret(const char *password,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index cf554d389f..fdd30d71de 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -344,6 +344,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Target-Session-Attrs", "", 15, /* sizeof("prefer-standby") = 15 */
 	offsetof(struct pg_conn, target_session_attrs)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -606,6 +623,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -3386,6 +3404,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -4166,6 +4194,16 @@ freePGconn(PGconn *conn)
 		free(conn->rowBuf);
 	if (conn->target_session_attrs)
 		free(conn->target_session_attrs);
+	if (conn->oauth_issuer)
+		free(conn->oauth_issuer);
+	if (conn->oauth_discovery_uri)
+		free(conn->oauth_discovery_uri);
+	if (conn->oauth_client_id)
+		free(conn->oauth_client_id);
+	if (conn->oauth_client_secret)
+		free(conn->oauth_client_secret);
+	if (conn->oauth_scope)
+		free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0cee4b142..0dff13505a 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -394,6 +394,14 @@ struct pg_conn
 	char	   *ssl_max_protocol_version;	/* maximum TLS protocol version */
 	char	   *target_session_attrs;	/* desired session properties */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;			/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery document */
+	char	   *oauth_client_id;		/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;			/* access token scope */
+	bool		oauth_want_retry;		/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
-- 
2.25.1
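Before the next patch, a note on the server error body handled by handle_oauth_sasl_error() above: it is a small JSON object defined by RFC 7628, sec. 3.2.2, with a required "status" field and optional "scope" and "openid-configuration" fields, all strings. A hedged Python sketch of the same validation rules (field names match the C code; the error wording is illustrative only):

```python
import json


def parse_sasl_error(payload: str) -> dict:
    """Parse an OAUTHBEARER server error body (RFC 7628, sec. 3.2.2).

    Only string-valued top-level fields are meaningful; "status" is
    required, "scope" and "openid-configuration" are optional.
    """
    doc = json.loads(payload)
    if not isinstance(doc, dict):
        raise ValueError("top-level element must be an object")

    out = {}
    for field in ("status", "scope", "openid-configuration"):
        if field in doc:
            if not isinstance(doc[field], str):
                raise ValueError('field "%s" must be a string' % field)
            out[field] = doc[field]

    if "status" not in out:
        raise ValueError("server sent error response without a status")
    return out


err = parse_sasl_error('{"status": "invalid_token", '
                       '"openid-configuration": "https://example.org/'
                       '.well-known/openid-configuration"}')
assert err["status"] == "invalid_token"
```

As in the C implementation, a "status" of "invalid_token" plus a usable discovery URI is the signal that the client may fetch a fresh token and retry.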

v4-0009-Add-pytest-suite-for-OAuth.patchtext/x-patch; name=v4-0009-Add-pytest-suite-for-OAuth.patchDownload
From 6d8fd9e5b352fd0847c9454ced2b763a6b11e73f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v4 09/10] Add pytest suite for OAuth

Requires Python 3; on the first run of `make installcheck` the
dependencies will be installed into ./venv for you. See the README for
more details.
---
 src/test/python/.gitignore                 |    2 +
 src/test/python/Makefile                   |   38 +
 src/test/python/README                     |   54 ++
 src/test/python/client/__init__.py         |    0
 src/test/python/client/conftest.py         |  126 +++
 src/test/python/client/test_client.py      |  180 ++++
 src/test/python/client/test_oauth.py       |  936 ++++++++++++++++++
 src/test/python/pq3.py                     |  727 ++++++++++++++
 src/test/python/pytest.ini                 |    4 +
 src/test/python/requirements.txt           |    7 +
 src/test/python/server/__init__.py         |    0
 src/test/python/server/conftest.py         |   45 +
 src/test/python/server/test_oauth.py       | 1012 ++++++++++++++++++++
 src/test/python/server/test_server.py      |   21 +
 src/test/python/server/validate_bearer.py  |  101 ++
 src/test/python/server/validate_reflect.py |   34 +
 src/test/python/test_internals.py          |  138 +++
 src/test/python/test_pq3.py                |  558 +++++++++++
 src/test/python/tls.py                     |  195 ++++
 19 files changed, 4178 insertions(+)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100755 src/test/python/server/validate_bearer.py
 create mode 100755 src/test/python/server/validate_reflect.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py

diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..0bda582c4b
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,54 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..f38da7a138
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,126 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+    client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
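One property worth noting about the helpers above: Hi(str, salt, i) from RFC 5802, sec. 2.2 is exactly PBKDF2 with HMAC-SHA-256 at the digest length, so the hand-rolled h_i() can be cross-checked against the standard library. A stdlib-only sketch, independent of the test fixtures:

```python
import hashlib
import hmac


def h_i(data: bytes, salt: bytes, i: int) -> bytes:
    """Hi(str, salt, i) from RFC 5802, sec. 2.2, using only the stdlib."""
    assert i > 0

    # U1 = HMAC(data, salt || INT(1)); Ui = HMAC(data, U(i-1));
    # Hi = U1 XOR U2 XOR ... XOR Ui.
    u = hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    acc = u
    for _ in range(i - 1):
        u = hmac.new(data, u, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, u))
    return acc


# Hi() coincides with PBKDF2-HMAC-SHA-256 (RFC 5802 notes this equivalence).
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 2
)
```

The salt and iteration count here match the values test_scram() sends in its server-first-message.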
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..a754a9c0b6
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,936 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import http.server
+import json
+import secrets
+import sys
+import threading
+import time
+import urllib.parse
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 2)
+    assert key == b"auth"
+
+    return value
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            self.server.serve_forever()
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+            self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _discovery_handler(self, headers, params):
+            oauth = self.server.oauth
+
+            doc = {
+                "issuer": oauth.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+            }
+
+            for name, path in oauth.endpoint_paths.items():
+                doc[name] = oauth.issuer + path
+
+            return 200, doc
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            code, resp = handler(self.headers, params)
+
+            self.send_response(code)
+            self.send_header("Content-Type", "application/json")
+            self.end_headers()
+
+            resp = json.dumps(resp)
+            resp = resp.encode("utf-8")
+            self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            if self.path == "/.well-known/openid-configuration":
+                self._handle(handler=self._discovery_handler)
+                return
+
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_with_explicit_issuer(
+    capfd, accept, openid_provider, retries, scope, secret
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response until the configured number of
+            # retries has been exhausted.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user with the expected
+        # authorization URL and user code.
+        expected = f"Visit {verification_url} and enter the code: {user_code}"
+        _, stderr = capfd.readouterr()
+        assert expected in stderr
+
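(For reviewers following the device flow: the endpoints above exercise the client-side polling loop of RFC 8628, Sec. 3.4-3.5. A standalone sketch of that loop, with hypothetical helper names not taken from the patch, may make the retry semantics easier to follow:)

```python
import time


def poll_for_token(request_token, interval, max_attempts=10):
    # Client side of RFC 8628: poll the token endpoint, honoring the
    # authorization_pending and slow_down error codes.
    for _ in range(max_attempts):
        status, resp = request_token()
        if status == 200:
            return resp["access_token"]
        error = resp.get("error")
        if error == "authorization_pending":
            pass  # user hasn't approved yet; keep waiting
        elif error == "slow_down":
            interval += 5  # mandatory five-second back-off bump
        else:
            raise RuntimeError(f"token request failed: {error}")
        time.sleep(interval)
    raise TimeoutError("authorization not granted in time")


# Simulated token endpoint: pending twice, then success.
responses = iter(
    [
        (400, {"error": "authorization_pending"}),
        (400, {"error": "authorization_pending"}),
        (200, {"access_token": "tok", "token_type": "bearer"}),
    ]
)
token = poll_for_token(lambda: next(responses), interval=0)
assert token == "tok"
```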
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the
+            # desired error response until the configured number of retries
+            # has been exhausted.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "invalid_client",
+                "error_description": "client authentication failed",
+            },
+            r"client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            {"error": "invalid_request"},
+            r"\(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            {},
+            r"failed to obtain device authorization",
+            id="broken error response",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            {
+                "error": "expired_token",
+                "error_description": "the device code has expired",
+            },
+            r"the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            {"error": "access_denied"},
+            r"\(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            {},
+            r"OAuth token retrieval failed",
+            id="broken error response",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+        return 400, failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should not continue the connection due to the hardcoded
+            # provider failure; we disconnect here.
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
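(End-of-file note for reviewers: the wire format that `get_auth_value()` above picks apart is the OAUTHBEARER client-initial-response of RFC 7628, Sec. 3.1 — a GS2 header followed by ^A-separated key/value pairs and a terminating ^A^A. A minimal sketch, with hypothetical helper names not taken from the patch:)

```python
def build_initial_response(token):
    # GS2 header "n,," (no channel binding, no authzid), one auth kvpair,
    # then the terminating \x01. An empty auth value triggers discovery.
    auth = (b"Bearer " + token.encode("ascii")) if token else b""
    return b"n,,\x01auth=" + auth + b"\x01\x01"


def parse_auth_value(initial):
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"
    assert kvpairs[-2:] == [b"", b""]  # terminator yields two empty strings
    key, _, value = kvpairs[1].partition(b"=")
    assert key == b"auth"
    return value


msg = build_initial_response("opaque-token")
assert parse_auth_value(msg) == b"Bearer opaque-token"
assert parse_auth_value(build_initial_response(None)) == b""  # discovery probe
```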
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..3a22dad0b6
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,727 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
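+
+# Usage sketch (not exercised here): a v3 startup packet can be built from just
+# its key/value payload; the len and proto fields are then filled in by their
+# Default computations:
+#
+#     Startup.build(dict(payload={"user": "postgres"}))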
+
+# Pq3
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
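+
+# Usage sketch: a Query packet needs only its type and payload; len is computed
+# automatically via _payload_len():
+#
+#     Pq3.build(dict(type=types.Query, payload=dict(query=b"SELECT 1;")))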
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    unprintable = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            unprintable += bytes([i])
+
+    unprintable += bytes(range(128, 256))
+
+    return bytes.maketrans(unprintable, b"." * len(unprintable))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
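+# Usage sketch (mirroring the conftest.py fixture; the host, port, and
+# credentials here are placeholder assumptions):
+#
+#     with socket.create_connection(("localhost", 5432)) as sock:
+#         with wrap(sock, debug_stream=sys.stdout) as conn:
+#             handshake(conn, user=b"postgres", database=b"postgres")
+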
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..32f105ea84
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,7 @@
+black
+cryptography~=3.4.6
+construct~=2.10.61
+isort~=5.6
+psycopg2~=2.8.6
+pytest~=6.1
+pytest-asyncio~=0.14.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..ba7342a453
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,45 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+
+import pytest
+
+import pq3
+
+
+@pytest.fixture
+def connect():
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. The calling test will be
+    skipped automatically if a server is not running at PGHOST:PGPORT, so it's
+    best to connect as soon as possible after the test case begins, to avoid
+    doing unnecessary work.
+    """
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            addr = (pq3.pghost(), pq3.pgport())
+
+            try:
+                sock = socket.create_connection(addr, timeout=2)
+            except ConnectionError as e:
+                pytest.skip(f"unable to connect to {addr}: {e}")
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..cb5ca7fa23
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1012 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_TOKEN_SIZE = 4096
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def skip_if_no_postgres():
+    """
+    Used by the oauth_ctx fixture to skip this test module if no Postgres server
+    is running.
+
+    This logic is nearly a duplicate of the conn fixture's. Ideally oauth_ctx
+    would depend on that, but a module-scope fixture can't depend on a
+    test-scope fixture, and we haven't reached the rule of three yet.
+    """
+    addr = (pq3.pghost(), pq3.pgport())
+
+    try:
+        with socket.create_connection(addr, timeout=2):
+            pass
+    except ConnectionError as e:
+        pytest.skip(f"unable to connect to {addr}: {e}")
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx():
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    skip_if_no_postgres()  # don't bother running these tests without a server
+
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = (
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    )
+    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
+
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Make this test script the server's oauth_validator.
+        path = pathlib.Path(__file__).parent / "validate_bearer.py"
+        path = str(path.absolute())
+
+        cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+@pytest.fixture(scope="module", autouse=True)
+def authn_id_extension(oauth_ctx):
+    """
+    Performs a `CREATE EXTENSION authn_id` in the test database. This fixture is
+    autoused, so tests don't need to rely on it.
+    """
+    conn = psycopg2.connect(database=oauth_ctx.dbname)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        c.execute("CREATE EXTENSION authn_id;")
+
+
+@pytest.fixture(scope="session")
+def shared_mem():
+    """
+    Yields a shared memory segment that can be used for communication between
+    the bearer_token fixture and ./validate_bearer.py.
+    """
+    size = MAX_TOKEN_SIZE + 2  # two byte length prefix
+    mem = shared_memory.SharedMemory(SHARED_MEM_NAME, create=True, size=size)
+
+    try:
+        with contextlib.closing(mem):
+            yield mem
+    finally:
+        mem.unlink()
+
+
+@pytest.fixture()
+def bearer_token(shared_mem):
+    """
+    Returns a factory function that, when called, will store a Bearer token in
+    shared_mem. If token is None (the default), a new token will be generated
+    using secrets.token_urlsafe() and returned; otherwise the passed token will
+    be used as-is.
+
+    When token is None, the generated token size in bytes may be specified as an
+    argument; if unset, a small 16-byte token will be generated. The token size
+    may not exceed MAX_TOKEN_SIZE in any case.
+
+    The return value is the token, converted to a bytes object.
+
+    As a special case for testing failure modes, accept_any may be set to True.
+    This signals to the validator command that any bearer token should be
+    accepted. The returned token in this case may be used or discarded as needed
+    by the test.
+    """
+
+    def set_token(token=None, *, size=16, accept_any=False):
+        if token is not None:
+            size = len(token)
+
+        if size > MAX_TOKEN_SIZE:
+            raise ValueError(f"token size {size} exceeds maximum size {MAX_TOKEN_SIZE}")
+
+        if token is None:
+            if size % 4:
+                raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+            token = secrets.token_urlsafe(size // 4 * 3)
+            assert len(token) == size
+
+        try:
+            token = token.encode("ascii")
+        except AttributeError:
+            pass  # already encoded
+
+        if accept_any:
+            # Two-byte magic value.
+            shared_mem.buf[:2] = struct.pack("H", MAX_UINT16)
+        else:
+            # Two-byte length prefix, then the token data.
+            shared_mem.buf[:2] = struct.pack("H", len(token))
+            shared_mem.buf[2 : size + 2] = token
+
+        return token
+
+    return set_token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(conn, oauth_ctx, bearer_token, auth_prefix, token_len):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    auth = auth_prefix + token
+
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(conn, oauth_ctx, bearer_token, token_value):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=bearer_token(token_value))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(conn, oauth_ctx, bearer_token, user, authn_id, should_succeed):
+    token = None
+
+    authn_id = authn_id(oauth_ctx)
+    if authn_id is not None:
+        authn_id = authn_id.encode("ascii")
+
+        # As a hack to get the validator to reflect arbitrary output from this
+        # test, encode the desired output as a base64 token. The validator will
+        # key on the leading "output=" to differentiate this from the random
+        # tokens generated by secrets.token_urlsafe().
+        output = b"output=" + authn_id + b"\n"
+        token = base64.urlsafe_b64encode(output)
+
+    token = bearer_token(token)
+    username = user(oauth_ctx)
+
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token)
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [authn_id]
+
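As a standalone illustration of the "output=" hack described in the comment above, here is a hypothetical round trip (the helper names are made up for this sketch): the desired validator output is wrapped in a urlsafe-base64 token, and the validator side decodes it and keeps whatever follows the `output=` prefix, trailing newline included.

```python
import base64

# Hypothetical helpers mirroring the test's token hack; not part of the patch.
def encode_output_token(authn_id: bytes) -> bytes:
    # Wrap the desired validator output in a urlsafe-base64 bearer token.
    return base64.urlsafe_b64encode(b"output=" + authn_id + b"\n")

def decode_output_token(token: bytes) -> bytes:
    # altchars=b"-_" corresponds to the urlsafe alphabet, as in the validator.
    data = base64.b64decode(token, altchars=b"-_", validate=True)
    assert data.startswith(b"output=")
    return data[len(b"output=") :]

token = encode_output_token(b"me@example.com")
assert decode_output_token(token) == b"me@example.com\n"
```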
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Fails the test unless exactly one
+        matching field is found.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx, bearer_token):
+    # Generate a new bearer token, which we will proceed not to use.
+    _ = bearer_token()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer me@example.com",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(conn, oauth_ctx, bearer_token, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    _ = bearer_token(accept_any=True)
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + bearer_token() + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+@pytest.fixture()
+def set_validator():
+    """
+    A per-test fixture that allows a test to override the setting of
+    oauth_validator_command for the cluster. The setting will be reverted during
+    teardown.
+
+    Passing None will perform an ALTER SYSTEM RESET.
+    """
+    conn = psycopg2.connect("")
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Save the previous value.
+        c.execute("SHOW oauth_validator_command;")
+        prev_cmd = c.fetchone()[0]
+
+        def setter(cmd):
+            if cmd is None:
+                c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+            else:
+                c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous value.
+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_oauth_no_validator(oauth_ctx, set_validator, connect, bearer_token):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+def test_oauth_validator_role(oauth_ctx, set_validator, connect):
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    # Log in. Note that the reflection validator ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=oauth_ctx.user)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = oauth_ctx.user.encode("utf-8")
+    assert row.columns == [expected]
+
+
+def test_oauth_role_with_shell_unsafe_characters(oauth_ctx, set_validator, connect):
+    """
+    XXX This test pins undesirable behavior. We should be able to handle any
+    valid Postgres role name.
+    """
+    # Switch the validator implementation. This validator will reflect the
+    # PGUSER as the authenticated identity.
+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
+    path = str(path.absolute())
+
+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
+    conn = connect()
+
+    unsafe_username = "hello'there"
+    begin_oauth_handshake(conn, oauth_ctx, user=unsafe_username)
+
+    # The server should reject the handshake.
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_failure(conn, oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/server/validate_bearer.py b/src/test/python/server/validate_bearer.py
new file mode 100755
index 0000000000..2cc73ff154
--- /dev/null
+++ b/src/test/python/server/validate_bearer.py
@@ -0,0 +1,101 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It doesn't actually validate
+# anything, and it logs the bearer token data, which is sensitive.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. Memory is shared and communicated from that test module's
+# bearer_token() fixture.
+#
+# This script must run under the Postgres server environment; keep the
+# dependency list fairly standard.
+
+import base64
+import binascii
+import contextlib
+import struct
+import sys
+from multiprocessing import shared_memory
+
+MAX_UINT16 = 2 ** 16 - 1
+
+
+def remove_shm_from_resource_tracker():
+    """
+    Monkey-patch multiprocessing.resource_tracker so SharedMemory won't be
+    tracked. Pulled from this thread, where there are more details:
+
+        https://bugs.python.org/issue38119
+
+    TL;DR: all clients of shared memory segments automatically destroy them on
+    process exit, which makes shared memory segments much less useful. This
+    monkeypatch removes that behavior so that we can defer to the test to manage
+    the segment lifetime.
+
+    Ideally a future Python patch will pull in this fix and then the entire
+    function can go away.
+    """
+    from multiprocessing import resource_tracker
+
+    def fix_register(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.register(name, rtype)
+
+    resource_tracker.register = fix_register
+
+    def fix_unregister(name, rtype):
+        if rtype == "shared_memory":
+            return
+        return resource_tracker._resource_tracker.unregister(name, rtype)
+
+    resource_tracker.unregister = fix_unregister
+
+    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
+        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
+
+
+def main(args):
+    remove_shm_from_resource_tracker()  # XXX remove some day
+
+    # Get the expected token from the currently running test.
+    shared_mem_name = args[0]
+
+    mem = shared_memory.SharedMemory(shared_mem_name)
+    with contextlib.closing(mem):
+        # First two bytes are the token length.
+        size = struct.unpack("H", mem.buf[:2])[0]
+
+        if size == MAX_UINT16:
+            # Special case: the test wants us to accept any token.
+            sys.stderr.write("accepting token without validation\n")
+            return
+
+        # The remainder of the buffer contains the expected token.
+        assert size <= (mem.size - 2)
+        expected_token = mem.buf[2 : size + 2].tobytes()
+
+        mem.buf[:] = b"\0" * mem.size  # scribble over the token
+
+    token = sys.stdin.buffer.read()
+    if token != expected_token:
+        sys.exit(f"failed to match Bearer token ({token!r} != {expected_token!r})")
+
+    # See if the test wants us to print anything. If so, it will have encoded
+    # the desired output in the token with an "output=" prefix.
+    try:
+        # altchars="-_" corresponds to the urlsafe alphabet.
+        data = base64.b64decode(token, altchars="-_", validate=True)
+
+        if data.startswith(b"output="):
+            sys.stdout.buffer.write(data[7:])
+
+    except binascii.Error:
+        pass
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
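The shared-memory protocol read by main() above (a native-endian uint16 length header followed by the raw token, with MAX_UINT16 as an accept-anything sentinel) can be sketched from the writer's side like this. The helper names are invented for illustration, and a plain bytearray stands in for the SharedMemory buffer that the bearer_token() fixture would populate:

```python
import struct

MAX_UINT16 = 2**16 - 1

# Hypothetical writer side of the protocol; a bytearray stands in for
# the multiprocessing.shared_memory.SharedMemory buffer.
def write_token(buf: bytearray, token: bytes) -> None:
    assert len(token) <= len(buf) - 2
    # First two bytes are the native-endian token length.
    struct.pack_into("H", buf, 0, len(token))
    buf[2 : 2 + len(token)] = token

def read_token(buf: bytearray) -> bytes:
    (size,) = struct.unpack_from("H", buf, 0)
    if size == MAX_UINT16:
        return b""  # sentinel: accept any token without validation
    return bytes(buf[2 : 2 + size])

buf = bytearray(64)
write_token(buf, b"secret-token")
assert read_token(buf) == b"secret-token"
```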
diff --git a/src/test/python/server/validate_reflect.py b/src/test/python/server/validate_reflect.py
new file mode 100755
index 0000000000..24c3a7e715
--- /dev/null
+++ b/src/test/python/server/validate_reflect.py
@@ -0,0 +1,34 @@
+#! /usr/bin/env python3
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It ignores the bearer token
+# entirely and automatically logs the user in.
+#
+# This executable is used as an oauth_validator_command in concert with
+# test_oauth.py. It expects the user's desired role name as an argument; the
+# actual token will be discarded and the user will be logged in with the role
+# name as the authenticated identity.
+#
+# This script must run under the Postgres server environment; keep the
+# dependency list fairly standard.
+
+import sys
+
+
+def main(args):
+    # We have to read the entire token as our first action to unblock the
+    # server, but we won't actually use it.
+    _ = sys.stdin.buffer.read()
+
+    if len(args) != 1:
+        sys.exit("usage: ./validate_reflect.py ROLE")
+
+    # Log the user in as the provided role.
+    role = args[0]
+    print(role)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
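The hex-dump layout that the _DebugStream tests above assert on follows a fixed shape: a direction marker, a four-digit hex offset, a hex column padded to 16 byte positions, and a printable-ASCII column. A minimal sketch of a formatter producing that layout (this is not the actual pq3 implementation, just a reconstruction of the format from the expected strings):

```python
# Sketch of one dump line in the format the tests expect:
# "<dir> <offset>:\t<hex, padded to 16 columns>\t<printable ascii>\n"
def hexdump_line(offset: int, chunk: bytes, direction: str = "<") -> str:
    # 16 bytes of "xx " pairs minus the trailing space = 47 columns.
    hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(16 * 3 - 1)
    # Non-printable bytes render as ".".
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{direction} {offset:04x}:\t{hexpart}\t{text}\n"

line = hexdump_line(0x10, b"qrstu")
assert line == "< 0010:\t" + "71 72 73 74 75".ljust(47) + "\tqrstu\n"
```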
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..e0c0e0568d
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,558 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        ("PGUSER", pq3.pguser, getpass.getuser()),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
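
As an aside for readers following the test vectors above: the DataRow wire framing that `test_DataRow_parse`/`test_DataRow_build` exercise can be reproduced with plain `struct` calls. The sketch below is standalone illustration only — it reimplements the framing from the test vectors and is not the patch's `pq3` module:

```python
import struct


def build_data_row(columns):
    """Build the payload of a v3 DataRow ('D') message.

    Framing (as exercised by the tests above): a big-endian Int16 column
    count, then for each column an Int32 byte length followed by the raw
    bytes. A NULL column is encoded as length -1 (0xFFFFFFFF) with no bytes.
    """
    out = struct.pack("!H", len(columns))
    for col in columns:
        if col is None:
            out += struct.pack("!i", -1)
        else:
            out += struct.pack("!i", len(col)) + col
    return out


def frame(msg_type, payload):
    """Outer v3 framing: one type byte, then an Int32 length that counts
    itself (4 bytes) plus the payload, then the payload."""
    return msg_type + struct.pack("!i", len(payload) + 4) + payload
```

For example, `frame(b"D", build_data_row([b"abcd"]))` reproduces the `implied len/type for DataRow` vector, and `build_data_row([None, None])` reproduces the `null columns` vector.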
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
-- 
2.25.1

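For orientation, the StartupMessage encoding that `test_Startup_build` and `test_protocol` verify can also be written out by hand. This is a hedged standalone sketch (it mirrors the behavior the tests assert, using plain `struct` calls rather than the patch's `pq3.Startup.build`, and assumes Python 3.7+ dict insertion order for the parameter pairs):

```python
import struct


def protocol(major, minor):
    """Pack a protocol version as the single Int32 sent on the wire:
    major in the high 16 bits, minor in the low 16 bits."""
    return (major << 16) | minor


def build_startup(params, proto=(3, 0)):
    """Build a v3 StartupMessage: Int32 total length (counting itself),
    Int32 protocol version, then NUL-terminated key/value parameter
    pairs, closed by one final NUL byte."""
    body = struct.pack("!i", protocol(*proto))
    for k, v in params.items():
        body += k.encode() + b"\x00" + v.encode() + b"\x00"
    body += b"\x00"
    return struct.pack("!i", len(body) + 4) + body
```

With `user=jsmith` and `database=postgres` this yields the same bytes as the `auto-serialization of dict parameters` test vector; the special SSLRequest "version" (1234, 5679) packs to `\x04\xd2\x16\x2f` as in `test_protocol`.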
Attachment: v4-0010-contrib-oauth-switch-to-pluggable-auth-API.patch (text/x-patch)
From f520c08e1aee5239051e304c8a8faf5cb25bdbf2 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 25 Mar 2022 16:35:30 -0700
Subject: [PATCH v4 10/10] contrib/oauth: switch to pluggable auth API

Move the core server implementation to contrib/oauth as a pluggable
provider, using the RegisterAuthProvider() API. oauth_validator_command
has been moved from core to a custom GUC. HBA options are handled
using the new hook. Tests have been updated to handle the new
implementations.

One server modification remains: allowing custom SASL mechanisms to
declare their own maximum message length.

This patch is optional; you can apply/revert it to compare the two
approaches.
---
 contrib/oauth/Makefile                        | 16 ++++
 .../auth-oauth.c => contrib/oauth/oauth.c     | 88 ++++++++++++++++---
 src/backend/libpq/Makefile                    |  1 -
 src/backend/libpq/auth.c                      |  7 --
 src/backend/libpq/hba.c                       | 27 +-----
 src/backend/utils/misc/guc.c                  | 12 ---
 src/include/libpq/hba.h                       |  6 +-
 src/include/libpq/oauth.h                     | 24 -----
 src/test/python/README                        |  3 +-
 src/test/python/server/test_oauth.py          | 20 ++---
 10 files changed, 104 insertions(+), 100 deletions(-)
 create mode 100644 contrib/oauth/Makefile
 rename src/backend/libpq/auth-oauth.c => contrib/oauth/oauth.c (90%)
 delete mode 100644 src/include/libpq/oauth.h

diff --git a/contrib/oauth/Makefile b/contrib/oauth/Makefile
new file mode 100644
index 0000000000..880bc1fef3
--- /dev/null
+++ b/contrib/oauth/Makefile
@@ -0,0 +1,16 @@
+# contrib/oauth/Makefile
+
+MODULE_big = oauth
+OBJS = oauth.o
+PGFILEDESC = "oauth - auth provider supporting OAuth 2.0/OIDC"
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = contrib/oauth
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/backend/libpq/auth-oauth.c b/contrib/oauth/oauth.c
similarity index 90%
rename from src/backend/libpq/auth-oauth.c
rename to contrib/oauth/oauth.c
index c1232a31a0..e83f3c5d99 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/contrib/oauth/oauth.c
@@ -1,33 +1,39 @@
-/*-------------------------------------------------------------------------
+/* -------------------------------------------------------------------------
  *
- * auth-oauth.c
+ * oauth.c
  *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
  *
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * src/backend/libpq/auth-oauth.c
+ * contrib/oauth/oauth.c
  *
- *-------------------------------------------------------------------------
+ * -------------------------------------------------------------------------
  */
+
 #include "postgres.h"
 
 #include <unistd.h>
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
-#include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "utils/guc.h"
+
+PG_MODULE_MAGIC;
+
+void _PG_init(void);
 
 /* GUC */
-char *oauth_validator_command;
+static char *oauth_validator_command;
 
 static void  oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
@@ -35,7 +41,7 @@ static int   oauth_exchange(void *opaq, const char *input, int inputlen,
 							char **output, int *outputlen, const char **logdetail);
 
 /* Mechanism declaration */
-const pg_be_sasl_mech pg_be_oauth_mech = {
+static const pg_be_sasl_mech oauth_mech = {
 	oauth_get_mechanisms,
 	oauth_init,
 	oauth_exchange,
@@ -57,12 +63,13 @@ struct oauth_ctx
 	Port	   *port;
 	const char *issuer;
 	const char *scope;
+	bool		skip_usermap;
 };
 
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool validate(struct oauth_ctx *ctx, const char *auth, const char **logdetail);
 static bool run_validator_command(Port *port, const char *token);
 static bool check_exit(FILE **fh, const char *command);
 static bool unset_cloexec(int fd);
@@ -84,6 +91,7 @@ static void *
 oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 {
 	struct oauth_ctx *ctx;
+	ListCell	   *lc;
 
 	if (strcmp(selected_mech, OAUTHBEARER_NAME))
 		ereport(ERROR,
@@ -96,8 +104,21 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	ctx->port = port;
 
 	Assert(port->hba);
-	ctx->issuer = port->hba->oauth_issuer;
-	ctx->scope = port->hba->oauth_scope;
+
+	foreach (lc, port->hba->custom_auth_options)
+	{
+		CustomOption *option = lfirst(lc);
+
+		if (strcmp(option->name, "issuer") == 0)
+			ctx->issuer = option->value;
+		else if (strcmp(option->name, "scope") == 0)
+			ctx->scope = option->value;
+		else if (strcmp(option->name, "trust_validator_authz") == 0)
+		{
+			if (strcmp(option->value, "1") == 0)
+				ctx->skip_usermap = true;
+		}
+	}
 
 	return ctx;
 }
@@ -248,7 +269,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
 
-	if (!validate(ctx->port, auth, logdetail))
+	if (!validate(ctx, auth, logdetail))
 	{
 		generate_error_response(ctx, output, outputlen);
 
@@ -415,12 +436,13 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 }
 
 static bool
-validate(Port *port, const char *auth, const char **logdetail)
+validate(struct oauth_ctx *ctx, const char *auth, const char **logdetail)
 {
 	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
 										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 										"0123456789-._~+/";
 
+	Port	   *port = ctx->port;
 	const char *token;
 	size_t		span;
 	int			ret;
@@ -497,7 +519,7 @@ validate(Port *port, const char *auth, const char **logdetail)
 	if (!run_validator_command(port, token))
 		return false;
 
-	if (port->hba->oauth_skip_usermap)
+	if (ctx->skip_usermap)
 	{
 		/*
 		 * If the validator is our authorization authority, we're done.
@@ -795,3 +817,41 @@ username_ok_for_shell(const char *username)
 
 	return true;
 }
+
+static int CheckOAuth(Port *port)
+{
+	return CheckSASLAuth(&oauth_mech, port, NULL, NULL);
+}
+
+static const char *OAuthError(Port *port)
+{
+	return psprintf("OAuth bearer authentication failed for user \"%s\"",
+					port->user_name);
+}
+
+static bool OAuthCheckOption(char *name, char *val,
+							 struct HbaLine *hbaline, char **errmsg)
+{
+	if (!strcmp(name, "issuer"))
+		return true;
+	if (!strcmp(name, "scope"))
+		return true;
+	if (!strcmp(name, "trust_validator_authz"))
+		return true;
+
+	return false;
+}
+
+void
+_PG_init(void)
+{
+	RegisterAuthProvider("oauth", CheckOAuth, OAuthError, OAuthCheckOption);
+
+	DefineCustomStringVariable("oauth.validator_command",
+							   gettext_noop("Command to validate OAuth v2 bearer tokens."),
+							   NULL,
+							   &oauth_validator_command,
+							   "",
+							   PGC_SIGHUP, GUC_SUPERUSER_ONLY,
+							   NULL, NULL, NULL);
+}
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 98eb2a8242..6d385fd6a4 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,7 +15,6 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
-	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 17042d84ad..4a8a63922a 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -30,7 +30,6 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
-#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -299,9 +298,6 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
-		case uaOAuth:
-			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
-			break;
 		case uaCustom:
 			{
 				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
@@ -630,9 +626,6 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
-		case uaOAuth:
-			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
-			break;
 		case uaCustom:
 			{
 				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index cd3b1cc140..6bf986d5b3 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -137,7 +137,6 @@ static const char *const UserAuthName[] =
 	"radius",
 	"custom",
 	"peer",
-	"oauth",
 };
 
 
@@ -1402,8 +1401,6 @@ parse_hba_line(TokenizedLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
-	else if (strcmp(token->string, "oauth") == 0)
-		parsedline->auth_method = uaOAuth;
 	else if (strcmp(token->string, "custom") == 0)
 		parsedline->auth_method = uaCustom;
 	else
@@ -1733,9 +1730,8 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
 			hbaline->auth_method != uaCert &&
-			hbaline->auth_method != uaOAuth &&
 			hbaline->auth_method != uaCustom)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, oauth, and custom"));
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and custom"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2119,27 +2115,6 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
-	else if (strcmp(name, "issuer") == 0)
-	{
-		if (hbaline->auth_method != uaOAuth)
-			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
-		hbaline->oauth_issuer = pstrdup(val);
-	}
-	else if (strcmp(name, "scope") == 0)
-	{
-		if (hbaline->auth_method != uaOAuth)
-			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
-		hbaline->oauth_scope = pstrdup(val);
-	}
-	else if (strcmp(name, "trust_validator_authz") == 0)
-	{
-		if (hbaline->auth_method != uaOAuth)
-			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
-		if (strcmp(val, "1") == 0)
-			hbaline->oauth_skip_usermap = true;
-		else
-			hbaline->oauth_skip_usermap = false;
-	}
 	else if (strcmp(name, "provider") == 0)
 	{
 		REQUIRE_AUTH_OPTION(uaCustom, "provider", "custom");
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 9a5b2aa496..f70f7f5c01 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -59,7 +59,6 @@
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
-#include "libpq/oauth.h"
 #include "miscadmin.h"
 #include "optimizer/cost.h"
 #include "optimizer/geqo.h"
@@ -4667,17 +4666,6 @@ static struct config_string ConfigureNamesString[] =
 		check_backtrace_functions, assign_backtrace_functions, NULL
 	},
 
-	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
-			NULL,
-			GUC_SUPERUSER_ONLY
-		},
-		&oauth_validator_command,
-		"",
-		NULL, NULL, NULL
-	},
-
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index e405103a2e..bbc94363cb 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -40,8 +40,7 @@ typedef enum UserAuth
 	uaRADIUS,
 	uaCustom,
 	uaPeer,
-	uaOAuth
-#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
+#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -129,9 +128,6 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
-	char	   *oauth_issuer;
-	char	   *oauth_scope;
-	bool		oauth_skip_usermap;
 	char	   *custom_provider;
 	List	   *custom_auth_options;
 } HbaLine;
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
deleted file mode 100644
index 870e426af1..0000000000
--- a/src/include/libpq/oauth.h
+++ /dev/null
@@ -1,24 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * oauth.h
- *	  Interface to libpq/auth-oauth.c
- *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
- * Portions Copyright (c) 1994, Regents of the University of California
- *
- * src/include/libpq/oauth.h
- *
- *-------------------------------------------------------------------------
- */
-#ifndef PG_OAUTH_H
-#define PG_OAUTH_H
-
-#include "libpq/libpq-be.h"
-#include "libpq/sasl.h"
-
-extern char *oauth_validator_command;
-
-/* Implementation */
-extern const pg_be_sasl_mech pg_be_oauth_mech;
-
-#endif /* PG_OAUTH_H */
diff --git a/src/test/python/README b/src/test/python/README
index 0bda582c4b..0fbc1046cf 100644
--- a/src/test/python/README
+++ b/src/test/python/README
@@ -13,7 +13,8 @@ but you can adjust as needed for your setup.
 
 ## Requirements
 
-A supported version (3.6+) of Python.
+- A supported version (3.6+) of Python.
+- The oauth extension must be installed and loaded via shared_preload_libraries.
 
 The first run of
 
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
index cb5ca7fa23..07fc25edc2 100644
--- a/src/test/python/server/test_oauth.py
+++ b/src/test/python/server/test_oauth.py
@@ -103,9 +103,9 @@ def oauth_ctx():
 
     ctx = Context()
     hba_lines = (
-        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
-        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
-        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+        f'host {ctx.dbname} {ctx.map_user}   samehost custom provider=oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost custom provider=oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost custom provider=oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
     )
     ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
 
@@ -126,12 +126,12 @@ def oauth_ctx():
         c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
         c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
 
-        # Make this test script the server's oauth_validator.
+        # Make this test script the server's oauth validator.
         path = pathlib.Path(__file__).parent / "validate_bearer.py"
         path = str(path.absolute())
 
         cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
-        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+        c.execute("ALTER SYSTEM SET oauth.validator_command TO %s;", (cmd,))
 
         # Replace pg_hba and pg_ident.
         c.execute("SHOW hba_file;")
@@ -149,7 +149,7 @@ def oauth_ctx():
         # Put things back the way they were.
         c.execute("SELECT pg_reload_conf();")
 
-        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
+        c.execute("ALTER SYSTEM RESET oauth.validator_command;")
         c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
         c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
         c.execute(sql.SQL("DROP ROLE {};").format(map_user))
@@ -930,7 +930,7 @@ def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
 def set_validator():
     """
     A per-test fixture that allows a test to override the setting of
-    oauth_validator_command for the cluster. The setting will be reverted during
+    oauth.validator_command for the cluster. The setting will be reverted during
     teardown.
 
     Passing None will perform an ALTER SYSTEM RESET.
@@ -942,17 +942,17 @@ def set_validator():
         c = conn.cursor()
 
         # Save the previous value.
-        c.execute("SHOW oauth_validator_command;")
+        c.execute("SHOW oauth.validator_command;")
         prev_cmd = c.fetchone()[0]
 
         def setter(cmd):
-            c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
+            c.execute("ALTER SYSTEM SET oauth.validator_command TO %s;", (cmd,))
             c.execute("SELECT pg_reload_conf();")
 
         yield setter
 
         # Restore the previous value.
-        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
+        c.execute("ALTER SYSTEM SET oauth.validator_command TO %s;", (prev_cmd,))
         c.execute("SELECT pg_reload_conf();")
 
 
-- 
2.25.1

#18mahendrakar s
mahendrakarforpg@gmail.com
In reply to: Jacob Champion (#7)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Hackers,

We are trying to implement AAD (Azure AD) support in PostgreSQL, which
can be achieved with support for the OAuth method. To support AAD on
top of OAuth in a generic fashion (i.e. for all other OAuth providers),
we are proposing this patch. It basically exposes two new hooks (one
for error reporting and one for OAuth provider-specific token
validation) and passes the OAuth bearer token through to the backend.
It also adds support for the client credentials flow of OAuth in
addition to the device code flow which Jacob has proposed.

The changes for each component are summarized below.

1. Provider-specific extension:
Each OAuth provider implements its own token validator as an
extension. The extension registers an OAuth provider hook, which is
matched to a line in the HBA file.

2. Add support to pass on the OAuth bearer token. In this mode,
obtaining the bearer token is left to the 3rd party application or user.

./psql -U <username> -d 'dbname=postgres
oauth_client_id=<client_id> oauth_bearer_token=<token>'

3. HBA: An additional param ‘provider’ is added for the oauth method:
"oauth" is defined as the method, and the provider, issuer endpoint,
and expected audience are passed as options.

* * * * oauth provider=<token validation extension>
issuer=.... scope=....

4. Engine Backend:
Support for the generic OAUTHBEARER type: requesting the client to
provide a token and passing that token on to the provider-specific extension.

5. Engine Frontend: Two-tiered approach.
        a) libpq transparently passes on the token received
from the 3rd party client as-is to the backend.
        b) libpq optionally compiled for clients which
explicitly need libpq to orchestrate OAuth communication with the
issuer. (This depends heavily on the 3rd party library iddawc, as
Jacob already pointed out. The library seems to support all the OAuth
flows.)

Please let us know your thoughts, as the proposed method supports
different OAuth flows with the use of provider-specific hooks. We
think this proposal would be useful for various OAuth providers.

Thanks,
Mahendrakar.


On Tue, 20 Sept 2022 at 10:18, Jacob Champion <pchampion@vmware.com> wrote:

On Tue, 2021-06-22 at 23:22 +0000, Jacob Champion wrote:

On Fri, 2021-06-18 at 11:31 +0300, Heikki Linnakangas wrote:

A few small things caught my eye in the backend oauth_exchange function:

+       /* Handle the client's initial message. */
+       p = strdup(input);

this strdup() should be pstrdup().

Thanks, I'll fix that in the next re-roll.

In the same function, there are a bunch of reports like this:

ereport(ERROR,
+                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
+                           errmsg("malformed OAUTHBEARER message"),
+                           errdetail("Comma expected, but found character \"%s\".",
+                                     sanitize_char(*p))));

I don't think the double quotes are needed here, because sanitize_char
will return quotes if it's a single character. So it would end up
looking like this: ... found character "'x'".

I'll fix this too. Thanks!

v2, attached, incorporates Heikki's suggested fixes and also rebases on
top of latest HEAD, which had the SASL refactoring changes committed
last month.

The biggest change from the last patchset is 0001, an attempt at
enabling jsonapi in the frontend without the use of palloc(), based on
suggestions by Michael and Tom from last commitfest. I've also made
some improvements to the pytest suite. No major changes to the OAuth
implementation yet.

--Jacob

Attachments:

v1-0001-oauth-provider-support.patch (application/x-patch)
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index c47211132c..86f820482b 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -24,7 +24,9 @@
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
+#include "miscadmin.h"
 #include "storage/fd.h"
+#include "utils/memutils.h"
 
 /* GUC */
 char *oauth_validator_command;
@@ -34,6 +36,13 @@ static void *oauth_init(Port *port, const char *selected_mech, const char *shado
 static int   oauth_exchange(void *opaq, const char *input, int inputlen,
 							char **output, int *outputlen, char **logdetail);
 
+/*----------------------------------------------------------------
+ * OAuth Authentication
+ *----------------------------------------------------------------
+ */
+static List *oauth_providers = NIL;
+static OAuthProvider* oauth_provider = NULL;
+
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
 	oauth_get_mechanisms,
@@ -63,15 +72,90 @@ static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
 static bool validate(Port *port, const char *auth, char **logdetail);
-static bool run_validator_command(Port *port, const char *token);
+static const char* run_validator_command(Port *port, const char *token);
 static bool check_exit(FILE **fh, const char *command);
 static bool unset_cloexec(int fd);
-static bool username_ok_for_shell(const char *username);
 
 #define KVSEP 0x01
 #define AUTH_KEY "auth"
 #define BEARER_SCHEME "Bearer "
 
+/*----------------------------------------------------------------
+ * OAuth Token Validator
+ *----------------------------------------------------------------
+ */
+
+/*
+ * RegisterOAuthProvider registers an OAuth token validator to be
+ * used for OAuth token validation. It adds the validator's
+ * name and hooks to the list of loaded token validators. The right validator's
+ * hooks can then be called based on the validator name specified in
+ * pg_hba.conf.
+ *
+ * This function should be called in _PG_init() by any extension looking to
+ * add a custom OAuth token validator.
+ */
+void
+RegisterOAuthProvider(
+	const char *provider_name,
+	OAuthProviderCheck_hook_type OAuthProviderCheck_hook,
+	OAuthProviderError_hook_type OAuthProviderError_hook	
+)
+{
+	if (!process_shared_preload_libraries_in_progress)
+	{
+		ereport(ERROR,
+			(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("RegisterOAuthProvider can only be called by a shared_preload_library")));
+		return;
+	}
+
+	MemoryContext oldcxt;
+	if (oauth_provider == NULL)
+	{
+		oldcxt = MemoryContextSwitchTo(TopMemoryContext);
+		oauth_provider = palloc(sizeof(OAuthProvider));
+		oauth_provider->name = pstrdup(provider_name);
+		oauth_provider->oauth_provider_hook = OAuthProviderCheck_hook;
+		oauth_provider->oauth_error_hook = OAuthProviderError_hook;		
+		oauth_providers = lappend(oauth_providers, oauth_provider);
+		MemoryContextSwitchTo(oldcxt);
+	}
+	else
+	{
+		if (oauth_provider && oauth_provider->name)
+		{
+			ereport(ERROR,
+				(errmsg("OAuth provider \"%s\" is already loaded.",
+					oauth_provider->name)));
+		}
+		else
+		{
+			ereport(ERROR,
+				(errmsg("OAuth provider is already loaded.")));
+		}
+	}
+}
+
+/*
+ * Returns the OAuth provider (which includes its
+ * callback functions) based on the name specified.
+ */
+OAuthProvider *get_provider_by_name(const char *name)
+{
+	ListCell *lc;
+	foreach(lc, oauth_providers)
+	{
+		OAuthProvider *provider = (OAuthProvider *) lfirst(lc);
+		if (strcmp(provider->name, name) == 0)
+		{
+			return provider;
+		}
+	}
+
+	return NULL;
+}
+
 static void
 oauth_get_mechanisms(Port *port, StringInfo buf)
 {
@@ -494,17 +578,17 @@ validate(Port *port, const char *auth, char **logdetail)
 	}
 
 	/* Have the validator check the token. */
-	if (!run_validator_command(port, token))
+	if (run_validator_command(port, token) == NULL)
 		return false;
-
+	
 	if (port->hba->oauth_skip_usermap)
 	{
 		/*
-		 * If the validator is our authorization authority, we're done.
-		 * Authentication may or may not have been performed depending on the
-		 * validator implementation; all that matters is that the validator says
-		 * the user can log in with the target role.
-		 */
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator says
+		 * the user can log in with the target role.
+		 */
 		return true;
 	}
 
@@ -524,193 +608,26 @@ validate(Port *port, const char *auth, char **logdetail)
 	return (ret == STATUS_OK);
 }
 
-static bool
+static const char*
 run_validator_command(Port *port, const char *token)
 {
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = { 0 };
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*
-	 * Since popen() is unidirectional, open up a pipe for the other direction.
-	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
-	 * into child processes, which would prevent us from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe2(pipefd, O_CLOEXEC);
-	if (rc < 0)
+	if(oauth_provider->oauth_provider_hook == NULL)
 	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
 		return false;
 	}
 
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	/* Allow the read pipe be passed to the child. */
-	if (!unset_cloexec(rfd))
+	const char *id = oauth_provider->oauth_provider_hook(port, token);
+
+	if (id == NULL)
 	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-					/*
-					 * TODO: decide how this string should be escaped. The role
-					 * is controlled by the client, so if we don't escape it,
-					 * command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some other
-					 * way. For this proof of concept, just be incredibly strict
-					 * about the characters that are allowed in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "re");
-	/* TODO: handle failures */
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
-	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
+		ereport(LOG,
+				(errmsg("OAuth bearer token validation failed" )));
+		return NULL;
 	}
 
-	if (command.data)
-		pfree(command.data);
-
-	return success;
+	set_authn_id(port, id);
+	
+	return id;
 }
 
 static bool
@@ -769,29 +686,3 @@ unset_cloexec(int fd)
 
 	return true;
 }
-
-/*
- * XXX This should go away eventually and be replaced with either a proper
- * escape or a different strategy for communication with the validator command.
- */
-static bool
-username_ok_for_shell(const char *username)
-{
-	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
-	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
-										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-										"0123456789-_./:";
-	size_t	span;
-
-	Assert(username && username[0]); /* should have already been checked */
-
-	span = strspn(username, allowed);
-	if (username[span] != '\0')
-	{
-		ereport(COMMERROR,
-				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
-		return false;
-	}
-
-	return true;
-}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 333051ad3c..0bbcf231d2 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -296,8 +296,14 @@ auth_failed(Port *port, int status, const char *logdetail)
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
 		case uaOAuth:
-			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
-			break;
+			{
+				OAuthProvider *provider = get_provider_by_name(port->hba->oauth_provider);
+				if(provider->oauth_error_hook)
+					errstr = provider->oauth_error_hook(port);
+				else
+					errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+				break;
+			}
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 943e78ddff..94fb5d434d 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -1663,6 +1663,14 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Ensure that a token validation provider name is specified via "provider" for the oauth method.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_provider, "provider", "oauth");
+	}
+
 	return parsedline;
 }
 
@@ -2095,6 +2103,31 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		else
 			hbaline->oauth_skip_usermap = false;
 	}
+	else if (strcmp(name, "provider") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "provider", "oauth");
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("provider", gettext_noop("oauth"));
+		/*
+		 * Verify that the token validator mentioned was loaded via shared_preload_libraries.
+		 */
+		if (get_provider_by_name(val) == NULL)
+		{
+			ereport(elevel,
+					(errcode(ERRCODE_CONFIG_FILE_ERROR),
+					 errmsg("cannot use oauth provider %s",val),
+					 errhint("Load provider token validation via shared_preload_libraries."),
+					 errcontext("line %d of configuration file \"%s\"",
+								line_num, HbaFileName)));
+			*err_msg = psprintf("cannot use oauth provider %s", val);
+
+			return false;
+		}
+		else
+		{
+			hbaline->oauth_provider = pstrdup(val);
+		}
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 485e48970e..938ac399dc 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -44,4 +44,29 @@ extern void set_authn_id(Port *port, const char *id);
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
 extern PGDLLIMPORT ClientAuthentication_hook_type ClientAuthentication_hook;
 
+/* Declarations for oAuth authentication providers */
+typedef const char* (*OAuthProviderCheck_hook_type) (Port *, const char*);
+
+/* Hook for plugins to report error messages in validation_failed() */
+typedef const char * (*OAuthProviderError_hook_type) (Port *);
+
+/* Hook for plugins to validate oauth provider options */
+typedef bool (*OAuthProviderValidateOptions_hook_type)
+			 (char *, char *, HbaLine *, char **);
+
+typedef struct OAuthProvider
+{
+	const char *name;
+	OAuthProviderCheck_hook_type oauth_provider_hook;
+	OAuthProviderError_hook_type oauth_error_hook;	
+} OAuthProvider;
+
+extern void RegisterOAuthProvider
+		(const char *provider_name,
+		OAuthProviderCheck_hook_type OAuthProviderCheck_hook,
+		OAuthProviderError_hook_type OAuthProviderError_hook
+		);
+
+extern OAuthProvider *get_provider_by_name(const char *name);
+
 #endif							/* AUTH_H */
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index c1b1313989..d65395cc22 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -123,6 +123,7 @@ typedef struct HbaLine
 	char	   *radiusports_s;
 	char	   *oauth_issuer;
 	char	   *oauth_scope;
+	char       *oauth_provider;
 	bool		oauth_skip_usermap;
 } HbaLine;
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 91d2c69f16..61a0b80b7e 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -174,6 +174,16 @@ get_auth_token(PGconn *conn)
 	if (!token_buf)
 		goto cleanup;
 
+	if(conn->oauth_bearer_token)
+	{
+		appendPQExpBufferStr(token_buf, "Bearer ");
+		appendPQExpBufferStr(token_buf, conn->oauth_bearer_token);
+		if (PQExpBufferBroken(token_buf))
+			goto cleanup;
+		token = strdup(token_buf->data);
+		goto cleanup;
+	}
+
 	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, conn->oauth_discovery_uri);
 	if (err)
 	{
@@ -201,18 +211,22 @@ get_auth_token(PGconn *conn)
 							 libpq_gettext("issuer does not support device authorization"));
 		goto cleanup;
 	}
+	
+	/* Default to the device flow. */
+	int			session_response_type = I_RESPONSE_TYPE_DEVICE_CODE;
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+	{
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+	}
 
-	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	err = i_set_response_type(&session, session_response_type);
 	if (err)
 	{
 		iddawc_error(conn, err, "failed to set device code response type");
 		goto cleanup;
 	}
 
-	auth_method = I_TOKEN_AUTH_METHOD_NONE;
-	if (conn->oauth_client_secret && *conn->oauth_client_secret)
-		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
-
 	err = i_set_parameter_list(&session,
 		I_OPT_CLIENT_ID, conn->oauth_client_id,
 		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
@@ -250,6 +264,18 @@ get_auth_token(PGconn *conn)
 		goto cleanup;
 	}
 
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+	{
+		session_response_type = I_RESPONSE_TYPE_CLIENT_CREDENTIALS;
+	}
+	
+	err = i_set_response_type(&session, session_response_type);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set session response type");
+		goto cleanup;
+	}
+
 	/*
 	 * Poll the token endpoint until either the user logs in and authorizes the
 	 * use of a token, or a hard failure occurs. We perform one ping _before_
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 2ff450ce05..5d804c8c0d 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -361,6 +361,10 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"OAuth-Scope", "", 15,
 	offsetof(struct pg_conn, oauth_scope)},
 
+	{"oauth_bearer_token", NULL, NULL, NULL,
+		"OAuth-Bearer", "", 20,
+	offsetof(struct pg_conn, oauth_bearer_token)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -4200,6 +4204,8 @@ freePGconn(PGconn *conn)
 		free(conn->oauth_discovery_uri);
 	if (conn->oauth_client_id)
 		free(conn->oauth_client_id);
+	if(conn->oauth_bearer_token)
+		free(conn->oauth_bearer_token);
 	if (conn->oauth_client_secret)
 		free(conn->oauth_client_secret);
 	if (conn->oauth_scope)
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 1b4de3dff0..91e71afe14 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -402,6 +402,7 @@ struct pg_conn
 	char	   *oauth_client_id;		/* client identifier */
 	char	   *oauth_client_secret;	/* client secret */
 	char	   *oauth_scope;			/* access token scope */
+	char       *oauth_bearer_token;		/* oauth token */
 	bool		oauth_want_retry;		/* should we retry on failure? */
 
 	/* Optional file to write trace info to */
#19Jacob Champion
jchampion@timescale.com
In reply to: mahendrakar s (#18)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Mahendrakar, thanks for your interest and for the patch!

On Mon, Sep 19, 2022 at 10:03 PM mahendrakar s
<mahendrakarforpg@gmail.com> wrote:

The changes for each component are summarized below.

1. Provider-specific extension:
Each OAuth provider implements their own token validator as an
extension. Extension registers an OAuth provider hook which is matched
to a line in the HBA file.

How easy is it to write a Bearer validator using C? My limited
understanding was that most providers were publishing libraries in
higher-level languages.

Along those lines, sample validators will need to be provided, both to
help in review and to get the pytest suite green again. (And coverage
for the new code is important, too.)

2. Add support to pass on the OAuth bearer token. In this
obtaining the bearer token is left to 3rd party application or user.

./psql -U <username> -d 'dbname=postgres
oauth_client_id=<client_id> oauth_bearer_token=<token>

This hurts, but I think people are definitely going to ask for it, given
the frightening practice of copy-pasting these (incredibly sensitive
secret) tokens all over the place... Ideally I'd like to implement
sender constraints for the Bearer token, to *prevent* copy-pasting (or,
you know, outright theft). But I'm not sure that sender constraints are
well-implemented yet for the major providers.

3. HBA: An additional param ‘provider’ is added for the oauth method.
Defining "oauth" as method + passing provider, issuer endpoint
and expected audience

* * * * oauth provider=<token validation extension>
issuer=.... scope=....

Naming aside (this conflicts with Samay's previous proposal, I think), I
have concerns about the implementation. There's this code:

+		if (oauth_provider && oauth_provider->name)
+		{
+			ereport(ERROR,
+				(errmsg("OAuth provider \"%s\" is already loaded.",
+					oauth_provider->name)));
+		}

which appears to prevent loading more than one global provider. But
there's also code that deals with a provider list? (Again, it'd help to
have test code covering the new stuff.)

b) libpq optionally compiled for the clients which
explicitly need libpq to orchestrate OAuth communication with the
issuer (it depends heavily on 3rd party library iddawc as Jacob
already pointed out. The library seems to support all the OAuth
flows.)

Speaking of iddawc, I don't think it's a dependency we should choose to
rely on. For all the code that it has, it doesn't seem to provide
compatibility with several real-world providers.

Google, for one, chose not to follow the IETF spec it helped author, and
iddawc doesn't support its flavor of Device Authorization. At another
point, I think iddawc tried to decode Azure's Bearer tokens, which is
incorrect...

I haven't been able to check if those problems have been fixed in a
recent version, but if we're going to tie ourselves to a huge
dependency, I'd at least like to believe that said dependency is
battle-tested and solid, and personally I don't feel like iddawc is.

- auth_method = I_TOKEN_AUTH_METHOD_NONE;
- if (conn->oauth_client_secret && *conn->oauth_client_secret)
- auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;

This code got moved, but I'm not sure why? It doesn't appear to have
made a change to the logic.

+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+	{
+		session_response_type = I_RESPONSE_TYPE_CLIENT_CREDENTIALS;
+	}

Is this an Azure-specific requirement? Ideally a public client (which
psql is) shouldn't have to provide a secret to begin with, if I
understand that bit of the protocol correctly. I think Google also
required provider-specific changes in this part of the code, and
unfortunately I don't think they looked the same as yours.

We'll have to figure all that out... Standards are great; everyone has
one of their own. :)

Thanks,
--Jacob

#20Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#19)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Sep 20, 2022 at 4:19 PM Jacob Champion <jchampion@timescale.com> wrote:

2. Add support to pass on the OAuth bearer token. In this
approach, obtaining the bearer token is left to a 3rd-party application or user.

./psql -U <username> -d 'dbname=postgres
oauth_client_id=<client_id> oauth_bearer_token=<token>'

This hurts, but I think people are definitely going to ask for it, given
the frightening practice of copy-pasting these (incredibly sensitive
secret) tokens all over the place...

After some further thought -- in this case, you already have an opaque
Bearer token (and therefore you already know, out of band, which
provider needs to be used), you're willing to copy-paste it from
whatever service you got it from, and you have an extension plugged
into Postgres on the backend that verifies this Bearer blob using some
procedure that Postgres knows nothing about.

Why do you need the OAUTHBEARER mechanism logic at that point? Isn't
that identical to a custom password scheme? It seems like that could
be handled completely by Samay's pluggable auth proposal.

--Jacob

#21Andrey Chudnovskiy
Andrey.Chudnovskiy@microsoft.com
In reply to: Jacob Champion (#20)
RE: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER

We can support both passing the token from an upstream client and libpq implementing the OAUTH2 protocol to obtain one.

Libpq implementing OAUTHBEARER is needed for community/3rd-party tools to have a user-friendly authentication experience:
1. For community client tools, like pg_admin, psql etc.
Example experience: pg_admin would be able to open a popup dialog to authenticate the customer and keep a refresh token to avoid asking the user frequently.
2. For 3rd-party connectors supporting generic OAUTH with any provider. Useful for data-viz clients, like Tableau or ETL tools. Those can support both user and client OAUTH flows.

Libpq passing the token directly from an upstream client is useful in other scenarios:
1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.
2. Resource-tight (like IoT) clients. Those can be compiled without the optional libpq flag, not including the iddawc or other dependency.

Thanks!
Andrey.


#22Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovskiy (#21)
Re: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Sep 21, 2022 at 3:10 PM Andrey Chudnovskiy
<Andrey.Chudnovskiy@microsoft.com> wrote:

We can support both passing the token from an upstream client and libpq implementing the OAUTH2 protocol to obtain one.

Right, I agree that we could potentially do both.

Libpq passing the token directly from an upstream client is useful in other scenarios:
1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.
2. Resource-tight (like IoT) clients. Those can be compiled without the optional libpq flag, not including the iddawc or other dependency.

What I don't understand is how the OAUTHBEARER mechanism helps you in
this case. You're short-circuiting the negotiation where the server
tells the client what provider to use and what scopes to request, and
instead you're saying "here's a secret string, just take it and
validate it with magic."

I realize the ability to pass an opaque token may be useful, but from
the server's perspective, I don't see what differentiates it from the
password auth method plus a custom authenticator plugin. Why pay for
the additional complexity of OAUTHBEARER if you're not going to use
it?

--Jacob

#23Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#22)
Re: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER

First, My message from corp email wasn't displayed in the thread,
That is what Jacob replied to, let me post it here for context:

We can support both passing the token from an upstream client and libpq implementing OAUTH2 protocol to obtain one.

Libpq implementing OAUTHBEARER is needed for community/3rd-party tools to have a user-friendly authentication experience:

1. For community client tools, like pg_admin, psql etc.
Example experience: pg_admin would be able to open a popup dialog to authenticate customers and keep refresh tokens to avoid asking the user frequently.
2. For 3rd-party connectors supporting generic OAUTH with any provider. Useful for data-viz clients, like Tableau or ETL tools. Those can support both user and client OAUTH flows.

Libpq passing the token directly from an upstream client is useful in other scenarios:
1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.
2. Resource-tight (like IoT) clients. Those can be compiled without the optional libpq flag not including the iddawc or other dependency.

-----------------------------------------------------------------------------------------------------
On this:

What I don't understand is how the OAUTHBEARER mechanism helps you in
this case. You're short-circuiting the negotiation where the server
tells the client what provider to use and what scopes to request, and
instead you're saying "here's a secret string, just take it and
validate it with magic."

I realize the ability to pass an opaque token may be useful, but from
the server's perspective, I don't see what differentiates it from the
password auth method plus a custom authenticator plugin. Why pay for
the additional complexity of OAUTHBEARER if you're not going to use
it?

Yes, passing a token as a new auth method won't make much sense in
isolation. However:
1. Since OAUTHBEARER is supported in the ecosystem, passing a token as
a way to authenticate with OAUTHBEARER is more consistent (IMO) than
passing it as a password.
2. Validation on the backend side doesn't depend on whether the token
is obtained by libpq or transparently passed by the upstream client.
3. Single OAUTH auth method on the server side for both scenarios,
would allow both enterprise clients with their own Token acquisition
and community clients using libpq flows to connect as the same PG
users/roles.


#24Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#23)
Re: [EXTERNAL] Re: [PoC] Federated Authn/z with OAUTHBEARER

On 9/21/22 21:55, Andrey Chudnovsky wrote:

First, My message from corp email wasn't displayed in the thread,

I see it on the public archives [1]. Your client is choosing some pretty
confusing quoting tactics, though, which you may want to adjust. :D

I have what I'll call some "skeptical curiosity" here -- you don't need
to defend your use cases to me by any means, but I'd love to understand
more about them.

Yes, passing a token as a new auth method won't make much sense in
isolation. However:
1. Since OAUTHBEARER is supported in the ecosystem, passing a token as
a way to authenticate with OAUTHBEARER is more consistent (IMO) than
passing it as a password.

Agreed. It's probably not a very strong argument for the new mechanism,
though, especially if you're not using the most expensive code inside it.

2. Validation on the backend side doesn't depend on whether the token
is obtained by libpq or transparently passed by the upstream client.

Sure.

3. Single OAUTH auth method on the server side for both scenarios,
would allow both enterprise clients with their own Token acquisition
and community clients using libpq flows to connect as the same PG
users/roles.

Okay, this is a stronger argument. With that in mind, I want to revisit
your examples and maybe provide some counterproposals:

Libpq passing the token directly from an upstream client is useful in other scenarios:
1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.

I can see that providing a token directly would help you work around
limitations in libpq's "standard" OAuth flows, whether we use iddawc or
not. And it's cheap in terms of implementation. But I have a feeling it
would fall apart rapidly with error cases, where the server is giving
libpq information via the OAUTHBEARER mechanism, but libpq can only
communicate to your wrapper through human-readable error messages on stderr.

This seems like clear motivation for client-side SASL plugins (which
were also discussed on Samay's proposal thread). That's a lot more
expensive to implement in libpq, but if it were hypothetically
available, wouldn't you rather your provider-specific code be able to
speak OAUTHBEARER directly with the server?

2. Resource-tight (like IoT) clients. Those can be compiled without the optional libpq flag not including the iddawc or other dependency.

I want to dig into this much more; resource-constrained systems are near
and dear to me. I can see two cases here:

Case 1: The device is an IoT client that wants to connect on its own
behalf. Why would you want to use OAuth in that case? And how would the
IoT device get its Bearer token to begin with? I'm much more used to
architectures that provision high-entropy secrets for this, whether
they're incredibly long passwords per device (in which case,
channel-bound SCRAM should be a fairly strong choice?) or client certs
(which can be better decentralized, but make for a lot of bookkeeping).

If the answer to that is, "we want an IoT client to be able to connect
using the same role as a person", then I think that illustrates a clear
need for SASL negotiation. That would let the IoT client choose
SCRAM-*-PLUS or EXTERNAL, and the person at the keyboard can choose
OAUTHBEARER. Then we have incredible flexibility, because you don't have
to engineer one mechanism to handle them all.

Case 2: The constrained device is being used as a jump point. So there's
an actual person at a keyboard, trying to get into a backend server
(maybe behind a firewall layer, etc.), and the middlebox is either not
web-connected or is incredibly tiny for some reason. That might be a
good use case for a copy-pasted Bearer token, but is there actual demand
for that use case? What motivation would you (or your end user) have for
choosing a fairly heavy, web-centric authentication method in such a
constrained environment?

Are there other resource-constrained use cases I've missed?

Thanks,
--Jacob

[1]: /messages/by-id/MN0PR21MB31694BAC193ECE1807FD45358F4F9@MN0PR21MB3169.namprd21.prod.outlook.com

#25Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#17)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Mar 25, 2022 at 5:00 PM Jacob Champion <pchampion@vmware.com> wrote:

v4 rebases over the latest version of the pluggable auth patchset
(included as 0001-4). Note that there's a recent conflict as
of d4781d887; use an older commit as the base (or wait for the other
thread to be updated).

Here's a newly rebased v5. (They're all zipped now, which I probably
should have done a while back, sorry.)

- As before, 0001-4 are the pluggable auth set; they've now diverged
from the official version over on the other thread [1].
- I'm not sure that 0005 is still completely coherent after the
rebase, given the recent changes to jsonapi.c. But for now, the tests
are green, and that should be enough to keep the conversation going.
- 0008 will hopefully be obsoleted when the SYSTEM_USER proposal [2] lands.

Thanks,
--Jacob

[1]: /messages/by-id/CAJxrbyxgFzfqby+VRCkeAhJnwVZE50+ZLPx0JT2TDg9LbZtkCg@mail.gmail.com
[2]: /messages/by-id/7e692b8c-0b11-45db-1cad-3afc5b57409f@amazon.com

Attachments:

v5-0004-Add-support-for-map-and-custom-auth-options.patch.gz
v5-0001-Add-support-for-custom-authentication-methods.patch.gz
v5-0002-Add-sample-extension-to-test-custom-auth-provider.patch.gz
v5-0005-common-jsonapi-support-FRONTEND-clients.patch.gz
v5-0003-Add-tests-for-test_auth_provider-extension.patch.gz
v5-0006-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz
v5-0009-Add-pytest-suite-for-OAuth.patch.gz
v5-0010-contrib-oauth-switch-to-pluggable-auth-API.patch.gz
v5-0008-Add-a-very-simple-authn_id-extension.patch.gz
v5-0007-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz
#26Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#25)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Libpq passing the token directly from an upstream client is useful in other scenarios:
1. Enterprise clients, built with .Net / Java and using provider-specific authentication libraries, like MSAL for AAD. Those can also support more advanced provider-specific token acquisition flows.

I can see that providing a token directly would help you work around
limitations in libpq's "standard" OAuth flows, whether we use iddawc or
not. And it's cheap in terms of implementation. But I have a feeling it
would fall apart rapidly with error cases, where the server is giving
libpq information via the OAUTHBEARER mechanism, but libpq can only
communicate to your wrapper through human-readable error messages on stderr.

For providing the token directly: that would primarily be used for
scenarios where the same party controls both the server and the client-side
wrapper.
I.e., the client knows how to get a token for a particular principal
and doesn't need any additional information other than human-readable
messages.
Please clarify the scenarios where you see this falling apart.

I can provide an example in the cloud world. We (Azure) as well as
other providers offer ways to obtain OAUTH tokens for
Service-to-Service communication at IAAS / PAAS level.
On Azure, the "Managed Identity" feature integrated in Compute VM allows a
client to make a local HTTP call to get a token. The VM itself manages the
certificate lifecycle, as well as implements the corresponding OAUTH
flow.
This capability is used by both our 1st party PAAS offerings, as well
as 3rd party services deploying on VMs or managed K8S clusters.
Here, the client doesn't need libpq assistance in obtaining the token.

This seems like clear motivation for client-side SASL plugins (which
were also discussed on Samay's proposal thread). That's a lot more
expensive to implement in libpq, but if it were hypothetically
available, wouldn't you rather your provider-specific code be able to
speak OAUTHBEARER directly with the server?

I generally agree that pluggable auth layers in libpq could be
beneficial. However, as you pointed out in Samay's thread, that would
require a new distribution model for libpq / clients to optionally
include provider-specific logic.

My optimistic plan here would be to implement several core OAUTH flows
in libpq core which would be generic enough to support major
enterprise OAUTH providers:
1. Client Credentials flow (Client_id + Client_secret) for backend applications.
2. Authorization Code Flow with PKCE and/or Device code flow for GUI
applications.

(2.) above would require a protocol between libpq and upstream clients
to exchange several messages.
Your patch includes a way for libpq to deliver to the client a message
about the next authentication steps, so we planned to build on top of
that.

A little about scenarios, we look at.
What we're trying to achieve here is an easy integration path for
multiple players in the ecosystem:
- Managed PaaS Postgres providers (both us and multi-cloud solutions)
- SaaS providers deploying postgres on IaaS/PaaS providers' clouds
- Tools - pg_admin, psql and other ones.
- BI, ETL, Federation and other scenarios where postgres is used as
the data source.

If we can offer a provider-agnostic solution for the Backend <=> libpq <=>
Upstream client path, we can have all players above build support for
OAUTH credentials, managed by the cloud provider of their choice.

For us, that would mean:
- Better administrator experience with pg_admin / psql handling of the
AAD (Azure Active Directory) authentication flows.
- Path for integration solutions using Postgres to build AAD
authentication in their management experience.
- Ability to use AAD identity provider for any Postgres deployments
other than our 1st party PaaS offering.
- Ability to offer github as the identity provider for PaaS Postgres offering.

Other players in the ecosystem above would be able to get the same benefits.

Does that make sense and possible without provider specific libpq plugin?

-------------------------
On resource constrained scenarios.

I want to dig into this much more; resource-constrained systems are near
and dear to me. I can see two cases here:

I just referred to the ability to compile libpq without extra
dependencies to save some kilobytes.
Not sure if OAUTH is widely used in those cases. It involves overhead
anyway, and requires the device to talk to an additional party (OAUTH
provider).
Likely Cert authentication is easier.
If needed, it can get libpq with full OAUTH support and use a client
code. But I didn't think about this scenario.


#27Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#26)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Sep 26, 2022 at 6:39 PM Andrey Chudnovsky
<achudnovskij@gmail.com> wrote:

For providing the token directly: that would primarily be used for
scenarios where the same party controls both the server and the client-side
wrapper.
I.e., the client knows how to get a token for a particular principal
and doesn't need any additional information other than human-readable
messages.
Please clarify the scenarios where you see this falling apart.

The most concrete example I can see is with the OAUTHBEARER error
response. If you want to eventually handle differing scopes per role,
or different error statuses (which the proof-of-concept currently
hardcodes as `invalid_token`), then the client can't assume it knows
what the server is going to say there. I think that's true even if you
control both sides and are hardcoding the provider.

How should we communicate those pieces to a custom client when it's
passing a token directly? The easiest way I can see is for the custom
client to speak the OAUTHBEARER protocol directly (e.g. SASL plugin).
If you had to parse the libpq error message, I don't think that'd be
particularly maintainable.

I can provide an example in the cloud world. We (Azure) as well as
other providers offer ways to obtain OAUTH tokens for
Service-to-Service communication at IAAS / PAAS level.
On Azure, the "Managed Identity" feature integrated in Compute VM allows a
client to make a local HTTP call to get a token. The VM itself manages the
certificate lifecycle, as well as implements the corresponding OAUTH
flow.
This capability is used by both our 1st party PAAS offerings, as well
as 3rd party services deploying on VMs or managed K8S clusters.
Here, the client doesn't need libpq assistance in obtaining the token.

Cool. To me that's the strongest argument yet for directly providing
tokens to libpq.

My optimistic plan here would be to implement several core OAUTH flows
in libpq core which would be generic enough to support major
enterprise OAUTH providers:
1. Client Credentials flow (Client_id + Client_secret) for backend applications.
2. Authorization Code Flow with PKCE and/or Device code flow for GUI
applications.

As long as it's clear to DBAs when to use which flow (because existing
documentation for that is hit-and-miss), I think it's reasonable to
eventually support multiple flows. Personally my preference would be
to start with one or two core flows, and expand outward once we're
sure that we do those perfectly. Otherwise the explosion of knobs and
buttons might be overwhelming, both to users and devs.

Related to the question of flows is the client implementation library.
I've mentioned that I don't think iddawc is production-ready. As far
as I'm aware, there is only one certified OpenID relying party written
in C, and that's... an Apache server plugin. That leaves us either
choosing an untested library, scouring the web for a "tested" library
(and hoping we're right in our assessment), or implementing our own
(which is going to tamp down enthusiasm for supporting many flows,
though that has its own set of benefits). If you know of any reliable
implementations with a C API, please let me know.

(2.) above would require a protocol between libpq and upstream clients
to exchange several messages.
Your patch includes a way for libpq to deliver to the client a message
about the next authentication steps, so planned to build on top of
that.

Specifically it delivers that message to an end user. If you want a
generic machine client to be able to use that, then we'll need to talk
about how.

A little about scenarios, we look at.
What we're trying to achieve here is an easy integration path for
multiple players in the ecosystem:
- Managed PaaS Postgres providers (both us and multi-cloud solutions)
- SaaS providers deploying postgres on IaaS/PaaS providers' clouds
- Tools - pg_admin, psql and other ones.
- BI, ETL, Federation and other scenarios where postgres is used as
the data source.

If we can offer a provider-agnostic solution for the Backend <=> libpq <=>
Upstream client path, we can have all players above build support for
OAUTH credentials, managed by the cloud provider of their choice.

Well... I don't quite understand why we'd go to the trouble of
providing a provider-agnostic communication solution only to have
everyone write their own provider-specific client support. Unless
you're saying Microsoft would provide an officially blessed plugin for
the *server* side only, and Google would provide one of their own, and
so on.

The server side authorization is the only place where I think it makes
sense to specialize by default. libpq should remain agnostic, with the
understanding that we'll need to make hard decisions when a major
provider decides not to follow a spec.

For us, that would mean:
- Better administrator experience with pg_admin / psql handling of the
AAD (Azure Active Directory) authentication flows.
- Path for integration solutions using Postgres to build AAD
authentication in their management experience.
- Ability to use AAD identity provider for any Postgres deployments
other than our 1st party PaaS offering.
- Ability to offer github as the identity provider for PaaS Postgres offering.

GitHub is unfortunately a bit tricky, unless they've started
supporting OpenID recently?

Other players in the ecosystem above would be able to get the same benefits.

Does that make sense and possible without provider specific libpq plugin?

If the players involved implement the flows and follow the specs, yes.
That's a big "if", unfortunately. I think GitHub and Google are two
major players who are currently doing things their own way.

I just referred to the ability to compile libpq without extra
dependencies to save some kilobytes.
Not sure if OAUTH is widely used in those cases. It involves overhead
anyway, and requires the device to talk to an additional party (OAUTH
provider).
Likely Cert authentication is easier.
If needed, it can get libpq with full OAUTH support and use a client
code. But I didn't think about this scenario.

Makes sense. Thanks!

--Jacob

#28Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#27)
Re: [PoC] Federated Authn/z with OAUTHBEARER

The most concrete example I can see is with the OAUTHBEARER error
response. If you want to eventually handle differing scopes per role,
or different error statuses (which the proof-of-concept currently
hardcodes as `invalid_token`), then the client can't assume it knows
what the server is going to say there. I think that's true even if you
control both sides and are hardcoding the provider.

Ok, I see the point. It's related to the topic of communication
between libpq and the upstream client.

How should we communicate those pieces to a custom client when it's
passing a token directly? The easiest way I can see is for the custom
client to speak the OAUTHBEARER protocol directly (e.g. SASL plugin).
If you had to parse the libpq error message, I don't think that'd be
particularly maintainable.

I agree that parsing the message is not a sustainable way.
Could you provide more details on the SASL plugin approach you propose?

Specifically, is this basically a set of extension hooks for the client
side?
With the need for the client to be compiled with the plugins based on
the set of providers it needs.

Well... I don't quite understand why we'd go to the trouble of
providing a provider-agnostic communication solution only to have
everyone write their own provider-specific client support. Unless
you're saying Microsoft would provide an officially blessed plugin for
the *server* side only, and Google would provide one of their own, and
so on.

Yes, via extensions. Identity providers can open source extensions to
use their auth services outside of first party PaaS offerings.
For 3rd party Postgres PaaS or on premise deployments.

The server side authorization is the only place where I think it makes
sense to specialize by default. libpq should remain agnostic, with the
understanding that we'll need to make hard decisions when a major
provider decides not to follow a spec.

Completely agree with an agnostic libpq, though it needs validation with
several major providers to know whether this is possible.

Specifically it delivers that message to an end user. If you want a
generic machine client to be able to use that, then we'll need to talk
about how.

Yes, that's what needs to be decided.
In both Device code and Authorization code scenarios, libpq and the
client would need to exchange a couple of pieces of metadata.
Plus, after success, the client should be able to access a refresh token
for further use.

Can we implement a generic protocol for this between libpq and the
clients?

#29Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#28)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Sep 30, 2022 at 7:47 AM Andrey Chudnovsky
<achudnovskij@gmail.com> wrote:

How should we communicate those pieces to a custom client when it's
passing a token directly? The easiest way I can see is for the custom
client to speak the OAUTHBEARER protocol directly (e.g. SASL plugin).
If you had to parse the libpq error message, I don't think that'd be
particularly maintainable.

I agree that parsing the message is not a sustainable way.
Could you provide more details on the SASL plugin approach you propose?

Specifically, is this basically a set of extension hooks for the client side?
With the need for the client to be compiled with the plugins based on
the set of providers it needs.

That's a good question. I can see two broad approaches, with maybe
some ability to combine them into a hybrid:

1. If there turns out to be serious interest in having libpq itself
handle OAuth natively (with all of the web-facing code that implies,
and all of the questions still left to answer), then we might be able
to provide a "token hook" in the same way that we currently provide a
passphrase hook for OpenSSL keys. By default, libpq would use its
internal machinery to take the provider details, navigate its builtin
flow, and return the Bearer token. If you wanted to override that
behavior as a client, you could replace the builtin flow with your
own, by registering a set of callbacks.
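
To make approach (1) concrete, here is a rough sketch of what such a
token hook could look like on the client side. All names here
(PQsetAuthTokenHook, PGoauthChallenge, acquire_bearer_token) are
hypothetical, loosely modeled on the existing
PQsetSSLKeyPassHook_OpenSSL(); this is an illustration, not a proposed
API:

```c
/*
 * Hypothetical sketch of approach (1): a client-overridable token hook.
 * None of these names exist in libpq today.
 */
#include <stdlib.h>
#include <string.h>

typedef struct PGoauthChallenge
{
	const char *openid_configuration;	/* discovery URL sent by the server */
	const char *scope;					/* scope(s) the server requests */
} PGoauthChallenge;

/* Returns a malloc'd Bearer token, or NULL to fall back to the builtin flow. */
typedef char *(*PQauthTokenHook) (const PGoauthChallenge *challenge);

static PQauthTokenHook token_hook = NULL;

void
PQsetAuthTokenHook(PQauthTokenHook hook)
{
	token_hook = hook;
}

/*
 * Called during the OAUTHBEARER exchange: prefer the client's hook, and
 * fall back to the builtin (e.g. iddawc-driven) flow otherwise.
 */
static char *
acquire_bearer_token(const PGoauthChallenge *challenge)
{
	if (token_hook)
		return token_hook(challenge);

	return NULL;				/* builtin flow would run here instead */
}
```

A GUI client such as pgAdmin could register a hook that runs a
browser-based flow, while plain psql leaves the hook unset and gets the
builtin behavior.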

2. Alternatively, OAuth support could be provided via a mechanism
plugin for some third-party SASL library (GNU libgsasl, Cyrus
libsasl2). We could provide an OAuth plugin in contrib that handles
the default flow. Other providers could publish their alternative
plugins to completely replace the OAUTHBEARER mechanism handling.

Approach (2) would make for some duplicated effort since every
provider has to write code to speak the OAUTHBEARER protocol. It might
simplify provider-specific distribution, since (at least for Cyrus) I
think you could build a single plugin that supports both the client
and server side. But it would be a lot easier to unknowingly (or
knowingly) break the spec, since you'd control both the client and
server sides. There would be less incentive to interoperate.

Finally, we could potentially take pieces from both, by having an
official OAuth mechanism plugin that provides a client-side hook to
override the flow. I have no idea if the benefits would offset the
costs of a plugin-for-a-plugin style architecture. And providers would
still be free to ignore it and just provide a full mechanism plugin
anyway.

Well... I don't quite understand why we'd go to the trouble of
providing a provider-agnostic communication solution only to have
everyone write their own provider-specific client support. Unless
you're saying Microsoft would provide an officially blessed plugin for
the *server* side only, and Google would provide one of their own, and
so on.

Yes, via extensions. Identity providers can open source extensions to
use their auth services outside of first party PaaS offerings.
For 3rd party Postgres PaaS or on premise deployments.

Sounds reasonable.

The server side authorization is the only place where I think it makes
sense to specialize by default. libpq should remain agnostic, with the
understanding that we'll need to make hard decisions when a major
provider decides not to follow a spec.

Completely agree with agnostic libpq. Though needs validation with
several major providers to know if this is possible.

Agreed.

Specifically it delivers that message to an end user. If you want a
generic machine client to be able to use that, then we'll need to talk
about how.

Yes, that's what needs to be decided.
In both Device code and Authorization code scenarios, libpq and the
client would need to exchange a couple of pieces of metadata.
Plus, after success, the client should be able to access a refresh token for further use.

Can we implement a generic protocol for this between libpq and the clients?

I think we can probably prototype a callback hook for approach (1)
pretty quickly. (2) is a lot more work and investigation, but it's
work that I'm interested in doing (when I get the time). I think there
are other very good reasons to consider a third-party SASL library,
and some good lessons to be learned, even if the community decides not
to go down that road.

Thanks,
--Jacob

#30Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#29)
Re: [PoC] Federated Authn/z with OAUTHBEARER

I think we can probably prototype a callback hook for approach (1)
pretty quickly. (2) is a lot more work and investigation, but it's
work that I'm interested in doing (when I get the time). I think there
are other very good reasons to consider a third-party SASL library,
and some good lessons to be learned, even if the community decides not
to go down that road.

Makes sense. We will work on (1) and check whether there are any
blockers for a shared solution supporting GitHub and Google.


#31mahendrakar s
mahendrakarforpg@gmail.com
In reply to: Andrey Chudnovsky (#30)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

We validated libpq handling OAuth natively, with different flows
against different OIDC-certified providers.

Flows: Device Code, Client Credentials, and Refresh Token.
Providers: Microsoft, Google, and Okta.
We also validated against GitHub as an OAuth provider.

We propose using OpenID Connect (OIDC) as the protocol, instead of
plain OAuth, because it provides:
- A discovery mechanism to bridge provider differences and supply metadata.
- A stricter protocol and certification process, so we can reliably
identify which providers can be supported.
- A design focused on authentication, whereas the main purpose of OAuth
is to authorize applications on behalf of the user.

GitHub is not OIDC-certified, so it won't be supported under this
proposal. It may be supported in the future if the extension gains the
ability to provide custom discovery document content.

OpenID Connect defines a well-known discovery mechanism for the
provider configuration URI. It allows libpq to fetch metadata about the
provider (i.e., endpoints, supported grants, response types, etc.).
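
As a sketch of what that discovery provides: fetching
<issuer>/.well-known/openid-configuration returns a JSON document along
these lines (heavily trimmed; all values illustrative):

```json
{
  "issuer": "https://oauth.example.org",
  "authorization_endpoint": "https://oauth.example.org/authorize",
  "token_endpoint": "https://oauth.example.org/token",
  "device_authorization_endpoint": "https://oauth.example.org/device",
  "grant_types_supported": ["authorization_code", "refresh_token",
                            "urn:ietf:params:oauth:grant-type:device_code"],
  "response_types_supported": ["code"],
  "scopes_supported": ["openid", "profile", "email"]
}
```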

In the attached patch (based on the V2 patch in the thread; it does not
contain Samay's changes):
- The provider can configure the issuer URL and scope through the
options hook.
- The server passes an open discovery URL and scope on to libpq.
- libpq handles the OAuth flow based on the flow_type sent in the
connection string [1].
- Added callbacks to notify client tools, via a structure, when the
OAuth flow requires user interaction.
- The backend uses hooks to validate the bearer token.

Note that the authorization code flow with PKCE for GUI clients is not
implemented yet.

Proposed next steps:
- Broaden discussion to reach agreement on the approach.
- Implement libpq changes without iddawc
- Prototype GUI flow with pgAdmin

Thanks,
Mahendrakar.

[1]: connection string for refresh token flow:
./psql -U <user> -d 'dbname=postgres oauth_client_id=<client_id>
oauth_flow_type=<flowtype> oauth_refresh_token=<refresh token>'


Attachments:

v1-0001-oauth-flows-validation-hook-approach.patch (application/octet-stream)
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 3a625847f3..f213a40b65 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -24,15 +24,23 @@
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
+#include "miscadmin.h"
 #include "storage/fd.h"
 
 /* GUC */
 char *oauth_validator_command;
+static OAuthProvider* oauth_provider = NULL;
+
+/*----------------------------------------------------------------
+ * OAuth Authentication
+ *----------------------------------------------------------------
+ */
+static List *oauth_providers = NIL;
 
 static void  oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
 static int   oauth_exchange(void *opaq, const char *input, int inputlen,
-							char **output, int *outputlen, char **logdetail);
+							char **output, int *outputlen, const char **logdetail);
 
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
@@ -43,7 +51,6 @@ const pg_be_sasl_mech pg_be_oauth_mech = {
 	PG_MAX_AUTH_TOKEN_LENGTH,
 };
 
-
 typedef enum
 {
 	OAUTH_STATE_INIT = 0,
@@ -62,7 +69,7 @@ struct oauth_ctx
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, char **logdetail);
+static bool validate(Port *port, const char *auth, const char **logdetail);
 static bool run_validator_command(Port *port, const char *token);
 static bool check_exit(FILE **fh, const char *command);
 static bool unset_cloexec(int fd);
@@ -72,6 +79,86 @@ static bool username_ok_for_shell(const char *username);
 #define AUTH_KEY "auth"
 #define BEARER_SCHEME "Bearer "
 
+#include "utils/memutils.h"
+
+/*----------------------------------------------------------------
+ * OAuth Token Validator
+ *----------------------------------------------------------------
+ */
+
+/*
+ * RegistorOAuthProvider registers a OAuth Token Validator to be
+ * used for oauth token validation. It validates the token and adds the valiator
+ * name and it's hooks to a list of loaded token validator. The right validator's
+ * hooks can then be called based on the validator name specified in
+ * pg_hba.conf.
+ *
+ * This function should be called in _PG_init() by any extension looking to
+ * add a custom authentication method.
+ */
+void
+RegistorOAuthProvider(
+	const char *provider_name,
+	OAuthProviderCheck_hook_type OAuthProviderCheck_hook,
+	OAuthProviderError_hook_type OAuthProviderError_hook,
+	OAuthProviderOptions_hook_type OAuthProviderOptions_hook
+)
+{	
+	if (!process_shared_preload_libraries_in_progress)
+	{
+		ereport(ERROR,
+			(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg("RegistorOAuthProvider can only be called by a shared_preload_library")));
+		return;
+	}
+
+	MemoryContext oldcxt;
+	if (oauth_provider == NULL)
+	{
+		oldcxt = MemoryContextSwitchTo(TopMemoryContext);
+		oauth_provider = palloc(sizeof(OAuthProvider));
+		oauth_provider->name = pstrdup(provider_name);
+		oauth_provider->oauth_provider_hook = OAuthProviderCheck_hook;
+		oauth_provider->oauth_error_hook = OAuthProviderError_hook;
+		oauth_provider->oauth_options_hook = OAuthProviderOptions_hook;
+		oauth_providers = lappend(oauth_providers, oauth_provider);
+		MemoryContextSwitchTo(oldcxt);	
+	}
+	else
+	{
+		if (oauth_provider && oauth_provider->name)
+		{
+			ereport(ERROR,
+				(errmsg("OAuth provider \"%s\" is already loaded.",
+					oauth_provider->name)));
+		}
+		else
+		{
+			ereport(ERROR,
+				(errmsg("OAuth provider is already loaded.")));
+		}
+	}
+}
+
+/*
+ * Returns the oauth provider (which includes it's
+ * callback functions) based on name specified.
+ */
+OAuthProvider *get_provider_by_name(const char *name)
+{
+	ListCell *lc;
+	foreach(lc, oauth_providers)
+	{
+		OAuthProvider *provider = (OAuthProvider *) lfirst(lc);		
+		if (strcmp(provider->name, name) == 0)
+		{
+			return provider;
+		}
+	}
+
+	return NULL;
+}
+
 static void
 oauth_get_mechanisms(Port *port, StringInfo buf)
 {
@@ -102,9 +189,32 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	return ctx;
 }
 
+static void process_oauth_flow_type(pg_oauth_flow_type flow_type, struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData	buf;
+	initStringInfo(&buf);
+
+	OAuthProviderOptions *oauth_options = oauth_provider->oauth_options_hook(flow_type);
+	ctx->scope = oauth_options->scope;
+	ctx->issuer = oauth_options->issuer_url;
+	appendStringInfo(&buf,
+		"{ "
+			"\"status\": \"invalid_token\", "
+			"\"openid-configuration\": \"%s/.well-known/openid-configuration\","	
+			"\"scope\": \"%s\""
+		"}",
+		oauth_options->issuer_url,
+		oauth_options->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+	
+	pfree(oauth_options);
+}
+
 static int
 oauth_exchange(void *opaq, const char *input, int inputlen,
-			   char **output, int *outputlen, char **logdetail)
+			   char **output, int *outputlen, const char **logdetail)
 {
 	char   *p;
 	char	cbind_flag;
@@ -247,11 +357,17 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
-
-	if (!validate(ctx->port, auth, logdetail))
+	
+	/* if not Bearer, process flow_type*/
+	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		process_oauth_flow_type(atoi(auth), ctx, output, outputlen);
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else if(!validate(ctx->port, auth, logdetail))
 	{
 		generate_error_response(ctx, output, outputlen);
-
 		ctx->state = OAUTH_STATE_ERROR;
 		return PG_SASL_EXCHANGE_CONTINUE;
 	}
@@ -415,7 +531,7 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 }
 
 static bool
-validate(Port *port, const char *auth, char **logdetail)
+validate(Port *port, const char *auth, const char **logdetail)
 {
 	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
 										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
@@ -508,7 +624,7 @@ validate(Port *port, const char *auth, char **logdetail)
 		return true;
 	}
 
-		/* Make sure the validator authenticated the user. */
+	/* Make sure the validator authenticated the user. */
 	if (!MyClientConnectionInfo.authn_id)
 	{
 		/* TODO: use logdetail; reduce message duplication */
@@ -518,199 +634,22 @@ validate(Port *port, const char *auth, char **logdetail)
 		return false;
 	}
 
-	/* Finally, check the user map. */
-	ret = check_usermap(port->hba->usermap, port->user_name,
-						MyClientConnectionInfo.authn_id, false);
+	 /* Finally, check the user map. */
+        ret = check_usermap(port->hba->usermap, port->user_name,
+                                              MyClientConnectionInfo.authn_id, false);
 	return (ret == STATUS_OK);
 }
 
 static bool
 run_validator_command(Port *port, const char *token)
 {
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = { 0 };
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*
-	 * Since popen() is unidirectional, open up a pipe for the other direction.
-	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
-	 * into child processes, which would prevent us from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe2(pipefd, O_CLOEXEC);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
-		return false;
-	}
-
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	/* Allow the read pipe be passed to the child. */
-	if (!unset_cloexec(rfd))
-	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-					/*
-					 * TODO: decide how this string should be escaped. The role
-					 * is controlled by the client, so if we don't escape it,
-					 * command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some other
-					 * way. For this proof of concept, just be incredibly strict
-					 * about the characters that are allowed in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "re");
-	/* TODO: handle failures */
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
+	int result = oauth_provider->oauth_provider_hook(port, token);
+	if(result == STATUS_OK)
 	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
+		set_authn_id(port, port->user_name);
+		return true;
 	}
-
-	if (command.data)
-		pfree(command.data);
-
-	return success;
+	return false;
 }
 
 static bool
@@ -780,7 +719,7 @@ username_ok_for_shell(const char *username)
 	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
 	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
 										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-										"0123456789-_./:";
+										"0123456789-_./@:";
 	size_t	span;
 
 	Assert(username && username[0]); /* should have already been checked */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index b62457d57c..7b7b6ff9aa 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -28,6 +28,41 @@ extern void set_authn_id(Port *port, const char *id);
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
 extern PGDLLIMPORT ClientAuthentication_hook_type ClientAuthentication_hook;
+/* Declarations for oAuth authentication providers */
+typedef int (*OAuthProviderCheck_hook_type) (Port *, const char*);
+
+/* Hook for plugins to report error messages in validation_failed() */
+typedef const char * (*OAuthProviderError_hook_type) (Port *);
+
+/* Hook for plugins to validate oauth provider options */
+typedef bool (*OAuthProviderValidateOptions_hook_type)
+			 (char *, char *, HbaLine *, char **);
+
+typedef struct OAuthProviderOptions
+{
+	char				*issuer_url;
+	char 				*scope;
+} OAuthProviderOptions;
+
+/* Hook for plugins to get oauth params */
+typedef OAuthProviderOptions *(*OAuthProviderOptions_hook_type) (pg_oauth_flow_type);
+
+typedef struct OAuthProvider
+{
+	const char *name;
+	OAuthProviderCheck_hook_type oauth_provider_hook;
+	OAuthProviderError_hook_type oauth_error_hook;	
+	OAuthProviderOptions_hook_type oauth_options_hook;
+} OAuthProvider;
+
+extern void RegistorOAuthProvider
+		(const char *provider_name,
+		OAuthProviderCheck_hook_type OAuthProviderCheck_hook,
+		OAuthProviderError_hook_type OAuthProviderError_hook,		
+		OAuthProviderOptions_hook_type OAuthProviderParams_hook
+		);
+
+extern OAuthProvider *get_provider_by_name(const char *name);
 #define PG_MAX_AUTH_TOKEN_LENGTH	65535
 
 #endif							/* AUTH_H */
diff --git a/src/include/libpq/libpq-be.h b/src/include/libpq/libpq-be.h
index 6d452ec6d9..f7bbb9dcf4 100644
--- a/src/include/libpq/libpq-be.h
+++ b/src/include/libpq/libpq-be.h
@@ -68,6 +68,17 @@ typedef enum CAC_state
 	CAC_TOOMANY
 } CAC_state;
 
+/* OAuth flow types (must stay in sync with pg_oauth_flow_type in libpq-int.h) */
+typedef enum pg_oauth_flow_type
+{
+	OAUTH_DEVICE_CODE,
+	OAUTH_CLIENT_CREDENTIALS,
+	OAUTH_AUTH,
+	OAUTH_AUTH_PKCE,
+	OAUTH_REFRESH_TOKEN,
+	OAUTH_AUTH_CODE,
+	OAUTH_NONE
+} pg_oauth_flow_type;
+
 
 /*
  * GSSAPI specific state information
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 91d2c69f16..1ba2e033c4 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -142,6 +142,43 @@ iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *ms
 	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
 }
 
+static pg_oauth_flow_type oauth_get_flow_type(const char *oauthflow)
+{
+	pg_oauth_flow_type flow_type;
+
+	if(!oauthflow)
+	{
+		return OAUTH_NONE;
+	}
+
+	/* client_secret, device_code, auth_code_pkce, refresh_token */
+	if(strcmp(oauthflow, "device_code") == 0)
+	{
+		flow_type = OAUTH_DEVICE_CODE;
+	}
+	else if(strcmp(oauthflow, "client_secret") == 0)
+	{
+		flow_type = OAUTH_CLIENT_CREDENTIALS;
+	}
+	else if(strcmp(oauthflow, "auth_code_pkce") == 0)
+	{
+		flow_type = OAUTH_AUTH_PKCE;
+	}
+	else if(strcmp(oauthflow, "refresh_token") == 0)
+	{
+		flow_type = OAUTH_REFRESH_TOKEN;
+	}
+	else if(strcmp(oauthflow, "auth_code") == 0)
+	{
+		flow_type = OAUTH_AUTH_CODE;
+	}
+	else
+	{
+		flow_type = OAUTH_NONE;
+	}
+	return flow_type;
+}
+
 static char *
 get_auth_token(PGconn *conn)
 {
@@ -150,29 +187,44 @@ get_auth_token(PGconn *conn)
 	int			err;
 	int			auth_method;
 	bool		user_prompted = false;
-	const char *verification_uri;
-	const char *user_code;
-	const char *access_token;
-	const char *token_type;
-	char	   *token = NULL;
-
+	char 	*verification_uri;
+	char 	*user_code;
+	char 	*access_token;
+	char 	*refresh_token;	
+	char 	*token_type;
+	pg_oauth_flow_type flow_type;
+	char	   *token = NULL;	
+	unsigned int session_response_type;
+	PGOAuthMsgObj oauthMsgObj;
+
+	MemSet(&oauthMsgObj, 0x00, sizeof(PGOAuthMsgObj));
+	
 	if (!conn->oauth_discovery_uri)
 		return strdup(""); /* ask the server for one */
 
-	i_init_session(&session);
-
 	if (!conn->oauth_client_id)
 	{
 		/* We can't talk to a server without a client identifier. */
 		appendPQExpBufferStr(&conn->errorMessage,
 							 libpq_gettext("no oauth_client_id is set for the connection"));
-		goto cleanup;
+		return NULL;
 	}
 
-	token_buf = createPQExpBuffer();
+	i_init_session(&session);
 
+	token_buf = createPQExpBuffer();
 	if (!token_buf)
 		goto cleanup;
+	
+	if(conn->oauth_bearer_token)
+	{
+		appendPQExpBufferStr(token_buf, "Bearer ");
+		appendPQExpBufferStr(token_buf, conn->oauth_bearer_token);
+		if (PQExpBufferBroken(token_buf))
+			goto cleanup;
+		token = strdup(token_buf->data);
+		goto cleanup;
+	}
 
 	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, conn->oauth_discovery_uri);
 	if (err)
@@ -181,6 +233,8 @@ get_auth_token(PGconn *conn)
 		goto cleanup;
 	}
 
+	flow_type = oauth_get_flow_type(conn->oauth_flow_type);
+
 	err = i_get_openid_config(&session);
 	if (err)
 	{
@@ -201,18 +255,64 @@ get_auth_token(PGconn *conn)
 							 libpq_gettext("issuer does not support device authorization"));
 		goto cleanup;
 	}
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+
+	/* for refresh token flow, do not run auth request*/
+	if(flow_type == OAUTH_REFRESH_TOKEN && conn->oauth_refresh_token)
+	{
+		err = i_set_parameter_list(&session,
+		I_OPT_CLIENT_ID, conn->oauth_client_id,
+		I_OPT_REFRESH_TOKEN, conn->oauth_refresh_token,
+		I_OPT_RESPONSE_TYPE, I_RESPONSE_TYPE_REFRESH_TOKEN,
+		I_OPT_TOKEN_METHOD, auth_method,
+		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+		I_OPT_SCOPE, conn->oauth_scope,
+		I_OPT_NONE
+		);
+
+		if (err)
+		{
+			iddawc_error(conn, err, "failed to set refresh token flow parameters");
+			goto cleanup;
+		}
 
-	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+		err = i_run_token_request(&session);
+		if (err)
+		{
+			iddawc_request_error(conn, &session, err,
+								 "failed to obtain token authorization with refresh token flow");
+			goto cleanup;
+		}
+
+		access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+		token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+		
+		if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("issuer did not provide a bearer token"));
+			goto cleanup;
+		}
+
+		appendPQExpBufferStr(token_buf, "Bearer ");
+		appendPQExpBufferStr(token_buf, access_token);
+
+		if (PQExpBufferBroken(token_buf))
+			goto cleanup;
+
+		token = strdup(token_buf->data);
+		goto cleanup;
+	}
+
+	/* Default to the device authorization flow. */
+	session_response_type = I_RESPONSE_TYPE_DEVICE_CODE;
+	err = i_set_response_type(&session, session_response_type);
 	if (err)
 	{
 		iddawc_error(conn, err, "failed to set device code response type");
 		goto cleanup;
 	}
 
-	auth_method = I_TOKEN_AUTH_METHOD_NONE;
-	if (conn->oauth_client_secret && *conn->oauth_client_secret)
-		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
-
 	err = i_set_parameter_list(&session,
 		I_OPT_CLIENT_ID, conn->oauth_client_id,
 		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
@@ -225,7 +325,7 @@ get_auth_token(PGconn *conn)
 		iddawc_error(conn, err, "failed to set client identifier");
 		goto cleanup;
 	}
-
+	
 	err = i_run_device_auth_request(&session);
 	if (err)
 	{
@@ -278,14 +378,15 @@ get_auth_token(PGconn *conn)
 
 		if (!user_prompted)
 		{
+			oauthMsgObj.verification_uri = verification_uri;
+			oauthMsgObj.user_code = user_code;
+			conn->oauthNoticeHooks.noticeRecArg = (void*) &oauthMsgObj;
+
 			/*
 			 * Now that we know the token endpoint isn't broken, give the user
 			 * the login instructions.
-			 */
-			pqInternalNotice(&conn->noticeHooks,
-							 "Visit %s and enter the code: %s",
-							 verification_uri, user_code);
-
+			 */			
+			pqInternalOAuthNotice(&conn->oauthNoticeHooks, "");
 			user_prompted = true;
 		}
 
@@ -300,7 +401,7 @@ get_auth_token(PGconn *conn)
 		 * A slow_down error requires us to permanently increase our retry
 		 * interval by five seconds. RFC 8628, Sec. 3.5.
 		 */
 		if (!strcmp(error_code, "slow_down"))
 		{
 			interval += 5;
 			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
@@ -323,6 +424,14 @@ get_auth_token(PGconn *conn)
 
 	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
 	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+	refresh_token = i_get_str_parameter(&session, I_OPT_REFRESH_TOKEN);
+	
+	if(refresh_token)
+	{
+		MemSet(&oauthMsgObj, 0x00, sizeof(PGOAuthMsgObj));
+		oauthMsgObj.refresh_token = refresh_token;
+		pqInternalOAuthNotice(&conn->oauthNoticeHooks, "");
+	}
 
 	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
 	{
@@ -358,6 +467,8 @@ client_initial_response(PGconn *conn)
 	PQExpBuffer	discovery_buf = NULL;
 	char	   *token = NULL;
 	char	   *response = NULL;
+	pg_oauth_flow_type flow_type;
+	char		oauth_flow_str[3];
 
 	token_buf = createPQExpBuffer();
 	if (!token_buf)
@@ -385,8 +496,26 @@ client_initial_response(PGconn *conn)
 	token = get_auth_token(conn);
 	if (!token)
 		goto cleanup;
-
+	
+	if(strcmp(token, "") == 0)
+	{
+		flow_type = oauth_get_flow_type(conn->oauth_flow_type);
+		if(flow_type == OAUTH_NONE)
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("value passed in oauth_flow_type is not valid. "
+								 "Supported flows: client_secret, device_code, auth_code_pkce, refresh_token\n"));
+			goto cleanup;
+		}
+		else
+		{
+			free(token);	/* discard the empty placeholder token */
+			sprintf(oauth_flow_str, "%d", flow_type);
+			token = strdup(oauth_flow_str);
+		}
+	}
 	appendPQExpBuffer(token_buf, resp_format, token);
 	if (PQExpBufferBroken(token_buf))
 		goto cleanup;
 
@@ -406,6 +535,9 @@ cleanup:
 #define ERROR_STATUS_FIELD "status"
 #define ERROR_SCOPE_FIELD "scope"
 #define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+#define ERROR_ISSUER_URL_FIELD  "issuer"
+#define ERROR_AUTH_ENDPOINT_FIELD  "authorization_endpoint"
+#define ERROR_TOKEN_ENDPOINT_FIELD  "token_endpoint"
 
 struct json_ctx
 {
@@ -420,6 +552,9 @@ struct json_ctx
 	char		   *status;
 	char		   *scope;
 	char		   *discovery_uri;
+	char		   *issuer_url;
+	char		   *auth_endpoint;
+	char		   *token_endpoint;
 };
 
 #define oauth_json_has_error(ctx) \
@@ -491,6 +626,21 @@ oauth_json_object_field_start(void *state, char *name, bool isnull)
 			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
 			ctx->target_field = &ctx->discovery_uri;
 		}
+		else if(!strcmp(name, ERROR_ISSUER_URL_FIELD))
+		{
+			ctx->target_field_name = ERROR_ISSUER_URL_FIELD;
+			ctx->target_field = &ctx->issuer_url;
+		}
+		else if(!strcmp(name, ERROR_AUTH_ENDPOINT_FIELD))
+		{
+			ctx->target_field_name = ERROR_AUTH_ENDPOINT_FIELD;
+			ctx->target_field = &ctx->auth_endpoint;
+		}
+		else if(!strcmp(name, ERROR_TOKEN_ENDPOINT_FIELD))
+		{
+			ctx->target_field_name = ERROR_TOKEN_ENDPOINT_FIELD;
+			ctx->target_field = &ctx->token_endpoint;
+		}
 	}
 
 	free(name);
@@ -627,6 +777,15 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 
 		conn->oauth_scope = ctx.scope;
 	}
+
+	if(ctx.issuer_url)
+	{
+		if(conn->oauth_issuer)
+			free(conn->oauth_issuer);
+
+		conn->oauth_issuer = ctx.issuer_url;		
+	}
+
 	/* TODO: missing error scope should clear any existing connection scope */
 
 	if (!ctx.status)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 64f27fee18..e6e8dc48e2 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -358,6 +358,18 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"OAuth-Scope", "", 15,
 	offsetof(struct pg_conn, oauth_scope)},
 
+	{"oauth_bearer_token", NULL, NULL, NULL,
+		"OAuth-Bearer", "", 20,
+	offsetof(struct pg_conn, oauth_bearer_token)},
+
+	{"oauth_flow_type", NULL, NULL, NULL,
+		"OAuth-Flow-Type", "", 20,
+	offsetof(struct pg_conn, oauth_flow_type)},
+
+	{"oauth_refresh_token", NULL, NULL, NULL,
+		"OAuth-Refresh-Token", "", 40,
+	offsetof(struct pg_conn, oauth_refresh_token)},
+	
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -427,6 +439,7 @@ static PQconninfoOption *conninfo_find(PQconninfoOption *connOptions,
 									   const char *keyword);
 static void defaultNoticeReceiver(void *arg, const PGresult *res);
 static void defaultNoticeProcessor(void *arg, const char *message);
+static void OAuthMsgObjReceiver(void *arg, const PGresult *res);
 static int	parseServiceInfo(PQconninfoOption *options,
 							 PQExpBuffer errorMessage);
 static int	parseServiceFile(const char *serviceFile,
@@ -3926,6 +3939,7 @@ makeEmptyPGconn(void)
 	/* install default notice hooks */
 	conn->noticeHooks.noticeRec = defaultNoticeReceiver;
 	conn->noticeHooks.noticeProc = defaultNoticeProcessor;
+	conn->oauthNoticeHooks.noticeRec = OAuthMsgObjReceiver;
 
 	conn->status = CONNECTION_BAD;
 	conn->asyncStatus = PGASYNC_IDLE;
@@ -4073,6 +4087,12 @@ freePGconn(PGconn *conn)
 		free(conn->oauth_client_secret);
 	if (conn->oauth_scope)
 		free(conn->oauth_scope);
+	if(conn->oauth_bearer_token)
+		free(conn->oauth_bearer_token);
+	if(conn->oauth_flow_type)
+		free(conn->oauth_flow_type);	
+	if(conn->oauth_refresh_token)
+		free(conn->oauth_refresh_token);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6991,6 +7011,32 @@ defaultNoticeProcessor(void *arg, const char *message)
 	fprintf(stderr, "%s", message);
 }
 
+static void
+OAuthMsgObjReceiver(void *arg, const PGresult *res)
+{
+	PGOAuthMsgObj *oauthMsg = (PGOAuthMsgObj *) arg;
+
+	if (!oauthMsg)
+		return;
+
+	if(oauthMsg->message)
+	{
+		fprintf(stderr, "%s\n", oauthMsg->message);
+	}
+
+	if(oauthMsg->verification_uri)
+	{
+		fprintf(stderr, "Visit: %s\n", oauthMsg->verification_uri);
+	}
+
+	if(oauthMsg->user_code)
+	{
+		fprintf(stderr, "Enter: %s\n", oauthMsg->user_code);
+	}
+
+	if(oauthMsg->refresh_token)
+	{
+		fprintf(stderr, "Refresh Token: %s\n", oauthMsg->refresh_token);
+	}
+}
+
 /*
  * returns a pointer to the next token or NULL if the current
  * token doesn't match
diff --git a/src/interfaces/libpq/fe-exec.c b/src/interfaces/libpq/fe-exec.c
index da229d632a..4789c1a1fe 100644
--- a/src/interfaces/libpq/fe-exec.c
+++ b/src/interfaces/libpq/fe-exec.c
@@ -976,6 +976,58 @@ pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...)
 	PQclear(res);
 }
 
+/*
+ * pqInternalOAuthNotice - similar to pqInternalNotice,
+ * except that the OAuth notice hooks are invoked instead.
+ */
+void
+pqInternalOAuthNotice(const PGOAuthNoticeHooks *hooks, const char *fmt,...)
+{
+	char		msgBuf[1024];
+	va_list		args;
+	PGresult   *res;
+
+	if (hooks->noticeRec == NULL)
+		return;					/* nobody home to receive notice? */
+
+	/* Format the message */
+	va_start(args, fmt);
+	vsnprintf(msgBuf, sizeof(msgBuf), libpq_gettext(fmt), args);
+	va_end(args);
+	msgBuf[sizeof(msgBuf) - 1] = '\0';	/* make real sure it's terminated */
+
+	/* Make a PGresult to pass to the notice receiver */
+	res = PQmakeEmptyPGresult(NULL, PGRES_NONFATAL_ERROR);
+	if (!res)
+		return;
+	res->oauthNoticeHooks = *hooks;
+	res->oauthNoticeHooks.noticeRecArg = hooks->noticeRecArg;
+
+	/*
+	 * Set up fields of notice.
+	 */
+	pqSaveMessageField(res, PG_DIAG_MESSAGE_PRIMARY, msgBuf);
+	pqSaveMessageField(res, PG_DIAG_SEVERITY, libpq_gettext("NOTICE"));
+	pqSaveMessageField(res, PG_DIAG_SEVERITY_NONLOCALIZED, "NOTICE");
+	/* XXX should provide a SQLSTATE too? */
+
+	/*
+	 * Result text is always just the primary message + newline.  If we can't
+	 * allocate it, substitute "out of memory", as in pqSetResultError.
+	 */
+	res->errMsg = (char *) pqResultAlloc(res, strlen(msgBuf) + 2, false);
+	if (res->errMsg)
+		sprintf(res->errMsg, "%s\n", msgBuf);
+	else
+		res->errMsg = libpq_gettext("out of memory\n");
+
+	/*
+	 * Pass to receiver, then free it.
+	 */
+	res->oauthNoticeHooks.noticeRec(res->oauthNoticeHooks.noticeRecArg, res);
+	PQclear(res);
+}
+
 /*
  * pqAddTuple
  *	  add a row pointer to the PGresult structure, growing it if necessary
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index b7df3224c0..ee5b2e2b59 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -197,6 +197,9 @@ typedef struct pgNotify
 typedef void (*PQnoticeReceiver) (void *arg, const PGresult *res);
 typedef void (*PQnoticeProcessor) (void *arg, const char *message);
 
+/* Function type for OAuth notice-handling callbacks */
+typedef void (*PQOAuthNoticeReceiver) (void *arg, const PGresult *res);
+
 /* Print options for PQprint() */
 typedef char pqbool;
 
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index ae76ae0e8f..3155d81e00 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -157,6 +157,24 @@ typedef struct
 	void	   *noticeProcArg;
 } PGNoticeHooks;
 
+typedef struct
+{
+	char	*verification_uri; /* URI the user should visit, entering user_code, to sign in */
+	char	*user_code; /* used to identify the session on a secondary device */
+	char	*refresh_token;
+	char	*message;	/* string with instructions for the user */
+	char	*response_error;	/* JSON error response (400 Bad Request) */
+	unsigned int	expires_in; /* number of seconds before the device_code expires */
+	unsigned int	interval; /* number of seconds the client should wait between polling requests */
+} PGOAuthMsgObj;
+
+/* Fields needed for oauth callback handling */
+typedef struct
+{
+	PQOAuthNoticeReceiver noticeRec; /* OAuth notice message receiver */
+	void	   *noticeRecArg;
+} PGOAuthNoticeHooks;
+
 typedef struct PGEvent
 {
 	PGEventProc proc;			/* the function to call on events */
@@ -186,6 +204,7 @@ struct pg_result
 	 * on the PGresult don't have to reference the PGconn.
 	 */
 	PGNoticeHooks noticeHooks;
+	PGOAuthNoticeHooks oauthNoticeHooks;
 	PGEvent    *events;
 	int			nEvents;
 	int			client_encoding;	/* encoding id */
@@ -343,6 +362,17 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef enum pg_oauth_flow_type
+{
+	OAUTH_DEVICE_CODE,
+	OAUTH_CLIENT_CREDENTIALS,
+	OAUTH_AUTH,
+	OAUTH_AUTH_PKCE,
+	OAUTH_REFRESH_TOKEN,
+	OAUTH_AUTH_CODE,
+	OAUTH_NONE
+} pg_oauth_flow_type;
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -403,6 +433,9 @@ struct pg_conn
 	char	   *oauth_client_id;		/* client identifier */
 	char	   *oauth_client_secret;	/* client secret */
 	char	   *oauth_scope;			/* access token scope */
+	char       *oauth_bearer_token;		/* oauth token */
+	char	   *oauth_flow_type;		/* oauth flow type */	
+	char	   *oauth_refresh_token;	/* refresh token */
 	bool		oauth_want_retry;		/* should we retry on failure? */
 
 	/* Optional file to write trace info to */
@@ -412,6 +445,9 @@ struct pg_conn
 	/* Callback procedures for notice message processing */
 	PGNoticeHooks noticeHooks;
 
+	/* Callback procedures for notice messages during OAuth flows */
+	PGOAuthNoticeHooks oauthNoticeHooks;
+
 	/* Event procs registered via PQregisterEventProc */
 	PGEvent    *events;			/* expandable array of event data */
 	int			nEvents;		/* number of active events */
@@ -677,6 +713,7 @@ extern void pqClearAsyncResult(PGconn *conn);
 extern void pqSaveErrorResult(PGconn *conn);
 extern PGresult *pqPrepareAsyncResult(PGconn *conn);
 extern void pqInternalNotice(const PGNoticeHooks *hooks, const char *fmt,...) pg_attribute_printf(2, 3);
+extern void pqInternalOAuthNotice(const PGOAuthNoticeHooks *hooks, const char *fmt,...);
 extern void pqSaveMessageField(PGresult *res, char code,
 							   const char *value);
 extern void pqSaveParameterStatus(PGconn *conn, const char *name,
#32Jacob Champion
jchampion@timescale.com
In reply to: mahendrakar s (#31)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 11/23/22 01:58, mahendrakar s wrote:

We validated on  libpq handling OAuth natively with different flows
with different OIDC certified providers.

Flows: Device Code, Client Credentials and Refresh Token.
Providers: Microsoft, Google and Okta.

Great, thank you!

Also validated with OAuth provider Github.

(How did you get discovery working? I tried this and had to give up
eventually.)

We propose using OpenID Connect (OIDC) as the protocol, instead of
OAuth, as it is:
- Discovery mechanism to bridge the differences and provide metadata.
- Stricter protocol and certification process to reliably identify
which providers can be supported.
- OIDC is designed for authentication, while the main purpose of OAUTH is to
authorize applications on behalf of the user.

How does this differ from the previous proposal? The OAUTHBEARER SASL
mechanism already relies on OIDC for discovery. (I think that decision
is confusing from an architectural and naming standpoint, but I don't
think they really had an alternative...)

Github is not OIDC certified, so won’t be supported with this proposal.
However, it may be supported in the future through the ability for the
extension to provide custom discovery document content.

Right.

OpenID configuration has a well-known discovery mechanism
for the provider configuration URI which is
defined in OpenID Connect. It allows libpq to fetch
metadata about provider (i.e endpoints, supported grants, response types, etc).

Sure, but this is already how the original PoC works. The test suite
implements an OIDC provider, for instance. Is there something different
to this that I'm missing?

In the attached patch (based on V2 patch in the thread and does not
contain Samay's changes):
- Provider can configure issuer URL and scope through the options hook.
- Server passes on an open discovery url and scope to libpq.
- Libpq handles OAuth flow based on the flow_type sent in the
connection string [1].
- Added callbacks to notify a structure to client tools if OAuth flow
requires user interaction.
- Pg backend uses hooks to validate bearer token.

Thank you for the sample!

Note that authentication code flow with PKCE for GUI clients is not
implemented yet.

Proposed next steps:
- Broaden discussion to reach agreement on the approach.

High-level thoughts on this particular patch (I assume you're not
looking for low-level implementation comments yet):

0) The original hook proposal upthread, I thought, was about allowing
libpq's flow implementation to be switched out by the application. I
don't see that approach taken here. It's fine if that turned out to be a
bad idea, of course, but this patch doesn't seem to match what we were
talking about.

1) I'm really concerned about the sudden explosion of flows. We went
from one flow (Device Authorization) to six. It's going to be hard
enough to validate that *one* flow is useful and can be securely
deployed by end users; I don't think we're going to be able to maintain
six, especially in combination with my statement that iddawc is not an
appropriate dependency for us.

I'd much rather give applications the ability to use their own OAuth
code, and then maintain within libpq only the flows that are broadly
useful. This ties back to (0) above.

2) Breaking the refresh token into its own pseudoflow is, I think,
passing the buck onto the user for something that's incredibly security
sensitive. The refresh token is powerful; I don't really want it to be
printed anywhere, let alone copy-pasted by the user. Imagine the
phishing opportunities.

If we want to support refresh tokens, I believe we should be developing
a plan to cache and secure them within the client. They should be used
as an accelerator for other flows, not as their own flow.

3) I don't like the departure from the OAUTHBEARER mechanism that's
presented here. For one, since I can't see a sample plugin that makes
use of the "flow type" magic numbers that have been added, I don't
really understand why the extension to the mechanism is necessary.

For two, if we think OAUTHBEARER is insufficient, the people who wrote
it would probably like to hear about it. Claiming support for a spec,
and then implementing an extension without review from the people who
wrote the spec, is not something I'm personally interested in doing.
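
For reference, the unextended client initial response that RFC 7628
prescribes is just a GS2 header plus kvsep-delimited key/value pairs; a
sketch (Python, illustrative token value):

```python
def oauthbearer_initial_response(bearer_token: str) -> bytes:
    """Build the OAUTHBEARER client initial response per RFC 7628:
    gs2-header, kvsep, "auth=Bearer <token>", then two kvseps."""
    kvsep = "\x01"
    gs2_header = "n,,"  # no channel binding, no authzid
    return (gs2_header + kvsep
            + "auth=Bearer " + bearer_token
            + kvsep + kvsep).encode("ascii")

resp = oauthbearer_initial_response("mF_9.B5f-4.1JqM")
# b'n,,\x01auth=Bearer mF_9.B5f-4.1JqM\x01\x01'
```

Anything beyond that (like a flow-type number in place of the token) is a
nonstandard extension of the mechanism.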

4) The test suite is still broken, so it's difficult to see these things
in practice for review purposes.

- Implement libpq changes without iddawc

This in particular will be much easier with a functioning test suite,
and with a smaller number of flows.

- Prototype GUI flow with pgAdmin

Cool!

Thanks,
--Jacob

#33Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#32)
Re: [PoC] Federated Authn/z with OAUTHBEARER

How does this differ from the previous proposal? The OAUTHBEARER SASL
mechanism already relies on OIDC for discovery. (I think that decision
is confusing from an architectural and naming standpoint, but I don't
think they really had an alternative...)

Mostly terminology questions here. OAUTHBEARER SASL appears to be the
spec about using OAuth2 tokens for authentication.
While any OAuth2 provider can generally work, we propose to specifically
highlight that only OIDC providers can be supported, as we need the
discovery document.
And we won't be able to support GitHub under that requirement.
Since the original patch used that too - no change on that, just
confirmation that we need OIDC compliance.

0) The original hook proposal upthread, I thought, was about allowing
libpq's flow implementation to be switched out by the application. I
don't see that approach taken here. It's fine if that turned out to be a
bad idea, of course, but this patch doesn't seem to match what we were
talking about.

We still plan to allow the client to pass the token, which is a
generic way for it to implement its own OAuth flows.

1) I'm really concerned about the sudden explosion of flows. We went
from one flow (Device Authorization) to six. It's going to be hard
enough to validate that *one* flow is useful and can be securely
deployed by end users; I don't think we're going to be able to maintain
six, especially in combination with my statement that iddawc is not an
appropriate dependency for us.

I'd much rather give applications the ability to use their own OAuth
code, and then maintain within libpq only the flows that are broadly
useful. This ties back to (0) above.

We consider the following set of flows to be the minimum required:
- Client Credentials - for service-to-service scenarios.
- Authorization Code with PKCE - for rich clients, including pgAdmin.
- Device Code - for psql (and possibly other non-GUI clients).
- Refresh Token (separate discussion)
Which is pretty much the list described here:
https://oauth.net/2/grant-types/ and in the OAuth2 specs.
Client Credentials is very simple, and so is Refresh Token.
If you prefer to pick one of the richer flows, Authorization Code for
GUI scenarios is probably much more widely used.
Plus it's easier to implement too, as interaction goes through a
series of callbacks. No polling required.
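
As a side note on Authorization Code with PKCE: the S256 transform itself
is tiny. A minimal sketch (Python, illustrative only; the verifier length
and alphabet rules come from RFC 7636):

```python
import base64
import hashlib
import secrets

def s256_challenge(verifier: str) -> str:
    """Compute the S256 code_challenge for a code_verifier (RFC 7636)."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def make_pkce_pair() -> tuple:
    """Generate a random code_verifier and its matching S256 challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    return verifier, s256_challenge(verifier)

# RFC 7636 Appendix B test vector:
# s256_challenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk")
#   == "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM"
```

The client sends the challenge on the authorization request and the
verifier on the token request, so an intercepted code is useless on its own.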

2) Breaking the refresh token into its own pseudoflow is, I think,
passing the buck onto the user for something that's incredibly security
sensitive. The refresh token is powerful; I don't really want it to be
printed anywhere, let alone copy-pasted by the user. Imagine the
phishing opportunities.

If we want to support refresh tokens, I believe we should be developing
a plan to cache and secure them within the client. They should be used
as an accelerator for other flows, not as their own flow.

It's considered a separate "grant_type" in the specs / APIs.
https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens

For the clients, it would be storing the token and using it to authenticate.
On the question of sensitivity, secure credentials stores are
different for each platform, with a lot of cloud offerings for this.
pgAdmin, for example, has its own way to secure credentials to avoid
asking users for passwords every time the app is opened.
I believe we should delegate the refresh token management to the clients.
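
For concreteness, the refresh grant itself is just one form-encoded POST to
the issuer's token_endpoint (RFC 6749, Sec. 6). A sketch, with placeholder
client and token values:

```python
from urllib.parse import urlencode, parse_qs

def refresh_grant_body(client_id: str, refresh_token: str, scope: str = "") -> str:
    """Build the x-www-form-urlencoded body for a refresh_token grant
    (RFC 6749, Sec. 6). Confidential clients also authenticate, e.g.
    via HTTP Basic with the client secret."""
    params = {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }
    if scope:
        params["scope"] = scope
    return urlencode(params)

# Placeholder values for illustration:
body = refresh_grant_body("f02c6361-0635", "tGzv3JOkF0XG5Qx2TlKWIA", scope="openid")
# POSTed to the token_endpoint with Content-Type: application/x-www-form-urlencoded
```

The question is where the refresh_token input comes from and where the
rotated one returned by the issuer gets stored, which is exactly the
client-side caching problem discussed above.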

3) I don't like the departure from the OAUTHBEARER mechanism that's
presented here. For one, since I can't see a sample plugin that makes
use of the "flow type" magic numbers that have been added, I don't
really understand why the extension to the mechanism is necessary.

I don't think it's much of a departure, but rather a separation of
responsibilities between libpq and upstream clients.
As libpq can be used in different apps, the client would need
different types of flows/grants.
I.e. those need to be provided to libpq at connection initialization
or some other point.
We will change to "grant_type" though and use a string to be closer to the spec.
What do you think is the best way for the client to signal which OAUTH
flow should be used?


#34mahendrakar s
mahendrakarforpg@gmail.com
In reply to: Andrey Chudnovsky (#33)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Jacob,

I validated GitHub by skipping the discovery mechanism and letting
the provider extension pass on the endpoints. This was just for
validation purposes.
If it needs to be supported, then we need a way to send the discovery
document from the extension.

Thanks,
Mahendrakar.

On Thu, 24 Nov 2022 at 09:16, Andrey Chudnovsky <achudnovskij@gmail.com> wrote:

How does this differ from the previous proposal? The OAUTHBEARER SASL
mechanism already relies on OIDC for discovery. (I think that decision
is confusing from an architectural and naming standpoint, but I don't
think they really had an alternative...)

Mostly terminology questions here. OAUTHBEARER SASL appears to be the
spec about using OAUTH2 tokens for Authentication.
While any OAUTH2 can generally work, we propose to specifically
highlight that only OIDC providers can be supported, as we need the
discovery document.
And we won't be able to support Github under that requirement.
Since the original patch used that too - no change on that, just
confirmation that we need OIDC compliance.

0) The original hook proposal upthread, I thought, was about allowing
libpq's flow implementation to be switched out by the application. I
don't see that approach taken here. It's fine if that turned out to be a
bad idea, of course, but this patch doesn't seem to match what we were
talking about.

We still plan to allow the client to pass the token, which is a
generic way for it to implement its own OAuth flows.

1) I'm really concerned about the sudden explosion of flows. We went
from one flow (Device Authorization) to six. It's going to be hard
enough to validate that *one* flow is useful and can be securely
deployed by end users; I don't think we're going to be able to maintain
six, especially in combination with my statement that iddawc is not an
appropriate dependency for us.

I'd much rather give applications the ability to use their own OAuth
code, and then maintain within libpq only the flows that are broadly
useful. This ties back to (0) above.

We consider the following set of flows to be minimum required:
- Client Credentials - For Service to Service scenarios.
- Authorization Code with PKCE - For rich clients, including pgAdmin.
- Device code - for psql (and possibly other non-GUI clients).
- Refresh code (separate discussion)
Which is pretty much the list described here:
https://oauth.net/2/grant-types/ and in OAUTH2 specs.
Client Credentials is very simple, and so is Refresh Token.
If you prefer to pick one of the richer flows, Authorization Code for
GUI scenarios is probably much more widely used.
Plus it's easier to implement, as the interaction goes through a
series of callbacks; no polling is required.
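For reference, the grant types listed above differ mainly in the parameters sent to the provider's token endpoint (RFC 6749 for client_credentials and refresh_token, RFC 8628 for the device grant). A minimal sketch of the request bodies; the client IDs, secrets, and codes are placeholders, not values from any real provider:

```python
# Sketch of token-endpoint request bodies for the grant types discussed above.
# All identifiers and secrets below are illustrative placeholders.

def token_request_body(grant_type, **params):
    """Build the application/x-www-form-urlencoded parameters for a token request."""
    body = {"grant_type": grant_type}
    body.update(params)
    return body

# Client Credentials: service-to-service, no user interaction (RFC 6749 section 4.4).
client_creds = token_request_body(
    "client_credentials",
    client_id="my-service", client_secret="placeholder-secret")

# Device Authorization: psql-style clients poll with the device code (RFC 8628 section 3.4).
device = token_request_body(
    "urn:ietf:params:oauth:grant-type:device_code",
    device_code="placeholder-device-code", client_id="my-cli")

# Refresh Token: exchange a previously issued refresh token (RFC 6749 section 6).
refresh = token_request_body(
    "refresh_token", refresh_token="placeholder-refresh-token", client_id="my-cli")
```

The point of the sketch is that client_credentials and refresh_token are single POSTs, while the device grant adds a polling loop on top of the same request shape.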

2) Breaking the refresh token into its own pseudoflow is, I think,
passing the buck onto the user for something that's incredibly security
sensitive. The refresh token is powerful; I don't really want it to be
printed anywhere, let alone copy-pasted by the user. Imagine the
phishing opportunities.

If we want to support refresh tokens, I believe we should be developing
a plan to cache and secure them within the client. They should be used
as an accelerator for other flows, not as their own flow.

It's considered a separate "grant_type" in the specs / APIs.
https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens

For the clients, it would be storing the token and using it to authenticate.
On the question of sensitivity, secure credentials stores are
different for each platform, with a lot of cloud offerings for this.
pgAdmin, for example, has its own way to secure credentials to avoid
asking users for passwords every time the app is opened.
I believe we should delegate the refresh token management to the clients.

3) I don't like the departure from the OAUTHBEARER mechanism that's
presented here. For one, since I can't see a sample plugin that makes
use of the "flow type" magic numbers that have been added, I don't
really understand why the extension to the mechanism is necessary.

I don't think it's much of a departure, but rather a separation of
responsibilities between libpq and upstream clients.
As libpq can be used in different apps, the client would need
different types of flows/grants.
I.e. those need to be provided to libpq at connection initialization
or some other point.
We will change to "grant_type" though and use string to be closer to the spec.
What do you think is the best way for the client to signal which OAUTH
flow should be used?

On Wed, Nov 23, 2022 at 12:05 PM Jacob Champion <jchampion@timescale.com> wrote:

On 11/23/22 01:58, mahendrakar s wrote:

We validated libpq handling OAuth natively, with different flows
and different OIDC-certified providers.

Flows: Device Code, Client Credentials and Refresh Token.
Providers: Microsoft, Google and Okta.

Great, thank you!

Also validated with OAuth provider Github.

(How did you get discovery working? I tried this and had to give up
eventually.)

We propose using OpenID Connect (OIDC) as the protocol, instead of
plain OAuth, as it offers:
- A discovery mechanism to bridge the differences and provide metadata.
- A stricter protocol and certification process, to reliably identify
which providers can be supported.
- A design for authentication, while the main purpose of OAuth is to
authorize applications on behalf of the user.

How does this differ from the previous proposal? The OAUTHBEARER SASL
mechanism already relies on OIDC for discovery. (I think that decision
is confusing from an architectural and naming standpoint, but I don't
think they really had an alternative...)

Github is not OIDC certified, so won’t be supported with this proposal.
However, it may be supported in the future through the ability for the
extension to provide custom discovery document content.

Right.

OpenID configuration has a well-known discovery mechanism
for the provider configuration URI which is
defined in OpenID Connect. It allows libpq to fetch
metadata about provider (i.e endpoints, supported grants, response types, etc).

Sure, but this is already how the original PoC works. The test suite
implements an OIDC provider, for instance. Is there something different
to this that I'm missing?

In the attached patch (based on V2 patch in the thread and does not
contain Samay's changes):
- Provider can configure the issuer URL and scope through the options hook.
- Server passes on an open discovery url and scope to libpq.
- Libpq handles OAuth flow based on the flow_type sent in the
connection string [1].
- Added callbacks to notify a structure to client tools if OAuth flow
requires user interaction.
- Pg backend uses hooks to validate bearer token.

Thank you for the sample!

Note that the authorization code flow with PKCE for GUI clients is not
implemented yet.

Proposed next steps:
- Broaden discussion to reach agreement on the approach.

High-level thoughts on this particular patch (I assume you're not
looking for low-level implementation comments yet):

0) The original hook proposal upthread, I thought, was about allowing
libpq's flow implementation to be switched out by the application. I
don't see that approach taken here. It's fine if that turned out to be a
bad idea, of course, but this patch doesn't seem to match what we were
talking about.

1) I'm really concerned about the sudden explosion of flows. We went
from one flow (Device Authorization) to six. It's going to be hard
enough to validate that *one* flow is useful and can be securely
deployed by end users; I don't think we're going to be able to maintain
six, especially in combination with my statement that iddawc is not an
appropriate dependency for us.

I'd much rather give applications the ability to use their own OAuth
code, and then maintain within libpq only the flows that are broadly
useful. This ties back to (0) above.

2) Breaking the refresh token into its own pseudoflow is, I think,
passing the buck onto the user for something that's incredibly security
sensitive. The refresh token is powerful; I don't really want it to be
printed anywhere, let alone copy-pasted by the user. Imagine the
phishing opportunities.

If we want to support refresh tokens, I believe we should be developing
a plan to cache and secure them within the client. They should be used
as an accelerator for other flows, not as their own flow.

3) I don't like the departure from the OAUTHBEARER mechanism that's
presented here. For one, since I can't see a sample plugin that makes
use of the "flow type" magic numbers that have been added, I don't
really understand why the extension to the mechanism is necessary.

For two, if we think OAUTHBEARER is insufficient, the people who wrote
it would probably like to hear about it. Claiming support for a spec,
and then implementing an extension without review from the people who
wrote the spec, is not something I'm personally interested in doing.

4) The test suite is still broken, so it's difficult to see these things
in practice for review purposes.

- Implement libpq changes without iddawc

This in particular will be much easier with a functioning test suite,
and with a smaller number of flows.

- Prototype GUI flow with pgAdmin

Cool!

Thanks,
--Jacob

#35Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#33)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 11/23/22 19:45, Andrey Chudnovsky wrote:

Mostly terminology questions here. OAUTHBEARER SASL appears to be the
spec about using OAUTH2 tokens for Authentication.
While any OAUTH2 can generally work, we propose to specifically
highlight that only OIDC providers can be supported, as we need the
discovery document.

*If* you're using in-band discovery, yes. But I thought your use case
was explicitly tailored to out-of-band token retrieval:

The client knows how to get a token for a particular principal
and doesn't need any additional information other than human readable
messages.

In that case, isn't OAuth sufficient? There's definitely a need to
document the distinction, but I don't think we have to require OIDC as
long as the client application makes up for the missing information.
(OAUTHBEARER makes the openid-configuration error member optional,
presumably for this reason.)
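For context, the error result in question is defined by RFC 7628 section 3.2.2: the server's failure response during OAUTHBEARER is a JSON object in which only "status" is required, and "openid-configuration" is optional. A sketch of what a client might receive and inspect; the URL and scope values are illustrative:

```python
import json

# RFC 7628 section 3.2.2: the server's OAUTHBEARER failure response is a JSON
# object. Only "status" is mandatory; "openid-configuration" is optional, which
# is what lets a client with out-of-band knowledge proceed without OIDC discovery.
error_result = json.loads("""
{
  "status": "invalid_token",
  "scope": "postgres",
  "openid-configuration":
    "https://issuer.example.org/.well-known/openid-configuration"
}
""")

discovery_url = error_result.get("openid-configuration")
if discovery_url is None:
    # No in-band discovery: the client must already know how to obtain a
    # suitable token for this server.
    pass
```

A client that already knows how to get a token for the right issuer and audience can ignore the optional members entirely, which is the case being discussed above.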

0) The original hook proposal upthread, I thought, was about allowing
libpq's flow implementation to be switched out by the application. I
don't see that approach taken here. It's fine if that turned out to be a
bad idea, of course, but this patch doesn't seem to match what we were
talking about.

We still plan to allow the client to pass the token, which is a
generic way for it to implement its own OAuth flows.

Okay. But why push down the implementation into the server?

To illustrate what I mean, here's the architecture of my proposed patchset:

  +-------+                                          +----------+
  |       | -------------- Empty Token ------------> |          |
  | libpq | <----- Error Result (w/ Discovery ) ---- |          |
  |       |                                          |          |
  | +--------+                     +--------------+  |          |
  | | iddawc | <--- [ Flow ] ----> | Issuer/      |  | Postgres |
  | |        | <-- Access Token -- | Authz Server |  |          |
  | +--------+                     +--------------+  |   +-----------+
  |       |                                          |   |           |
  |       | -------------- Access Token -----------> | > | Validator |
  |       | <---- Authorization Success/Failure ---- | < |           |
  |       |                                          |   +-----------+
  +-------+                                          +----------+

In this implementation, there's only one black box: the validator, which
is responsible for taking an access token from an untrusted client,
verifying that it was issued correctly for the Postgres service, and
either 1) determining whether the bearer is authorized to access the
database, or 2) determining the authenticated ID of the bearer so that
the HBA can decide whether they're authorized. (Or both.)
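To make the validator's job concrete, here is a stdlib-only Python sketch, assuming JWT access tokens. The real hook is a C callback in the backend, and all names here are illustrative. It decodes the token payload and checks issuer, audience, and expiry; a real validator must additionally verify the token's signature against the issuer's published keys, which is deliberately omitted here:

```python
import base64
import json
import time

def decode_jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def check_claims(claims, issuer, audience, now=None):
    """Claim checks a validator performs *after* signature verification.
    (Signature verification is omitted in this sketch; it is not optional
    in a real validator.)"""
    now = time.time() if now is None else now
    return (claims.get("iss") == issuer
            and audience in claims.get("aud", [])
            and claims.get("exp", 0) > now)

# A token fabricated purely for illustration (alg "none", empty signature).
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(json.dumps({
    "iss": "https://issuer.example.org",
    "aud": ["postgres"],
    "sub": "user@example.org",   # could become the authn_id for pg_ident mapping
    "exp": 9999999999,
}).encode()).rstrip(b"=")
token = (header + b"." + payload + b".").decode()

claims = decode_jwt_payload(token)
```

With `claims` in hand, the validator either maps `sub` to a role via the usual HBA machinery or makes the authorization decision itself, per the two modes described above.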

This approach is limited by the flows that we explicitly enable within
libpq and its OAuth implementation library. You mentioned that you
wanted to support other flows, including clients with out-of-band
knowledge, and I suggested:

If you wanted to override [iddawc's]
behavior as a client, you could replace the builtin flow with your
own, by registering a set of callbacks.

In other words, the hooks would replace iddawc in the above diagram.
In my mind, something like this:

     +-------+                                       +----------+
  +------+   | ----------- Empty Token ------------> | Postgres |
  |      | < | <---------- Error Result ------------ |          |
  | Hook |   |                                       |   +-----------+
  |      |   |                                       |   |           |
  +------+ > | ------------ Access Token ----------> | > | Validator |
     |       | <--- Authorization Success/Failure -- | < |           |
     | libpq |                                       |   +-----------+
     +-------+                                       +----------+

Now there's a second black box -- the client hook -- which takes an
OAUTHBEARER error result (which may or may not have OIDC discovery
information) and returns the access token. How it does this is
unspecified -- it'll probably use some OAuth 2.0 flow, but maybe not.
Maybe it sends the user to a web browser; maybe it uses some of the
magic provider-specific libraries you mentioned upthread. It might have
a refresh token cached so it doesn't have to involve the user at all.

Crucially, though, the two black boxes remain independent of each other.
They have well-defined inputs and outputs (the client hook could be
roughly described as "implement get_auth_token()"). Their correctness
can be independently verified against published OAuth specs and/or
provider documentation. And the client application still makes a single
call to PQconnect*().
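To make the second black box concrete, the hook reduces to roughly one function: it receives the (parsed) OAUTHBEARER error result and returns a bearer token. All names below are hypothetical; the real hook would be a C callback registered with libpq:

```python
# Illustrative shape of the proposed client-side hook: libpq hands it the
# OAUTHBEARER error result (which may or may not carry discovery info) and
# gets back an access token. How the token is obtained is the hook's business.
# Every name here is hypothetical; the stubs stand in for real flows.

def get_auth_token(error_result):
    """Return a bearer token for the issuer/scope described in error_result."""
    discovery = error_result.get("openid-configuration")
    scope = error_result.get("scope", "")
    if discovery is not None:
        return run_standard_flow(discovery, scope)   # e.g. device authorization
    return fetch_token_out_of_band(scope)            # provider-specific logic

def run_standard_flow(discovery_url, scope):
    return "token-from-standard-flow"     # stub for the sketch

def fetch_token_out_of_band(scope):
    return "token-from-provider-library"  # stub for the sketch
```

Whether the hook launches a browser, polls a device endpoint, or consults a cached refresh token is invisible to both libpq and the server, which is the decoupling being argued for.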

Compare this to the architecture proposed by your patch:

  Client App
  +----------------------+
  |             +-------+                                +----------+
  |             | libpq |                                | Postgres |
  | PQconnect > |       |                                |   +-------+
  |          +------+   | ------- Flow Type (!) -------> | > |       |
  |     +- < | Hook | < | <------- Error Result -------- | < |       |
  | [ get    +------+   |                                |   |       |
  |   token ]   |       |                                |   |       |
  |     |       |       |                                |   | Hooks |
  |     v       |       |                                |   |       |
  | PQconnect > | ----> | ------ Access Token ---------> | > |       |
  |             |       | <--- Authz Success/Failure --- | < |       |
  |             +-------+                                |   +-------+
  +----------------------+                               +----------+

Rather than decouple things, I think this proposal drives a spike
through the client app, libpq, and the server. Please correct me if I've
misunderstood pieces of the patch, but the following is my view of it:

What used to be a validator hook on the server side now actively
participates in the client-side flow for some reason. (I still don't
understand what the server is supposed to do with that knowledge.
Changing your authz requirements based on the flow the client wants to
use seems like a good way to introduce bugs.)

The client-side hook is now coupled to the application logic: you have
to know to expect an error from the first PQconnect*() call, then check
whatever magic your hook has done for you to be able to set up the
second call to PQconnect*() with the correctly scoped bearer token. So
if you want to switch between the internal libpq OAuth implementation
and your own hook, you have to rewrite your app logic.

On top of all that, the "flow type code" being sent is a custom
extension to OAUTHBEARER that appears to be incompatible with the RFC's
discovery exchange (which is done by sending an empty auth token during
the first round trip).

We consider the following set of flows to be minimum required:
- Client Credentials - For Service to Service scenarios.

Okay, that's simple enough that I think it could probably be maintained
inside libpq with minimal cost. At the same time, is it complicated
enough that you need libpq to do it for you?

Maybe once we get the hooks ironed out, it'll be more obvious what the
tradeoff is...

If you prefer to pick one of the richer flows, Authorization code for
GUI scenarios is probably much more widely used.
Plus it's easier to implement too, as interaction goes through a
series of callbacks. No polling required.

I don't think flows requiring the invocation of web browsers and custom
URL handlers are a clear fit for libpq. For a first draft, at least, I
think that use case should be pushed upward into the client application
via a custom hook.

If we want to support refresh tokens, I believe we should be developing
a plan to cache and secure them within the client. They should be used
as an accelerator for other flows, not as their own flow.

It's considered a separate "grant_type" in the specs / APIs.
https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens

Yes, but that doesn't mean we have to expose it to users via a
connection option. You don't get a refresh token out of the blue; you
get it by going through some other flow, and then you use it in
preference to going through that flow again later.

For the clients, it would be storing the token and using it to authenticate.
On the question of sensitivity, secure credentials stores are
different for each platform, with a lot of cloud offerings for this.
pgAdmin, for example, has its own way to secure credentials to avoid
asking users for passwords every time the app is opened.
I believe we should delegate the refresh token management to the clients.

Delegating to client apps would be fine (and implicitly handled by a
token hook, because the client app would receive the refresh token
directly rather than going through libpq). Delegating to end users, not
so much. Printing a refresh token to stderr as proposed here is, I
think, making things unnecessarily difficult (and/or dangerous) for users.

3) I don't like the departure from the OAUTHBEARER mechanism that's
presented here. For one, since I can't see a sample plugin that makes
use of the "flow type" magic numbers that have been added, I don't
really understand why the extension to the mechanism is necessary.

I don't think it's much of a departure, but rather a separation of
responsibilities between libpq and upstream clients.

Given the proposed architectures above, 1) I think this is further
coupling the components, not separating them; and 2) I can't agree that
an incompatible discovery mechanism is "not much of a departure". If
OAUTHBEARER's functionality isn't good enough for some reason, let's
talk about why.

As libpq can be used in different apps, the client would need
different types of flows/grants.
I.e. those need to be provided to libpq at connection initialization
or some other point.

Why do libpq (or the server!) need to know those things at all, if
they're not going to implement the flow?

We will change to "grant_type" though and use string to be closer to the spec.
What do you think is the best way for the client to signal which OAUTH
flow should be used?

libpq should not need to know the grant type in use if the client is
bypassing its internal implementation entirely.

Thanks,
--Jacob

#36Jacob Champion
jchampion@timescale.com
In reply to: mahendrakar s (#34)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 11/24/22 00:20, mahendrakar s wrote:

I validated GitHub by skipping the discovery mechanism and letting
the provider extension pass on the endpoints. This was just for
validation purposes.
If it needs to be supported, then we need a way to send the discovery
document from the extension.

Yeah. I had originally bounced around the idea that we could send a
data:// URL, but I think that opens up problems.

You're supposed to be able to link the issuer URI with the URI you got
the configuration from, and if they're different, you bail out. If a
server makes up its own OpenID configuration, we'd have to bypass that
safety check, and decide what the risks and mitigations are... Not sure
it's worth it.

Especially if you could just lobby GitHub to, say, provide an OpenID
config. (Maybe there's a security-related reason they don't.)

--Jacob

#37Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#36)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob,
Thanks for your feedback.
I think we can focus on the roles and responsibilities of the components first.
Details of the patch can be elaborated later. For example, "flow type
code" is a mistake on our side; we will use the term "grant_type",
which is defined by the OIDC spec. The same goes for the details of
refresh_token usage.

Rather than decouple things, I think this proposal drives a spike
through the client app, libpq, and the server. Please correct me if I've
misunderstood pieces of the patch, but the following is my view of it:

What used to be a validator hook on the server side now actively
participates in the client-side flow for some reason. (I still don't
understand what the server is supposed to do with that knowledge.
Changing your authz requirements based on the flow the client wants to
use seems like a good way to introduce bugs.)

The client-side hook is now coupled to the application logic: you have
to know to expect an error from the first PQconnect*() call, then check
whatever magic your hook has done for you to be able to set up the
second call to PQconnect*() with the correctly scoped bearer token. So
if you want to switch between the internal libpq OAuth implementation
and your own hook, you have to rewrite your app logic.

Basically, yes. We propose increasing the server-side hook's
responsibility: from just validating the token, to also returning the
provider root URL and required audience, and possibly providing more
metadata in the future.
Which is in our opinion aligned with SASL protocol, where the server
side is responsible for telling the client auth requirements based on
the requested role in the startup packet.

Our understanding is that in the original patch that information came
purely from hba, and we propose extension being able to control that
metadata.
As we see extension as being owned by the identity provider, compared
to HBA which is owned by the server administrator or cloud provider.

This change of the roles is based on the vision of 4 independent actor
types in the ecosystem:
1. Identity Providers (Okta, Google, Microsoft, other OIDC providers).
- Publish open source extensions for PostgreSQL.
- Don't have to own the server deployments, and must ensure their
extensions can work in any environment. This is where we think
additional hook responsibility helps.
2. Server Owners / PaaS providers (on-premises admins, cloud providers,
multi-cloud PaaS providers).
- Install extensions and configure HBA to allow clients to
authenticate with the identity providers of their choice.
3. Client Application Developers (Data Wis, integration tools,
PgAdmin, monitoring tools, etc.)
- Independent from specific identity providers or server providers.
Write one codebase for all identity providers.
- Rely on application deployment owners to configure which OIDC
provider to use across client and server setups.
4. Application Deployment Owners (End customers setting up applications)
- The only actor actually aware of which identity provider to use.
Configures the stack based on the Identity and PostgreSQL deployments
they have.

The critical piece of the vision is that the applications in (3.)
above are agnostic of the identity providers. Those applications rely
on properly configured servers and rich driver logic (libpq,
com.postgresql, npgsql) to let them pop up auth windows or do
service-to-service authentication with any provider. In our view that
would significantly democratize the deployment of OAuth
authentication in the community.

In order to allow this separation, we propose:
1. HBA + Extension is the single source of truth of Provider root URL
+ Required Audience for each role. If some backfill for missing OIDC
discovery is needed, the provider-specific extension would be
providing it.
2. Client Application knows which grant_type to use in which scenario.
But can be coded without knowledge of a specific provider. So can't
provide discovery details.
3. Driver (libpq, others) - coordinate the authentication flow based
on client grant_type and identity provider metadata to allow client
applications to use any flow with any provider in a unified way.

Yes, this would require a somewhat more complicated flow between
components than in your original patch. And yes, more complexity comes
with more opportunity for bugs.
However, I see the PG server and libpq as the places which can bear
more complexity, for the purpose of making work easier for community
participants and simplifying adoption.

Does this make sense to you?

#38Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#37)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Dec 5, 2022 at 4:15 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:

I think we can focus on the roles and responsibilities of the components first.
Details of the patch can be elaborated later. For example, "flow type
code" is a mistake on our side; we will use the term "grant_type",
which is defined by the OIDC spec. The same goes for the details of
refresh_token usage.

(For the record, whether we call it "flow type" or "grant type"
doesn't address my concern.)

Basically, yes. We propose increasing the server-side hook's
responsibility: from just validating the token, to also returning the
provider root URL and required audience, and possibly providing more
metadata in the future.

I think it's okay to have the extension and HBA collaborate to provide
discovery information. Your proposal goes further than that, though,
and makes the server aware of the chosen client flow. That appears to
be an architectural violation: why does an OAuth resource server need
to know the client flow at all?

Which is in our opinion aligned with SASL protocol, where the server
side is responsible for telling the client auth requirements based on
the requested role in the startup packet.

You've proposed an alternative SASL mechanism. There's nothing wrong
with that, per se, but I think it should be clear why we've chosen
something nonstandard.

Our understanding is that in the original patch that information came
purely from hba, and we propose extension being able to control that
metadata.
As we see extension as being owned by the identity provider, compared
to HBA which is owned by the server administrator or cloud provider.

That seems reasonable, considering how tightly coupled the Issuer and
the token validation process are.

2. Server Owners / PaaS providers (on-premises admins, cloud providers,
multi-cloud PaaS providers).
- Install extensions and configure HBA to allow clients to
authenticate with the identity providers of their choice.

(For a future conversation: they need to set up authorization, too,
with custom scopes or some other magic. It's not enough to check who
the token belongs to; even if Postgres is just using the verified
email from OpenID as an authenticator, you have to also know that the
user authorized the token -- and therefore the client -- to access
Postgres on their behalf.)

3. Client Application Developers (Data Wis, integration tools,
PgAdmin, monitoring tools, etc.)
- Independent from specific identity providers or server providers.
Write one codebase for all identity providers.

Ideally, yes, but that only works if all identity providers implement
the same flows in compatible ways. We're already seeing instances
where that's not the case and we'll necessarily have to deal with that
up front.

- Rely on application deployment owners to configure which OIDC
provider to use across client and server setups.
4. Application Deployment Owners (End customers setting up applications)
- The only actor actually aware of which identity provider to use.
Configures the stack based on the Identity and PostgreSQL deployments
they have.

(I have doubts that the roles will be as decoupled in practice as you
have described them, but I'd rather defer that for now.)

The critical piece of the vision is that the applications in (3.)
above are agnostic of the identity providers. Those applications rely
on properly configured servers and rich driver logic (libpq,
com.postgresql, npgsql) to let them pop up auth windows or do
service-to-service authentication with any provider. In our view that
would significantly democratize the deployment of OAuth
authentication in the community.

That seems to be restating the goal of OAuth and OIDC. Can you explain
how the incompatible change allows you to accomplish this better than
standard implementations?

In order to allow this separation, we propose:
1. HBA + Extension is the single source of truth of Provider root URL
+ Required Audience for each role. If some backfill for missing OIDC
discovery is needed, the provider-specific extension would be
providing it.
2. Client Application knows which grant_type to use in which scenario.
But can be coded without knowledge of a specific provider. So can't
provide discovery details.
3. Driver (libpq, others) - coordinate the authentication flow based
on client grant_type and identity provider metadata to allow client
applications to use any flow with any provider in a unified way.

Yes, this would require a somewhat more complicated flow between
components than in your original patch.

Why? I claim that standard OAUTHBEARER can handle all of that. What
does your proposed architecture (the third diagram) enable that my
proposed hook (the second diagram) doesn't?

And yes, more complexity comes with more opportunity for bugs.
However, I see the PG server and libpq as the places which can bear
more complexity, for the purpose of making work easier for community
participants and simplifying adoption.

Does this make sense to you?

Some of it, but it hasn't really addressed the questions from my last mail.

Thanks,
--Jacob

#39Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#38)
Re: [PoC] Federated Authn/z with OAUTHBEARER

I think it's okay to have the extension and HBA collaborate to provide
discovery information. Your proposal goes further than that, though,
and makes the server aware of the chosen client flow. That appears to
be an architectural violation: why does an OAuth resource server need
to know the client flow at all?

Ok. It may have been left over from intermediate iterations. We did
consider making the extension drive the flow for a specific
grant_type, but decided against that idea, for the same reason you
point to.
Is it correct that your main concern about the use of grant_type was
that it's propagated to the server? Then yes, we will remove sending
it to the server.

Ideally, yes, but that only works if all identity providers implement
the same flows in compatible ways. We're already seeing instances
where that's not the case and we'll necessarily have to deal with that
up front.

Yes, based on our analysis the OIDC spec is detailed enough that
providers implementing it can be supported with generic code in
libpq / the client.
GitHub specifically won't fit there, though. Microsoft Azure AD,
Google, and Okta (including Auth0) will.
Theoretically, provider-specific discovery documents could be returned
from the (server-side) extension, though we didn't plan to prioritize
that.

That seems to be restating the goal of OAuth and OIDC. Can you explain
how the incompatible change allows you to accomplish this better than
standard implementations?

Do you refer to passing grant_type to the server, which we will get
rid of in the next iteration? Or to other incompatible changes as well?

Why? I claim that standard OAUTHBEARER can handle all of that. What
does your proposed architecture (the third diagram) enable that my
proposed hook (the second diagram) doesn't?

The hook proposed in the 2nd diagram effectively delegates all OAuth
flow implementations to the client.
We propose that libpq take care of pulling OpenID discovery and
coordinating the flow, which is effectively Diagram 1 + more flows +
a server hook providing the root URL/audience.

We've created diagrams with all components for 3 flows:
1. Authorization code grant (Clients with Browser access):
  +----------------------+                                         +----------+
  |             +-------+                                          |          |
  | PQconnect   |       |                                          |          |
  | [auth_code] |       |                                          |   +-----------+
  |          -> |       | -------------- Empty Token ------------> | > |           |
  |             | libpq | <----- Error(w\ Root URL + Audience ) -- | < | Pre-Auth  |
  |             |       |                                          |   |  Hook     |
  |             |       |                                          |   +-----------+
  |             |       |                        +--------------+  |          |
  |             |       | -------[GET]---------> | OIDC         |  | Postgres |
  |          +------+   | <--Provider Metadata-- | Discovery    |  |          |
  |     +- < | Hook | < |                        +--------------+  |          |
  |     |    +------+   |                                          |          |
  |     v       |       |                                          |          |
  |  [get auth  |       |                                          |          |
  |    code]    |       |                                          |          |
  |<user action>|       |                                          |          |
  |     |       |       |                                          |          |
  |     +       |       |                                          |          |
  | PQconnect > | +--------+                     +--------------+  |          |
  |             | | iddawc | <-- [ Auth code ]-> | Issuer/      |  |          |
  |             | |        | <-- Access Token -- | Authz Server |  |          |
  |             | +--------+                     +--------------+  |          |
  |             |       |                                          |   +-----------+
  |             |       | -------------- Access Token -----------> | > | Validator |
  |             |       | <---- Authorization Success/Failure ---- | < |   Hook    |
  |          +------+   |                                          |   +-----------+
  |      +-< | Hook |   |                                          |          |
  |      v   +------+   |                                          |          |
  |[store       +-------+                                          |          |
  |  refresh_token]                                                +----------+
  +----------------------+
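
The "Error(w\ Root URL + Audience)" leg above corresponds to
OAUTHBEARER's failure path, where the server returns a small JSON status
document (RFC 7628, section 3.2.2). A sketch of a client-side parse —
the returned dict keys are illustrative, not from the patch:

```python
import json

# Parse the OAUTHBEARER failure document (RFC 7628, section 3.2.2); the
# server may include "status", "scope", and an "openid-configuration"
# URL pointing at the provider's discovery document.
def parse_sasl_failure(body):
    doc = json.loads(body)
    return {
        "status": doc.get("status"),
        "scope": doc.get("scope"),
        "discovery": doc.get("openid-configuration"),
    }
```
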
2. Device code grant
  +----------------------+                                         +----------+
  |             +-------+                                          |          |
  | PQconnect   |       |                                          |          |
  | [auth_code] |       |                                          |   +-----------+
  |          -> |       | -------------- Empty Token ------------> | > |           |
  |             | libpq | <----- Error(w\ Root URL + Audience ) -- | < | Pre-Auth  |
  |             |       |                                          |   |  Hook     |
  |             |       |                                          |   +-----------+
  |             |       |                        +--------------+  |          |
  |             |       | -------[GET]---------> | OIDC         |  | Postgres |
  |          +------+   | <--Provider Metadata-- | Discovery    |  |          |
  |     +- < | Hook | < |                        +--------------+  |          |
  |     |    +------+   |                                          |          |
  |     v       |       |                                          |          |
  |  [device    | +---------+                     +--------------+ |          |
  |    code]    | | iddawc  |                     | Issuer/      | |          |
  |<user action>| |         | --[ Device code ]-> | Authz Server | |          |
  |             | |<polling>| --[ Device code ]-> |              | |          |
  |             | |         | --[ Device code ]-> |              | |          |
  |             | |         |                     |              | |          |
  |             | |         | <-- Access Token -- |              | |          |
  |             | +---------+                     +--------------+ |          |
  |             |       |                                          |   +-----------+
  |             |       | -------------- Access Token -----------> | > | Validator |
  |             |       | <---- Authorization Success/Failure ---- | < |   Hook    |
  |          +------+   |                                          |   +-----------+
  |      +-< | Hook |   |                                          |          |
  |      v   +------+   |                                          |          |
  |[store       +-------+                                          |          |
  |  refresh_token]                                                +----------+
  +----------------------+
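
The <polling> loop in the middle of this diagram is specified by RFC
8628: the client retries the token endpoint until the user finishes
logging in, honoring authorization_pending and slow_down. A rough
sketch, with the network call stubbed out as an injected poll()
callable (names are ours):

```python
import time

# Device-grant polling loop per RFC 8628. A real client would POST the
# device_code to the provider's token endpoint on each attempt; here
# poll() is injected so the network is stubbed out.
def poll_for_token(poll, interval=5, max_attempts=60):
    for _ in range(max_attempts):
        resp = poll()
        if "access_token" in resp:
            return resp["access_token"]
        err = resp.get("error")
        if err == "authorization_pending":
            time.sleep(interval)        # user hasn't finished logging in yet
        elif err == "slow_down":
            interval += 5               # RFC 8628: increase interval by 5s
            time.sleep(interval)
        else:
            raise RuntimeError(err or "unknown token endpoint error")
    raise TimeoutError("user never completed the device login")
```
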
3. Non-interactive flows (Client Secret / Refresh_Token)
  +----------------------+                                         +----------+
  |             +-------+                                          |          |
  | PQconnect   |       |                                          |          |
  | [grant_type]|       |                                          |          |
  |          -> |       |                                          |   +-----------+
  |             |       | -------------- Empty Token ------------> | > |           |
  |             | libpq | <----- Error(w\ Root URL + Audience ) -- | < | Pre-Auth  |
  |             |       |                                          |   |  Hook     |
  |             |       |                                          |   +-----------+
  |             |       |                        +--------------+  |          |
  |             |       | -------[GET]---------> | OIDC         |  | Postgres |
  |             |       | <--Provider Metadata-- | Discovery    |  |          |
  |             |       |                        +--------------+  |          |
  |             |       |                                          |          |
  |             | +--------+                     +--------------+  |          |
  |             | | iddawc | <-- [ Secret ]----> | Issuer/      |  |          |
  |             | |        | <-- Access Token -- | Authz Server |  |          |
  |             | +--------+                     +--------------+  |          |
  |             |       |                                          |   +-----------+
  |             |       | -------------- Access Token -----------> | > | Validator |
  |             |       | <---- Authorization Success/Failure ---- | < |   Hook    |
  |             |       |                                          |   +-----------+
  |             +-------+                                          +----------+
  +----------------------+
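
The [Secret] exchange in this diagram is a plain client_credentials
token request (RFC 6749, section 4.4.2); the form body a driver would
POST to the provider's token endpoint looks roughly like this sketch:

```python
from urllib.parse import urlencode

# Form body for an RFC 6749, section 4.4.2 client_credentials token
# request, as POSTed to the provider's token endpoint.
def client_credentials_body(client_id, client_secret, scope):
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
```
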

I think the most confusing part of our latest patch was that flow_type
was passed to the server.
We are not proposing that going forward.

(For a future conversation: they need to set up authorization, too,
with custom scopes or some other magic. It's not enough to check who
the token belongs to; even if Postgres is just using the verified
email from OpenID as an authenticator, you have to also know that the
user authorized the token -- and therefore the client -- to access
Postgres on their behalf.)

My understanding is that the metadata in the tokens is
provider-specific, so the server-side hook would be the right place to
handle it. Plus, I can envision that for some providers it makes sense
to make a remote call to pull additional information.
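
As an illustration of what such a server-side hook might check once it
has decoded the (provider-specific) token, a simplified sketch — the
function is hypothetical, and the claim names follow common JWT
conventions (RFC 7519 "aud", RFC 8693-style "scope"):

```python
# Hypothetical validator-hook check: verify the audience and a required
# scope against the claims decoded from a bearer token.
def validate_token_claims(claims, required_aud, required_scope):
    aud = claims.get("aud", [])
    if isinstance(aud, str):            # "aud" may be a string or a list
        aud = [aud]
    if required_aud not in aud:
        return False
    return required_scope in claims.get("scope", "").split()
```
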

The way we implement Azure AD auth today in our PaaS PostgreSQL offering:
- The server administrator uses special extension functions to create
Azure AD-enabled PostgreSQL roles.
- The PostgreSQL extension maps roles to unique identity IDs (UIDs) in
the directory.
- Connection flow: if the token is valid and the role => UID mapping
matches, we authenticate as the role.
- From there, native PostgreSQL role-based access control takes care of
privileges.
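
The connection-flow check in the third bullet reduces to a simple
lookup; a schematic version (the mapping source and names are
illustrative — in the real extension the mapping lives in a catalog):

```python
# Schematic Role => UID check: allow connecting as requested_role only
# if the UID from the validated token matches the stored mapping.
def authorize_role(role_to_uid, requested_role, token_uid):
    return role_to_uid.get(requested_role) == token_uid
```
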

This is the same for both user and system-to-system authorization,
though I assume different providers may treat user and system
identities differently; their extensions would handle that.

Thanks!
Andrey.


#40Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Andrey Chudnovsky (#39)
Re: [PoC] Federated Authn/z with OAUTHBEARER
That being said, Diagram 2 would look like this with our proposal:
  +----------------------+                                         +----------+
  |             +-------+                                          | Postgres |
  | PQconnect ->|       |                                          |          |
  |             |       |                                          |   +-----------+
  |             |       | -------------- Empty Token ------------> | > |           |
  |             | libpq | <----- Error(w\ Root URL + Audience ) -- | < | Pre-Auth  |
  |          +------+   |                                          |   |  Hook     |
  |     +- < | Hook |   |                                          |   +-----------+
  |     |    +------+   |                                          |          |
  |     v       |       |                                          |          |
  |  [get token]|       |                                          |          |
  |     |       |       |                                          |          |
  |     +       |       |                                          |   +-----------+
  | PQconnect > |       | -------------- Access Token -----------> | > | Validator |
  |             |       | <---- Authorization Success/Failure ---- | < |   Hook    |
  |             |       |                                          |   +-----------+
  |             +-------+                                          |          |
  +----------------------+                                         +----------+

With the application taking care of all token acquisition logic, while
the server-side hook participates in the pre-authentication reply.
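
The flow in this diagram boils down to a two-step connect; a schematic
sketch, with hypothetical `connect` and `acquire_token` callables
standing in for libpq and the application's token logic:

```python
# Schematic two-step connect: the first attempt (no token) surfaces the
# provider root URL and audience from the server's pre-auth hook; the
# application acquires a token out of band and reconnects with it.
def connect_with_oauth(connect, acquire_token):
    hint = connect(token=None)                  # fails with discovery hints
    token = acquire_token(hint["root_url"], hint["audience"])
    return connect(token=token)                 # presents the bearer token
```
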

That is definitely a required scenario for the long term, and the
easiest to implement in the client core.
And if we can do at least that flow in PG16, it will be a strong
foundation for providing more support for specific grants in libpq
going forward.

Does the diagram above look good to you? We can then start cleaning up
the patch to get that in first.

Thanks!
Andrey.

Show quoted text

On Wed, Dec 7, 2022 at 3:22 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:

I think it's okay to have the extension and HBA collaborate to provide
discovery information. Your proposal goes further than that, though,
and makes the server aware of the chosen client flow. That appears to
be an architectural violation: why does an OAuth resource server need
to know the client flow at all?

Ok. It may have left there from intermediate iterations. We did
consider making extension drive the flow for specific grant_type, but
decided against that idea. For the same reason you point to.
Is it correct that your main concern about use of grant_type was that
it's propagated to the server? Then yes, we will remove sending it to
the server.

Ideally, yes, but that only works if all identity providers implement
the same flows in compatible ways. We're already seeing instances
where that's not the case and we'll necessarily have to deal with that
up front.

Yes, based on our analysis OIDC spec is detailed enough, that
providers implementing that one, can be supported with generic code in
libpq / client.
Github specifically won't fit there though. Microsoft Azure AD,
Google, Okta (including Auth0) will.
Theoretically discovery documents can be returned from the extension
(server-side) which is provider specific. Though we didn't plan to
prioritize that.

That seems to be restating the goal of OAuth and OIDC. Can you explain
how the incompatible change allows you to accomplish this better than
standard implementations?

Do you refer to passing grant_type to the server? Which we will get
rid of in the next iteration. Or other incompatible changes as well?

Why? I claim that standard OAUTHBEARER can handle all of that. What
does your proposed architecture (the third diagram) enable that my
proposed hook (the second diagram) doesn't?

The hook proposed on the 2nd diagram effectively delegates all Oauth
flows implementations to the client.
We propose libpq takes care of pulling OpenId discovery and coordination.
Which is effectively Diagram 1 + more flows + server hook providing
root url/audience.

Created the diagrams with all components for 3 flows:
1. Authorization code grant (Clients with Browser access):
+----------------------+                                         +----------+
|             +-------+                                          |
|
| PQconnect   |       |                                          |
|
| [auth_code] |       |                                          |
+-----------+
|          -> |       | -------------- Empty Token ------------> | >
|           |
|             | libpq | <----- Error(w\ Root URL + Audience ) -- | <
| Pre-Auth  |
|             |       |                                          |
|  Hook     |
|             |       |                                          |
+-----------+
|             |       |                        +--------------+  |          |
|             |       | -------[GET]---------> | OIDC         |  | Postgres |
|          +------+   | <--Provider Metadata-- | Discovery    |  |          |
|     +- < | Hook | < |                        +--------------+  |
|
|     |    +------+   |                                          |
|
|     v       |       |                                          |
|
|  [get auth  |       |                                          |
|
|    code]    |       |                                          |
|
|<user action>|       |                                          |
|
|     |       |       |                                          |
|
|     +       |       |                                          |
|
| PQconnect > | +--------+                     +--------------+  |
|
|             | | iddawc | <-- [ Auth code ]-> | Issuer/      |  |          |
|             | |        | <-- Access Token -- | Authz Server |  |          |
|             | +--------+                     +--------------+  |          |
|             |       |                                          |
+-----------+
|             |       | -------------- Access Token -----------> | >
| Validator |
|             |       | <---- Authorization Success/Failure ---- | <
|   Hook    |
|          +------+   |                                          |
+-----------+
|      +-< | Hook |   |                                          |
|
|      v   +------+   |                                          |
|
|[store       +-------+                                          |
|
|  refresh_token]                                                +----------+
+----------------------+
2. Device code grant
+----------------------+                                         +----------+
|             +-------+                                          |
|
| PQconnect   |       |                                          |
|
| [auth_code] |       |                                          |
+-----------+
|          -> |       | -------------- Empty Token ------------> | >
|           |
|             | libpq | <----- Error(w\ Root URL + Audience ) -- | <
| Pre-Auth  |
|             |       |                                          |
|  Hook     |
|             |       |                                          |
+-----------+
|             |       |                        +--------------+  |          |
|             |       | -------[GET]---------> | OIDC         |  | Postgres |
|          +------+   | <--Provider Metadata-- | Discovery    |  |          |
|     +- < | Hook | < |                        +--------------+  |
|
|     |    +------+   |                                          |
|
|     v       |       |                                          |
|
|  [device    | +---------+                     +--------------+ |
|
|    code]    | | iddawc  |                     | Issuer/      | |
|
|<user action>| |         | --[ Device code ]-> | Authz Server | |
|
|             | |<polling>| --[ Device code ]-> |              | |
|
|             | |         | --[ Device code ]-> |              | |
|
|             | |         |                     |              | |          |
|             | |         | <-- Access Token -- |              | |          |
|             | +---------+                     +--------------+ |          |
|             |       |                                          |
+-----------+
|             |       | -------------- Access Token -----------> | >
| Validator |
|             |       | <---- Authorization Success/Failure ---- | <
|   Hook    |
|          +------+   |                                          |
+-----------+
|      +-< | Hook |   |                                          |
|
|      v   +------+   |                                          |
|
|[store       +-------+                                          |
|
|  refresh_token]                                                +----------+
+----------------------+
3. Non-interactive flows (Client Secret / Refresh_Token)
+----------------------+                                         +----------+
|             +-------+                                          |
|
| PQconnect   |       |                                          |
|
| [grant_type]|       |                                          |          |
|          -> |       |                                          |
+-----------+
|             |       | -------------- Empty Token ------------> | >
|           |
|             | libpq | <----- Error(w\ Root URL + Audience ) -- | <
| Pre-Auth  |
|             |       |                                          |
|  Hook     |
|             |       |                                          |
+-----------+
|             |       |                        +--------------+  |          |
|             |       | -------[GET]---------> | OIDC         |  | Postgres |
|             |       | <--Provider Metadata-- | Discovery    |  |          |
|             |       |                        +--------------+  |
|
|             |       |                                          |
|
|             | +--------+                     +--------------+  |
|
|             | | iddawc | <-- [ Secret ]----> | Issuer/      |  |          |
|             | |        | <-- Access Token -- | Authz Server |  |          |
|             | +--------+                     +--------------+  |          |
|             |       |                                          |
+-----------+
|             |       | -------------- Access Token -----------> | >
| Validator |
|             |       | <---- Authorization Success/Failure ---- | <
|   Hook    |
|             |       |                                          |
+-----------+
|             +-------+                                          +----------+
+----------------------+

I think what was the most confusing in our latest patch is that
flow_type was passed to the server.
We are not proposing this going forward.

(For a future conversation: they need to set up authorization, too,
with custom scopes or some other magic. It's not enough to check who
the token belongs to; even if Postgres is just using the verified
email from OpenID as an authenticator, you have to also know that the
user authorized the token -- and therefore the client -- to access
Postgres on their behalf.)

My understanding is that metadata in the tokens is provider specific,
so server side hook would be the right place to handle that.
Plus I can envision for some providers it can make sense to make a
remote call to pull some information.

The way we implement Azure AD auth today in PAAS PostgreSQL offering:
- Server administrator uses special extension functions to create
Azure AD enabled PostgreSQL roles.
- PostgreSQL extension maps Roles to unique identity Ids (UID) in the Directory.
- Connection flow: If the token is valid and Role => UID mapping
matches, we authenticate as the Role.
- Then its native PostgreSQL role based access control takes care of privileges.

This is the same for both User- and System-to-system authorization.
Though I assume different providers may treat user- and system-
identities differently. So their extension would handle that.

Thanks!
Andrey.

On Wed, Dec 7, 2022 at 11:06 AM Jacob Champion <jchampion@timescale.com> wrote:

On Mon, Dec 5, 2022 at 4:15 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:

I think we can focus on the roles and responsibilities of the components first.
Details of the patch can be elaborated. Like "flow type code" is a
mistake on our side, and we will use the term "grant_type" which is
defined by OIDC spec. As well as details of usage of refresh_token.

(For the record, whether we call it "flow type" or "grant type"
doesn't address my concern.)

Basically Yes. We propose an increase of the server side hook responsibility.
From just validating the token, to also return the provider root URL
and required audience. And possibly provide more metadata in the
future.

I think it's okay to have the extension and HBA collaborate to provide
discovery information. Your proposal goes further than that, though,
and makes the server aware of the chosen client flow. That appears to
be an architectural violation: why does an OAuth resource server need
to know the client flow at all?

Which is in our opinion aligned with SASL protocol, where the server
side is responsible for telling the client auth requirements based on
the requested role in the startup packet.

You've proposed an alternative SASL mechanism. There's nothing wrong
with that, per se, but I think it should be clear why we've chosen
something nonstandard.

Our understanding is that in the original patch that information came
purely from hba, and we propose extension being able to control that
metadata.
As we see extension as being owned by the identity provider, compared
to HBA which is owned by the server administrator or cloud provider.

That seems reasonable, considering how tightly coupled the Issuer and
the token validation process are.

2. Server Owners / PAAS providers (On premise admins, Cloud providers,
multi-cloud PAAS providers).
- Install extensions and configure HBA to allow clients to
authenticate with the identity providers of their choice.

(For a future conversation: they need to set up authorization, too,
with custom scopes or some other magic. It's not enough to check who
the token belongs to; even if Postgres is just using the verified
email from OpenID as an authenticator, you have to also know that the
user authorized the token -- and therefore the client -- to access
Postgres on their behalf.)

3. Client Application Developers (Data Wis, integration tools,
PgAdmin, monitoring tools, e.t.c.)
- Independent from specific Identity providers or server providers.
Write one code for all identity providers.

Ideally, yes, but that only works if all identity providers implement
the same flows in compatible ways. We're already seeing instances
where that's not the case and we'll necessarily have to deal with that
up front.

- Rely on application deployment owners to configure which OIDC
provider to use across client and server setups.
4. Application Deployment Owners (End customers setting up applications)
- The only actor actually aware of which identity provider to use.
Configures the stack based on the Identity and PostgreSQL deployments
they have.

(I have doubts that the roles will be as decoupled in practice as you
have described them, but I'd rather defer that for now.)

The critical piece of the vision is (3.) above is applications
agnostic of the identity providers. Those applications rely on
properly configured servers and rich driver logic (libpq,
com.postgresql, npgsql) to allow their application to popup auth
windows or do service-to-service authentication with any provider. In
our view that would significantly democratize the deployment of OAUTH
authentication in the community.

That seems to be restating the goal of OAuth and OIDC. Can you explain
how the incompatible change allows you to accomplish this better than
standard implementations?

In order to allow this separation, we propose:
1. HBA + Extension is the single source of truth of Provider root URL
+ Required Audience for each role. If some backfill for missing OIDC
discovery is needed, the provider-specific extension would be
providing it.
2. Client Application knows which grant_type to use in which scenario.
But can be coded without knowledge of a specific provider. So can't
provide discovery details.
3. Driver (libpq, others) - coordinate the authentication flow based
on client grant_type and identity provider metadata to allow client
applications to use any flow with any provider in a unified way.

Yes, this would require a little more complicated flow between
components than in your original patch.

Why? I claim that standard OAUTHBEARER can handle all of that. What
does your proposed architecture (the third diagram) enable that my
proposed hook (the second diagram) doesn't?

And yes, more complexity comes with more opportunity for bugs.
However, I see the PG server and libpq as the places that can absorb
more complexity, for the purpose of making community participants'
work easier and simplifying adoption.

Does this make sense to you?

Some of it, but it hasn't really addressed the questions from my last mail.

Thanks,
--Jacob

#41Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#40)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Dec 7, 2022 at 3:22 PM Andrey Chudnovsky
<achudnovskij@gmail.com> wrote:

I think it's okay to have the extension and HBA collaborate to
provide discovery information. Your proposal goes further than
that, though, and makes the server aware of the chosen client flow.
That appears to be an architectural violation: why does an OAuth
resource server need to know the client flow at all?

Ok. It may have been left over from intermediate iterations. We did
consider making the extension drive the flow for a specific
grant_type, but decided against that idea, for the same reason you
point to. Is it correct that your main concern about the use of
grant_type was that it's propagated to the server? If so, yes, we
will remove sending it to the server.

Okay. Yes, that was my primary concern.

Ideally, yes, but that only works if all identity providers
implement the same flows in compatible ways. We're already seeing
instances where that's not the case and we'll necessarily have to
deal with that up front.

Yes, based on our analysis the OIDC spec is detailed enough that
providers implementing it can be supported with generic code in
libpq / the client. GitHub specifically won't fit there, though;
Microsoft Azure AD, Google, and Okta (including Auth0) will.
Theoretically, provider-specific discovery documents could be
returned from the server-side extension, though we didn't plan to
prioritize that.

As another example, Google's device authorization grant is incompatible
with the spec (which they co-authored). I want to say I had problems
with Azure AD not following that spec either, but I don't remember
exactly what they were. I wouldn't be surprised to find more tiny
departures once we get deeper into implementation.
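
As an illustration of the kind of quirk generic client code ends up
papering over (the Google specifics here are my recollection and
should be treated as an assumption to verify): RFC 8628 names the
field in the device authorization response `verification_uri`, while
Google's endpoint has returned `verification_url` instead, so a
strictly conforming client fails to find it:

```python
# RFC 8628 section 3.2 names the field "verification_uri"; some
# providers (reportedly Google's device flow) return "verification_url"
# instead. A "generic" client needs fallbacks like this for each quirk.

def verification_uri(device_authz_response: dict) -> str:
    try:
        return device_authz_response["verification_uri"]  # per RFC 8628
    except KeyError:
        # Provider-quirk fallback
        return device_authz_response["verification_url"]

rfc_response = {"device_code": "d", "user_code": "FPQ2-M4BG",
                "verification_uri": "https://example.com/device"}
quirky_response = {"device_code": "d", "user_code": "FPQ2-M4BG",
                   "verification_url": "https://example.com/device"}
```

Each such fallback is small on its own, but they accumulate, which is
why "just implement the spec generically" keeps running into trouble.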

That seems to be restating the goal of OAuth and OIDC. Can you
explain how the incompatible change allows you to accomplish this
better than standard implementations?

Do you refer to passing grant_type to the server, which we will get
rid of in the next iteration? Or to other incompatible changes as well?

Just the grant type, yeah.

Why? I claim that standard OAUTHBEARER can handle all of that.
What does your proposed architecture (the third diagram) enable
that my proposed hook (the second diagram) doesn't?

The hook proposed in the second diagram effectively delegates all
OAuth flow implementations to the client. We propose that libpq take
care of pulling OpenID discovery and coordination, which is
effectively Diagram 1 + more flows + a server hook providing the root
URL/audience.

Created the diagrams with all components for 3 flows: [snip]

(I'll skip ahead to your later mail on this.)

(For a future conversation: they need to set up authorization,
too, with custom scopes or some other magic. It's not enough to
check who the token belongs to; even if Postgres is just using the
verified email from OpenID as an authenticator, you have to also
know that the user authorized the token -- and therefore the client
-- to access Postgres on their behalf.)

My understanding is that metadata in the tokens is provider-specific,
so a server-side hook would be the right place to handle that. Plus,
I can envision that for some providers it can make sense to make a
remote call to pull some information.

The server hook is the right place to check the scopes, yes, but I think
the DBA should be able to specify what those scopes are to begin with.
The provider of the extension shouldn't be expected by the architecture
to hardcode those decisions, even if Azure AD chooses to short-circuit
that choice and provide magic instead.
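
A minimal sketch of what I mean, with the DBA-configured scope list
treated as the source of truth (the function and claim names are
hypothetical, not the patch's actual API):

```python
# Hypothetical validator-side check: the DBA configures the required
# scopes (e.g. in pg_hba.conf), and the validator hook only verifies
# that the token grants them -- no provider-specific decisions are
# hardcoded into the extension.

def token_authorizes_connection(token_claims: dict,
                                required_scopes: set) -> bool:
    # OAuth "scope" claims are conventionally a space-separated string.
    granted = set(token_claims.get("scope", "").split())
    return required_scopes.issubset(granted)

claims = {"sub": "user@example.com",
          "scope": "openid email postgres.connect"}
```

The extension stays free to do more than this, but the baseline
authorization decision remains in the DBA's hands.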

On 12/7/22 20:25, Andrey Chudnovsky wrote:

That being said, the Diagram 2 would look like this with our proposal:
[snip]

With the application taking care of all token acquisition logic,
while the server-side hook participates in the pre-authentication reply.

That is definitely a required scenario for the long term and the
easiest to implement in the client core. And if we can do at least
that flow in PG16 it will be a strong foundation to provide more
support for specific grants in libpq going forward.

Agreed.

Does the diagram above look good to you? We can then start cleaning up
the patch to get that in first.

I maintain that the hook doesn't need to hand back artifacts to the
client for a second PQconnect call. It can just use those artifacts to
obtain the access token and hand that right back to libpq. (I think any
requirement that clients be rewritten to call PQconnect twice will
probably be a sticking point for adoption of an OAuth patch.)
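
To illustrate the shape I have in mind (pure Python pseudocode, not
the real libpq API; every name here is made up): the hook receives
the server's discovery artifacts mid-handshake, runs whatever flow it
likes, and hands the bearer token straight back, so the driver can
retry the SASL exchange on the same connection:

```python
# Sketch of a single-connection flow: the driver calls the application
# hook with the discovery info from the server's OAUTHBEARER error
# response, and the hook returns a token -- no second connect call.

def connect(server, token_hook):
    error = server.sasl_exchange(token=None)       # first attempt fails
    token = token_hook(error["openid-configuration"], error["scope"])
    return server.sasl_exchange(token=token)       # retry with the token

class FakeServer:
    """Stand-in for the backend's OAUTHBEARER exchange."""
    def sasl_exchange(self, token):
        if token is None:
            return {"status": "invalid_token",
                    "openid-configuration":
                        "https://issuer.example.com/.well-known/openid-configuration",
                    "scope": "openid"}
        return {"status": "ok", "token_seen": token}

result = connect(FakeServer(), lambda cfg, scope: "my-access-token")
```

The application still controls token acquisition; it just never has
to tear down the connection and call connect a second time.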

That said, now that your proposal is also compatible with OAUTHBEARER, I
can pony up some code to hopefully prove my point. (I don't know if I'll
be able to do that by the holidays though.)

Thanks!
--Jacob

#42Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#41)
Re: [PoC] Federated Authn/z with OAUTHBEARER

The server hook is the right place to check the scopes, yes, but I think
the DBA should be able to specify what those scopes are to begin with.
The provider of the extension shouldn't be expected by the architecture
to hardcode those decisions, even if Azure AD chooses to short-circuit
that choice and provide magic instead.

Hardcoding is definitely not expected, but identity-provider-specific
customization, I think, should be allowed.
I can provide a couple of advanced use cases which happen in the
cloud deployment world and require per-role management:
- Multi-tenant deployments, where the root provider URL would be
different for different roles, based on which tenant they come from.
- Federation to multiple providers. Solutions like Amazon Cognito
offer a layer of abstraction with several providers transparently
supported.

If your concern is the extension not honoring the DBA-configured
values: would server-side logic that prefers the HBA value over the
extension-provided one resolve this concern?
We are definitely biased towards the cloud deployment scenarios, where
direct access to .hba files is usually not offered at all.
Let's find the middle ground here.

A separate reason for creating this pre-authentication hook is
further extensibility to support more metadata.
Specifically, when we add support for OAuth flows to libpq,
server-side extensions can help bridge the gap between the identity
provider implementation and the OAuth/OIDC specs.
For example, that could allow the GitHub extension to provide an OIDC
discovery document.

I definitely see identity providers as institutional actors here which
can be given some power through the extension hooks to customize the
behavior within the framework.

I maintain that the hook doesn't need to hand back artifacts to the
client for a second PQconnect call. It can just use those artifacts to
obtain the access token and hand that right back to libpq. (I think any
requirement that clients be rewritten to call PQconnect twice will
probably be a sticking point for adoption of an OAuth patch.)

Obtaining a token is an asynchronous process with a human in the
loop, so I'm not sure that expecting a hook function to return a
token synchronously is the best option here.
Could that be an optional return value of the hook for cases where a
token can be obtained synchronously?

Show quoted text


#43Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#42)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Dec 12, 2022 at 9:06 PM Andrey Chudnovsky
<achudnovskij@gmail.com> wrote:

If your concern is extension not honoring the DBA configured values:
Would a server-side logic to prefer HBA value over extension-provided
resolve this concern?

Yeah. It also seals the role of the extension here as "optional".

We are definitely biased towards the cloud deployment scenarios, where
direct access to .hba files is usually not offered at all.
Let's find the middle ground here.

Sure. I don't want to make this difficult in cloud scenarios --
obviously I'd like for Timescale Cloud to be able to make use of this
too. But if we make this easy for a lone DBA (who doesn't have any
institutional power with the providers) to use correctly and securely,
then it should follow that the providers who _do_ have power and
resources will have an easy time of it as well. The reverse isn't
necessarily true. So I'm definitely planning to focus on the DBA case
first.

A separate reason for creating this pre-authentication hook is further
extensibility to support more metadata.
Specifically when we add support for OAUTH flows to libpq, server-side
extensions can help bridge the gap between the identity provider
implementation and OAUTH/OIDC specs.
For example, that could allow the Github extension to provide an OIDC
discovery document.

I definitely see identity providers as institutional actors here which
can be given some power through the extension hooks to customize the
behavior within the framework.

We'll probably have to make some compromises in this area, but I think
they should be carefully considered exceptions and not a core feature
of the mechanism. The gaps you point out are just fragmentation, and
adding custom extensions to deal with it leads to further
fragmentation instead of providing pressure on providers to just
implement the specs. Worst case, we open up new exciting security
flaws, and then no one can analyze them independently because no one
other than the provider knows how the two sides work together anymore.

Don't get me wrong; it would be naive to proceed as if the OAUTHBEARER
spec were perfect, because it's clearly not. But if we need to make
extensions to it, we can participate in IETF discussions and make our
case publicly for review, rather than enshrining MS/GitHub/Google/etc.
versions of the RFC and enabling that proliferation as a Postgres core
feature.

Obtaining a token is an asynchronous process with a human in the loop.
Not sure if expecting a hook function to return a token synchronously
is the best option here.
Can that be an optional return value of the hook in cases when a token
can be obtained synchronously?

I don't think the hook is generally going to be able to return a token
synchronously, and I expect the final design to be async-first. As far
as I know, this will need to be solved for the builtin flows as well
(you don't want a synchronous HTTP call to block your PQconnectPoll
architecture), so the hook should be able to make use of whatever
solution we land on for that.
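
Hand-waving in code form, the nonblocking shape would mirror
PQconnectPoll: the hook reports "still working" until the human
finishes, and the application keeps pumping it. A sketch under that
assumption (all names invented, nothing here is a proposed API):

```python
# Sketch of an async-first token hook: instead of blocking on the
# human, the hook returns a "polling" status and the caller re-invokes
# it, the same way applications drive PQconnectPoll.

POLLING, DONE = "polling", "done"

class DeviceFlowHook:
    def __init__(self, attempts_until_user_approves):
        self.remaining = attempts_until_user_approves
        self.token = None

    def poll(self):
        if self.remaining > 0:
            self.remaining -= 1
            return POLLING          # caller should wait and call again
        self.token = "access-token"
        return DONE

hook = DeviceFlowHook(attempts_until_user_approves=3)
states = []
while (state := hook.poll()) == POLLING:
    states.append(state)            # in real code: select()/sleep here
```

A hook that happens to have a token on hand can simply return DONE on
the first poll, which covers the synchronous case for free.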

This is hand-wavy, and I don't expect it to be easy to solve. I just
don't think we have to solve it twice.

Have a good end to the year!
--Jacob

#44mahendrakar s
mahendrakarforpg@gmail.com
In reply to: Jacob Champion (#43)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi All,

Changes added to Jacob's patch (v2) as per the discussion in the thread.

The changes allow the customer to send the OAuth bearer token through
the psql connection string.

Example:
psql -U user@example.com -d 'dbname=postgres oauth_bearer_token=abc'

To configure OAuth, the pg_hba.conf line looks like:
local all all oauth
provider=oauth_provider issuer="https://example.com" scope="openid email"

We also added a hook to libpq to pass on the metadata about the issuer.

Thanks,
Mahendrakar.

On Sat, 17 Dec 2022 at 04:48, Jacob Champion <jchampion@timescale.com>
wrote:

Show quoted text


Attachments:

v3-0001-libpq-add-OAUTHBEARER-SASL-mechanism-and-call-back-hooks.patch.gz (application/gzip)
v3-0002-backend-add-OAUTHBEARER-SASL-mechanishm.patch.gz (application/gzip)
v3-0004-common-jsonapi-support-FRONTEND-clients.patch.gz (application/gzip)
v3-0003-simple-oauth_provider-extension.patch.gz (application/gzip)
#45Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: mahendrakar s (#44)
Re: [PoC] Federated Authn/z with OAUTHBEARER

More information on the latest patch.

1. We aligned the implementation with the bare-bones SASL mechanism
for OAuth described in RFC 7628: https://www.rfc-editor.org/rfc/rfc7628
The flow can be explained in the diagram below:

  +----------------------+                                 +----------+
  |             +-------+                                  | Postgres |
  | PQconnect ->|       |                                  |          |
  |             |       |                                  |   +-----------+
  |             |       | ---------- Empty Token---------> | > |           |
  |             | libpq | <-- Error(Discovery + Scope ) -- | < | Pre-Auth  |
  |          +------+   |                                  |   |  Hook     |
  |     +- < | Hook |   |                                  |   +-----------+
  |     |    +------+   |                                  |          |
  |     v       |       |                                  |          |
  |  [get token]|       |                                  |          |
  |     |       |       |                                  |          |
  |     +       |       |                                  |   +-----------+
  | PQconnect > |       | --------- Access Token --------> | > | Validator |
  |             |       | <---------- Auth Result -------- | < |   Hook    |
  |             |       |                                  |   +-----------+
  |             +-------+                                  |          |
  +----------------------+                                 +----------+
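
For reference, the messages in the diagram follow RFC 7628's
key/value encoding: the client's initial response is a GS2 header
(`n,,`) followed by `auth=Bearer <token>` with 0x01 delimiters, and
the server's pre-auth reply carries the discovery/scope information.
A sketch of the client-side encoding (only the happy-path message;
the error handling is omitted):

```python
# OAUTHBEARER initial client response per RFC 7628: GS2 header, then
# key/value pairs ("auth=Bearer <token>") delimited by 0x01 bytes,
# terminated by a final 0x01.

KVSEP = b"\x01"

def oauthbearer_client_initial_response(token: str) -> bytes:
    gs2_header = b"n,,"  # no channel binding, no authzid
    return (gs2_header + KVSEP
            + b"auth=Bearer " + token.encode("ascii")
            + KVSEP + KVSEP)

msg = oauthbearer_client_initial_response("abc")
```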

2. Removed the Device Code implementation from libpq, for several reasons:
- Reduce scope and focus on the protocol first.
- The device code implementation uses the iddawc dependency. Taking
this dependency is a controversial step which requires broader discussion.
- A device code implementation without iddawc would significantly
increase the scope of the patch, as libpq would need to poll the token
endpoint, set up different API calls, etc.
- That flow should canonically only be used by clients which can't
invoke browsers. If it were the only flow implemented, it could end up
used in contexts where the OAuth protocol doesn't expect it.

3. Temporarily removed test suite. We are actively working on aligning
the tests with the latest changes. Will add a patch with tests soon.

We will change the "v3" prefix so that it follows on from the
previous iterations.

Thanks!
Andrey.

On Thu, Jan 12, 2023 at 11:08 AM mahendrakar s
<mahendrakarforpg@gmail.com> wrote:

Show quoted text

is the best option here.
Can that be an optional return value of the hook in cases when a token
can be obtained synchronously?

I don't think the hook is generally going to be able to return a token
synchronously, and I expect the final design to be async-first. As far
as I know, this will need to be solved for the builtin flows as well
(you don't want a synchronous HTTP call to block your PQconnectPoll
architecture), so the hook should be able to make use of whatever
solution we land on for that.

This is hand-wavy, and I don't expect it to be easy to solve. I just
don't think we have to solve it twice.

Have a good end to the year!
--Jacob

#46Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#45)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, Jan 15, 2023 at 12:03 PM Andrey Chudnovsky
<achudnovskij@gmail.com> wrote:

2. Removed Device Code implementation in libpq. Several reasons:
- Reduce scope and focus on the protocol first.
- Device code implementation uses the iddawc dependency. Taking this
dependency is a controversial step which requires broader discussion.
- Device code implementation without iddawc would significantly
increase the scope of the patch, as libpq needs to poll the token
endpoint, set up different API calls, etc.
- That flow should canonically only be used for clients which can't
invoke browsers. If it is the only flow to be implemented, it can be
used in contexts where the OAuth protocol doesn't expect it.

I'm not understanding the concern in the final point -- providers
generally require you to opt into device authorization, at least as far
as I can tell. So if you decide that it's not appropriate for your use
case... don't enable it. (And I haven't seen any claims that opting into
device authorization weakens the other flows in any way. So if we're
going to implement a flow in libpq, I still think device authorization
is the best choice, since it works on headless machines as well as those
with browsers.)

All of this points at a bigger question to the community: if we choose
not to provide a flow implementation in libpq, is adding OAUTHBEARER
worth the additional maintenance cost?

My personal vote would be "no". I think the hook-only approach proposed
here would ensure that only larger providers would implement it in
practice, and in that case I'd rather spend cycles on generic SASL.

3. Temporarily removed test suite. We are actively working on aligning
the tests with the latest changes. Will add a patch with tests soon.

Okay. Case in point, the following change to the patch appears to be
invalid JSON:

+   appendStringInfo(&buf,
+       "{ "
+           "\"status\": \"invalid_token\", "
+           "\"openid-configuration\": \"%s\","
+           "\"scope\": \"%s\" ",
+           "\"issuer\": \"%s\" ",
+       "}",

Additionally, the "issuer" field added here is not part of the RFC. I've
written my thoughts about unofficial extensions upthread but haven't
received a response, so I'm going to start being more strident: Please,
for the sake of reviewers, call out changes you've made to the spec, and
why they're justified.

The patches seem to be out of order now (and the documentation in the
commit messages has been removed).

Thanks,
--Jacob

#47Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#46)
Re: [PoC] Federated Authn/z with OAUTHBEARER

All of this points at a bigger question to the community: if we choose
not to provide a flow implementation in libpq, is adding OAUTHBEARER
worth the additional maintenance cost?

My personal vote would be "no". I think the hook-only approach proposed
here would ensure that only larger providers would implement it in
practice

Flow implementations in libpq are definitely a long term plan, and I
agree that it would democratise the adoption.
In the previous posts in this conversation I outlined the ones I think
we should support.

However, I don't see why it's strictly necessary to couple those.
As long as the SASL exchange for the OAUTHBEARER mechanism is supported
by the protocol, the client side can evolve at its own pace.

At the same time, the current implementation allows clients to start
building provider-agnostic OAuth support, using iddawc or the OAuth
client implementations on the respective platforms.
So I wouldn't refer to "larger providers", but rather "more motivated
clients" here. Which definitely overlaps, but keeps the system open.

I'm not understanding the concern in the final point -- providers
generally require you to opt into device authorization, at least as far
as I can tell. So if you decide that it's not appropriate for your use
case... don't enable it. (And I haven't seen any claims that opting into
device authorization weakens the other flows in any way. So if we're
going to implement a flow in libpq, I still think device authorization
is the best choice, since it works on headless machines as well as those
with browsers.)

I agree that device code is the best first choice if we absolutely
have to pick one, though I don't think we have to.

While the device flow can be used for all kinds of user-facing
applications, it's specifically designed for input-constrained
scenarios, as clearly stated in the abstract of RFC 8628 -
https://www.rfc-editor.org/rfc/rfc8628
The authorization code with PKCE flow is recommended by the RFCs and
major providers for cases where it's feasible.
The long-term goal is to provide both, though I don't see why
implementing the backbone protocol first wouldn't add value.

Another point: user authentication is one side of the whole story,
and the other critical one is system-to-system authentication, where
we have client credentials and certificates.
The latter is much harder to implement generically, as
provider-specific tokens need to be signed.

Adding the other reasoning, I think libpq support for specific flows
can come in further iterations, after the protocol support.

in that case I'd rather spend cycles on generic SASL.

I see 2 approaches to generic SASL:
(a). Generic SASL is a framework used in the protocol, with the
mechanisms implemented on top and exposed to the DBAs as auth types to
configure in hba.
This is the direction we're going here, which is well aligned with the
existing hba-based auth configuration.
(b). Generic SASL exposed to developers on the server and client side
to extend on. It seems to be a much longer shot.
The specific points of large ambiguity are the libpq distribution model
(which you pointed to) and the potential pluggability of insecure
mechanisms.

I do see (a) as a sweet spot with a lot of value for various
participants with much less ambiguity.

Additionally, the "issuer" field added here is not part of the RFC. I've
written my thoughts about unofficial extensions upthread but haven't
received a response, so I'm going to start being more strident: Please,
for the sake of reviewers, call out changes you've made to the spec, and
why they're justified.

Thanks for your feedback on this. We had this discussion as well, and
added that as a convenience for the client to identify the provider.
I don't see a reason why an issuer would be absolutely necessary, and
we take your point that sticking to the RFCs is the safer choice.

The patches seem to be out of order now (and the documentation in the
commit messages has been removed).

Feedback taken. Work in progress.


#48mahendrakar s
mahendrakarforpg@gmail.com
In reply to: Andrey Chudnovsky (#47)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi All,

The "issuer" field has been removed to align with the RFC -
https://www.rfc-editor.org/rfc/rfc7628.
This patch "v6" is a single patch to support the OAuth bearer token
through the psql connection string.
The flow below is supported. Documentation has been added in the commit messages.

  +----------------------+                                 +----------+
  |             +-------+                                  | Postgres |
  | PQconnect ->|       |                                  |          |
  |             |       |                                  |   +-----------+
  |             |       | ---------- Empty Token---------> | > |           |
  |             | libpq | <-- Error(Discovery + Scope ) -- | < | Pre-Auth  |
  |          +------+   |                                  |   |  Hook     |
  |     +- < | Hook |   |                                  |   +-----------+
  |     |    +------+   |                                  |          |
  |     v       |       |                                  |          |
  |  [get token]|       |                                  |          |
  |     |       |       |                                  |          |
  |     +       |       |                                  |   +-----------+
  | PQconnect > |       | --------- Access Token --------> | > | Validator |
  |             |       | <---------- Auth Result -------- | < |   Hook    |
  |             |       |                                  |   +-----------+
  |             +-------+                                  |          |
  +----------------------+                                 +----------+

Please note that we are working on modifying/adding new tests (from
Jacob's Patch) with the latest changes. Will add a patch with tests
soon.

Thanks,
Mahendrakar.


Attachments:

v6-0004-common-jsonapi-support-FRONTEND-clients.patch.gz
v6-0002-backend-add-OAUTHBEARER-SASL-mechanishm.patch.gz
v6-0003-Add-a-very-simple-oauth_provider-extension.patch.gz
v6-0001-libpq-add-OAUTHBEARER-SASL-mechanism-and-call-back-hooks.patch.gz
#49Stephen Frost
sfrost@snowman.net
In reply to: mahendrakar s (#48)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Greetings,

* mahendrakar s (mahendrakarforpg@gmail.com) wrote:

The "issuer" field has been removed to align with the RFC -
https://www.rfc-editor.org/rfc/rfc7628.
This patch "v6" is a single patch to support the OAuth bearer token
through the psql connection string.
The flow below is supported. Documentation has been added in the commit messages.

+----------------------+                                 +----------+
|             +-------+                                  | Postgres |
| PQconnect ->|       |                                  |          |
|             |       |                                  |   +-----------+
|             |       | ---------- Empty Token---------> | > |           |
|             | libpq | <-- Error(Discovery + Scope ) -- | < | Pre-Auth  |
|          +------+   |                                  |   |  Hook     |
|     +- < | Hook |   |                                  |   +-----------+
|     |    +------+   |                                  |          |
|     v       |       |                                  |          |
|  [get token]|       |                                  |          |
|     |       |       |                                  |          |
|     +       |       |                                  |   +-----------+
| PQconnect > |       | --------- Access Token --------> | > | Validator |
|             |       | <---------- Auth Result -------- | < |   Hook    |
|             |       |                                  |   +-----------+
|             +-------+                                  |          |
+----------------------+                                 +----------+

Please note that we are working on modifying/adding new tests (from
Jacob's Patch) with the latest changes. Will add a patch with tests
soon.

Having skimmed back through this thread again, I still feel that the
direction that was originally being taken (actually support something in
libpq and the backend, be it with libiddawc or something else or even
our own code, and not just throw hooks in various places) makes a lot
more sense and is a lot closer to how Kerberos and client-side certs and
even LDAP auth work today. That also seems like a much better answer
for our users when it comes to new authentication methods than having
extensions and making libpq developers have to write their own custom
code, not to mention that we'd still need to implement something in psql
to provide such a hook if we are to have psql actually usefully exercise
this, no?

In the Kerberos test suite we have today, we actually bring up a proper
Kerberos server, set things up, and then test end-to-end installing a
keytab for the server, getting a TGT, getting a service ticket, testing
authentication and encryption, etc. Looking around, it seems like the
equivalent would perhaps be to use Glewlwyd and libiddawc or libcurl and
our own code to really be able to test this and show that it works and
that we're doing it correctly, and to let us know if we break something.

Thanks,

Stephen

#50Jacob Champion
jchampion@timescale.com
In reply to: Stephen Frost (#49)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Feb 20, 2023 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:

Having skimmed back through this thread again, I still feel that the
direction that was originally being taken (actually support something in
libpq and the backend, be it with libiddawc or something else or even
our own code, and not just throw hooks in various places) makes a lot
more sense and is a lot closer to how Kerberos and client-side certs and
even LDAP auth work today.

Cool, that helps focus the effort. Thanks!

That also seems like a much better answer
for our users when it comes to new authentication methods than having
extensions and making libpq developers have to write their own custom
code, not to mention that we'd still need to implement something in psql
to provide such a hook if we are to have psql actually usefully exercise
this, no?

I don't mind letting clients implement their own flows... as long as
it's optional. So even if we did use a hook in the end, I agree that
we've got to exercise it ourselves.

In the Kerberos test suite we have today, we actually bring up a proper
Kerberos server, set things up, and then test end-to-end installing a
keytab for the server, getting a TGT, getting a service ticket, testing
authentication and encryption, etc. Looking around, it seems like the
equivalent would perhaps be to use Glewlwyd and libiddawc or libcurl and
our own code to really be able to test this and show that it works and
that we're doing it correctly, and to let us know if we break something.

The original patchset includes a test server in Python -- a major
advantage being that you can test the client and server independently
of each other, since the implementation is so asymmetric. Additionally
testing against something like Glewlwyd would be a great way to stack
coverage. (If we *only* test against a packaged server, though, it'll
be harder to test our stuff in the presence of malfunctions and other
corner cases.)

Thanks,
--Jacob

#51Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#50)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thanks for the feedback,

Having skimmed back through this thread again, I still feel that the
direction that was originally being taken (actually support something in
libpq and the backend, be it with libiddawc or something else or even
our own code, and not just throw hooks in various places) makes a lot
more sense and is a lot closer to how Kerberos and client-side certs and
even LDAP auth work today. That also seems like a much better answer
for our users when it comes to new authentication methods than having
extensions and making libpq developers have to write their own custom
code, not to mention that we'd still need to implement something in psql
to provide such a hook if we are to have psql actually usefully exercise
this, no?

A libpq implementation is the long-term plan. However, our intention is
to start with the protocol implementation, which gives us something to
build on top of.

While device code is the right solution for psql, having that as the
only one could create an incentive to use it in cases it's not
intended for.
A reasonably good implementation should support all of the following:
(1.) authorization code with pkce (for GUI applications)
(2.) device code (for console user logins)
(3.) client secret
(4.) some support for client certificate flow

(1.) and (4.) require more work to get implemented, though they are
necessary to encourage the most secure grant types.
As we didn't have those pieces, we're proposing starting with the
protocol, which can be used by the ecosystem to build token flow
implementations.
Then add the libpq support for individual grant types.

We originally looked at starting with a bare-bones protocol for PG16
and adding libpq support in PG17.
That plan won't happen now, though splitting the work into separate
stages would still make more sense in my opinion.

Several questions to follow up:
(a.) Would you support committing the protocol first? Or do you see
the libpq implementation for grants as a prerequisite for considering
the auth type?
(b.) As of today, the server-side core does not validate that the
token is actually a valid JWT; instead it relies on the extensions to
do the validation.
Do you think the server core should do basic validation before passing
the token to extensions, to prevent the auth type from being used for
anything other than OAuth flows?

Tests are the plan for the commit-ready implementation.

Thanks!
Andrey.


#52Stephen Frost
sfrost@snowman.net
In reply to: Jacob Champion (#50)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Greetings,

* Jacob Champion (jchampion@timescale.com) wrote:

On Mon, Feb 20, 2023 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:

Having skimmed back through this thread again, I still feel that the
direction that was originally being taken (actually support something in
libpq and the backend, be it with libiddawc or something else or even
our own code, and not just throw hooks in various places) makes a lot
more sense and is a lot closer to how Kerberos and client-side certs and
even LDAP auth work today.

Cool, that helps focus the effort. Thanks!

Great, glad to hear that.

That also seems like a much better answer
for our users when it comes to new authentication methods than having
extensions and making libpq developers have to write their own custom
code, not to mention that we'd still need to implement something in psql
to provide such a hook if we are to have psql actually usefully exercise
this, no?

I don't mind letting clients implement their own flows... as long as
it's optional. So even if we did use a hook in the end, I agree that
we've got to exercise it ourselves.

This really doesn't feel like a great area to try and do hooks or
similar in, not the least because that approach has been tried and tried
again (PAM, GSSAPI, SASL would all be examples..) and frankly none of
them has turned out great (which is why we can't just tell people "well,
install the pam_oauth2 and watch everything work!") and this strikes me
as trying to do that yet again but worse as it's not even a dedicated
project trying to solve the problem but more like a side project. SCRAM
was good, we've come a long way thanks to that, this feels like it
should be more in line with that rather than trying to invent yet
another new "generic" set of hooks/APIs that will just cause DBAs and
our users headaches trying to make work.

In the Kerberos test suite we have today, we actually bring up a proper
Kerberos server, set things up, and then test end-to-end installing a
keytab for the server, getting a TGT, getting a service ticket, testing
authentication and encryption, etc. Looking around, it seems like the
equivalent would perhaps be to use Glewlwyd and libiddawc or libcurl and
our own code to really be able to test this and show that it works and
that we're doing it correctly, and to let us know if we break something.

The original patchset includes a test server in Python -- a major
advantage being that you can test the client and server independently
of each other, since the implementation is so asymmetric. Additionally
testing against something like Glewlwyd would be a great way to stack
coverage. (If we *only* test against a packaged server, though, it'll
be harder to test our stuff in the presence of malfunctions and other
corner cases.)

Oh, that's even better- I agree entirely that having test code that can
be instructed to return specific errors so that we can test that our
code responds properly is great (and is why pgbackrest has things like
a stub'd out libpq, fake s3, GCS, and Azure servers, and more) and would
certainly want to keep that, even if we also build out a test that uses
a real server to provide integration testing with not-our-code too.
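To make the idea concrete, here is a minimal sketch of the kind of instructable test endpoint being discussed: an HTTP token endpoint that a test can tell, up front, which OAuth error to return, so the client's error paths can be exercised deterministically. The endpoint path and the error names (e.g. "authorization_pending" from the device flow) are illustrative, not taken from the actual patchset.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockTokenEndpoint(BaseHTTPRequestHandler):
    # Tests set this to force a specific OAuth error response.
    forced_error = None  # e.g. "authorization_pending", "invalid_grant"

    def do_POST(self):
        # Drain the request body before answering.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)

        if self.forced_error is not None:
            body = json.dumps({"error": self.forced_error}).encode()
            self.send_response(400)
        else:
            body = json.dumps({"access_token": "dummy",
                               "token_type": "Bearer"}).encode()
            self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def serve_once(error=None):
    """Serve a single request on an ephemeral port; return the port."""
    MockTokenEndpoint.forced_error = error
    server = HTTPServer(("127.0.0.1", 0), MockTokenEndpoint)
    threading.Thread(target=server.handle_request, daemon=True).start()
    return server.server_port
```

A packaged server like Glewlwyd would never let you inject a malformed or surprising response this easily, which is the corner-case coverage argument above.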

Thanks!

Stephen

#53Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Stephen Frost (#52)
Re: [PoC] Federated Authn/z with OAUTHBEARER

This really doesn't feel like a great area to try and do hooks or
similar in, not the least because that approach has been tried and tried
again (PAM, GSSAPI, SASL would all be examples..) and frankly none of
them has turned out great (which is why we can't just tell people "well,
install the pam_oauth2 and watch everything work!") and this strikes me
as trying to do that yet again but worse as it's not even a dedicated
project trying to solve the problem but more like a side project.

In this case it's not intended to be an open-ended hook, but rather an
implementation of a specific RFC (RFC 7628), which defines the
client-server communication for the authentication flow.
The RFC itself does leave a lot of flexibility on specific parts of
the implementation, and those parts do require hooks:
(1.) A server-side hook to validate the token, which is specific to the
OAuth provider.
(2.) A client-side hook to ask the client to obtain the token.

On (1.), we would need a hook for the OAuth provider extension to do
the validation. We can, though, do some basic checking that the
credential is indeed a JWT signed by the requested issuer.
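That kind of basic check could look roughly like the sketch below: peel the (unverified) claims segment out of the JWT and compare its "iss" claim. This is only the sanity check being described; a real validator must also verify the signature against the issuer's published keys before trusting any claim.

```python
import base64
import json

def jwt_claims(token):
    """Decode the (unverified) claims segment of a JWT."""
    try:
        _header_b64, payload_b64, _sig = token.split(".")
    except ValueError:
        raise ValueError("not a JWT: expected three dot-separated segments")
    # JWTs use unpadded base64url; restore the padding before decoding.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def issuer_matches(token, expected_issuer):
    """Basic check only: does the token even claim the expected issuer?"""
    return jwt_claims(token).get("iss") == expected_issuer
```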

Specifically, (2.) is where we can provide a layer in libpq to simplify
the integration, i.e. implement some OAuth flows.
Though we would need some flexibility for clients to bring their own token:
for example, there are cases where the credential used to obtain the token
is stored in a separate secure location, and the token is returned from
a separate service or pushed from a more secure environment.
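Whichever flow produces the token, what the mechanism ultimately puts on the wire is fixed by RFC 7628: a GS2 header followed by \x01-separated key=value pairs. The actual libpq code is C, of course; this is just a sketch of the initial client response format, with a dummy token.

```python
def oauthbearer_initial_response(token, authzid=None):
    """Assemble the OAUTHBEARER initial client response per RFC 7628:
    gs2-header, kvsep (\\x01), \\x01-terminated key=value pairs, and a
    final kvsep."""
    # gs2-header: no channel binding ("n"), optional authorization identity.
    gs2_header = "n," + (("a=%s," % authzid) if authzid else ",")
    kvpairs = "auth=Bearer %s\x01" % token
    return (gs2_header + "\x01" + kvpairs + "\x01").encode("ascii")
```

For example, with no authzid the message is `n,,\x01auth=Bearer <token>\x01\x01`, which matches the shape of the examples in the RFC.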

another new "generic" set of hooks/APIs that will just cause DBAs and
our users headaches trying to make work.

As I mentioned above, it's an RFC implementation rather than our invention.
When it comes to DBAs and users, a built-in libpq implementation that
allows psql and pgAdmin to connect seamlessly should satisfy those
needs, while extensibility would keep the ecosystem open for OAuth
providers, SaaS developers, PaaS providers, and other institutional
players.

Thanks!
Andrey.

#54Stephen Frost
sfrost@snowman.net
In reply to: Andrey Chudnovsky (#53)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Greetings,

* Andrey Chudnovsky (achudnovskij@gmail.com) wrote:

This really doesn't feel like a great area to try and do hooks or
similar in, not the least because that approach has been tried and tried
again (PAM, GSSAPI, SASL would all be examples..) and frankly none of
them has turned out great (which is why we can't just tell people "well,
install the pam_oauth2 and watch everything work!") and this strikes me
as trying to do that yet again but worse as it's not even a dedicated
project trying to solve the problem but more like a side project.

In this case it's not intended to be an open-ended hook, but rather an
implementation of a specific RFC (RFC 7628), which defines the
client-server communication for the authentication flow.
The RFC itself does leave a lot of flexibility on specific parts of
the implementation, and those parts do require hooks:

Color me skeptical on an RFC that requires hooks.

(1.) A server-side hook to validate the token, which is specific to the
OAuth provider.
(2.) A client-side hook to ask the client to obtain the token.

Perhaps I'm missing it... but weren't these handled with what the
original patch that Jacob had was doing?

On (1.), we would need a hook for the OAuth provider extension to do
the validation. We can, though, do some basic checking that the
credential is indeed a JWT signed by the requested issuer.

Specifically, (2.) is where we can provide a layer in libpq to simplify
the integration, i.e. implement some OAuth flows.
Though we would need some flexibility for clients to bring their own token:
for example, there are cases where the credential used to obtain the token
is stored in a separate secure location, and the token is returned from
a separate service or pushed from a more secure environment.

In those cases... we could, if we wanted, simply implement the code to
actually pull the token, no? We don't *have* to have a hook here for
this, we could just make it work.

another new "generic" set of hooks/APIs that will just cause DBAs and
our users headaches trying to make work.

As I mentioned above, it's an RFC implementation rather than our invention.

While I only took a quick look, I didn't see anything in that RFC that
explicitly says that hooks or a plugin or a library or such is required
to meet the RFC. Sure, there are places which say that the
implementation is specific to a particular server or client but that's
not the same thing.

When it comes to DBAs and users, a built-in libpq implementation that
allows psql and pgAdmin to connect seamlessly should satisfy those
needs, while extensibility would keep the ecosystem open for OAuth
providers, SaaS developers, PaaS providers, and other institutional
players.

Each to end up writing their own code to do largely the same thing
without the benefit of the larger community to be able to review and
ensure that it's done properly?

That doesn't sound like a great approach to me.

Thanks,

Stephen

#55Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#25)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Sep 23, 2022 at 3:39 PM Jacob Champion <jchampion@timescale.com> wrote:

Here's a newly rebased v5. (They're all zipped now, which I probably
should have done a while back, sorry.)

To keep this current, v7 is rebased over latest, without the pluggable
authentication patches. This doesn't yet address the architectural
feedback that was discussed previously, so if you're primarily
interested in that, you can safely ignore this version of the
patchset.

The key changes here include
- Meson support, for both the build and the pytest suite
- Cirrus support (and unsurprisingly, Mac and Windows builds fail due
to the Linux-oriented draft code)
- A small tweak to support iddawc down to 0.9.8 (shipped with e.g.
Debian Bullseye)
- Removal of the authn_id test extension in favor of SYSTEM_USER

The meson+pytest support was big enough that I split it into its own
patch. It's not very polished yet, but it mostly works, and when
running tests via Meson it'll now spin up a test server for you. My
virtualenv approach apparently interacts poorly with the multiarch
Cirrus setup (64-bit tests pass, 32-bit tests fail).

Moving forward, the first thing I plan to tackle is asynchronous
operation, so that polling clients can still operate sanely. If I can
find a good solution there, the conversations about possible extension
points should get a lot easier.

Thanks,
--Jacob

Attachments:

since-v5.diff.txt (text/plain; charset=US-ASCII)
 1:  a94a11d56c <  -:  ---------- Add support for custom authentication methods
 2:  973f622fea <  -:  ---------- Add sample extension to test custom auth provider hooks
 3:  b4a0ab5a4e <  -:  ---------- Add tests for test_auth_provider extension
 4:  49715c3c98 <  -:  ---------- Add support for "map" and custom auth options
 5:  2b2e8d3050 !  1:  c3698bbc3d common/jsonapi: support FRONTEND clients
    @@ src/common/jsonapi.c: parse_object(JsonLexContext *lex, JsonSemAction *sem)
      #endif
      
     @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
    - 	char	   *const end = lex->input + lex->input_length;
    - 	int			hi_surrogate = -1;
    + 		return code; \
    + 	} while (0)
      
     -	if (lex->strval != NULL)
     -		resetStringInfo(lex->strval);
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      	Assert(lex->input_length > 0);
      	s = lex->token_start;
     @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
    - 						return JSON_UNICODE_ESCAPE_FORMAT;
    - 					}
    + 					else
    + 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
      				}
     -				if (lex->strval != NULL)
     +				if (lex->parse_strval)
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
     +						appendPQExpBufferChar(lex->strval, (char) ch);
      					}
      					else
    - 						return JSON_UNICODE_HIGH_ESCAPE;
    + 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
      #endif							/* FRONTEND */
      				}
      			}
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
     +			else if (lex->parse_strval)
      			{
      				if (hi_surrogate != -1)
    - 					return JSON_UNICODE_LOW_SURROGATE;
    + 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
     @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      					case '"':
      					case '\\':
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
     +						appendStrValChar(lex->strval, '\t');
      						break;
      					default:
    - 						/* Not a valid string escape, so signal error. */
    + 
     @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      
      			/*
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      			/*
      			 * s will be incremented at the top of the loop, so set it to just
     @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
    - 	if (hi_surrogate != -1)
      		return JSON_UNICODE_LOW_SURROGATE;
    + 	}
      
     +#ifdef FRONTEND
     +	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
     -}
     -
      /*
    -  * Construct a detail message for a JSON error.
    +  * Construct an (already translated) detail message for a JSON error.
       *
     - * Note that the error message generated by this routine may not be
     - * palloc'd, making it unsafe for frontend code as there is no way to
    -- * know if this can be safery pfree'd or not.
    +- * know if this can be safely pfree'd or not.
     + * The returned allocation is either static or owned by the JsonLexContext and
     + * should not be freed.
       */
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
      			return _("\\u0000 cannot be converted to text.");
      		case JSON_UNICODE_ESCAPE_FORMAT:
     @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    - 			return _("Unicode low surrogate must follow a high surrogate.");
    + 			/* note: this case is only reachable in frontend not backend */
    + 			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
    + 		case JSON_UNICODE_UNTRANSLATABLE:
    +-			/* note: this case is only reachable in backend not frontend */
    ++			/*
    ++			 * note: this case is only reachable in backend not frontend.
    ++			 * #ifdef it away so the frontend doesn't try to link against
    ++			 * backend functionality.
    ++			 */
    ++#ifndef FRONTEND
    + 			return psprintf(_("Unicode escape value could not be translated to the server's encoding %s."),
    + 							GetDatabaseEncodingName());
    ++#else
    ++			Assert(false);
    ++			break;
    ++#endif
    + 		case JSON_UNICODE_HIGH_SURROGATE:
    + 			return _("Unicode high surrogate must not follow a high surrogate.");
    + 		case JSON_UNICODE_LOW_SURROGATE:
    +@@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    + 			break;
      	}
      
     -	/*
    @@ src/include/common/jsonapi.h
      
     -#include "lib/stringinfo.h"
     -
    - typedef enum
    + typedef enum JsonTokenType
      {
      	JSON_TOKEN_INVALID,
    -@@ src/include/common/jsonapi.h: typedef enum
    +@@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
      	JSON_EXPECTED_OBJECT_NEXT,
      	JSON_EXPECTED_STRING,
      	JSON_INVALID_TOKEN,
    @@ src/include/common/jsonapi.h: typedef enum
      	JSON_UNICODE_CODE_POINT_ZERO,
      	JSON_UNICODE_ESCAPE_FORMAT,
      	JSON_UNICODE_HIGH_ESCAPE,
    -@@ src/include/common/jsonapi.h: typedef enum
    - 	JSON_UNICODE_LOW_SURROGATE
    +@@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
    + 	JSON_SEM_ACTION_FAILED		/* error should already be reported */
      } JsonParseErrorType;
      
     +/*
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
     +	StrValType *errormsg;
      } JsonLexContext;
      
    - typedef void (*json_struct_action) (void *state);
    + typedef JsonParseErrorType (*json_struct_action) (void *state);
     @@ src/include/common/jsonapi.h: extern PGDLLIMPORT JsonSemAction nullSemAction;
       */
      extern JsonParseErrorType json_count_array_elements(JsonLexContext *lex,
 6:  c18d7da6cc !  2:  0cd726fd55 libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
         - handle cases where the client has been set up with an issuer and
           scope, but the Postgres server wants to use something different
         - improve error debuggability during the OAuth handshake
    +    - migrate JSON parsing to the new JSON_SEM_ACTION_FAILED API convention
         - ...and more.
     
      ## configure ##
    @@ configure: fi
     +  as_fn_error $? "library 'iddawc' is required for OAuth support" "$LINENO" 5
     +fi
     +
    ++  # Check for an older spelling of i_get_openid_config
    ++  for ac_func in i_load_openid_config
    ++do :
    ++  ac_fn_c_check_func "$LINENO" "i_load_openid_config" "ac_cv_func_i_load_openid_config"
    ++if test "x$ac_cv_func_i_load_openid_config" = xyes; then :
    ++  cat >>confdefs.h <<_ACEOF
    ++#define HAVE_I_LOAD_OPENID_CONFIG 1
    ++_ACEOF
    ++
    ++fi
    ++done
    ++
     +fi
     +
      # for contrib/sepgsql
    @@ configure.ac: fi
      
     +if test "$with_oauth" = yes ; then
     +  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for OAuth support])])
    ++  # Check for an older spelling of i_get_openid_config
    ++  AC_CHECK_FUNCS([i_load_openid_config])
     +fi
     +
      # for contrib/sepgsql
    @@ configure.ac: elif test "$with_uuid" = ossp ; then
         AC_CHECK_HEADERS(crtdefs.h)
      fi
     
    + ## meson.build ##
    +@@ meson.build: endif
    + 
    + 
    + 
    ++###############################################################
    ++# Library: oauth
    ++###############################################################
    ++
    ++oauthopt = get_option('oauth')
    ++if not oauthopt.disabled()
    ++  oauth = dependency('libiddawc', required: oauthopt)
    ++
    ++  if oauth.found()
    ++    cdata.set('USE_OAUTH', 1)
    ++    # Check for an older spelling of i_get_openid_config
    ++    if cc.has_function('i_load_openid_config',
    ++                       dependencies: oauth, args: test_c_args)
    ++      cdata.set('HAVE_I_LOAD_OPENID_CONFIG', 1)
    ++    endif
    ++  endif
    ++else
    ++  oauth = not_found_dep
    ++endif
    ++
    ++
    + ###############################################################
    + # Library: Tcl (for pltcl)
    + #
    +@@ meson.build: libpq_deps += [
    +   gssapi,
    +   ldap_r,
    +   libintl,
    ++  oauth,
    +   ssl,
    + ]
    + 
    +@@ meson.build: if meson.version().version_compare('>=0.57')
    +       'llvm': llvm,
    +       'lz4': lz4,
    +       'nls': libintl,
    ++      'oauth': oauth,
    +       'openssl': ssl,
    +       'pam': pam,
    +       'plperl': perl_dep,
    +
    + ## meson_options.txt ##
    +@@ meson_options.txt: option('lz4', type : 'feature', value: 'auto',
    + option('nls', type: 'feature', value: 'auto',
    +   description: 'native language support')
    + 
    ++option('oauth', type : 'feature', value: 'auto',
    ++  description: 'OAuth 2.0 support')
    ++
    + option('pam', type : 'feature', value: 'auto',
    +   description: 'build with PAM support')
    + 
    +
      ## src/Makefile.global.in ##
     @@ src/Makefile.global.in: with_ldap	= @with_ldap@
      with_libxml	= @with_libxml@
    @@ src/Makefile.global.in: with_ldap	= @with_ldap@
      with_uuid	= @with_uuid@
      with_zlib	= @with_zlib@
     
    + ## src/common/meson.build ##
    +@@ src/common/meson.build: common_sources_frontend_static += files(
    + # For the server build of pgcommon, depend on lwlocknames_h, because at least
    + # cryptohash_openssl.c, hmac_openssl.c depend on it. That's arguably a
    + # layering violation, but ...
    ++#
    ++# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
    ++# appropriately. This seems completely broken.
    + pgcommon = {}
    + pgcommon_variants = {
    +   '_srv': internal_lib_args + {
    ++    'include_directories': include_directories('.'),
    +     'sources': common_sources + [lwlocknames_h],
    +     'dependencies': [backend_common_code],
    +   },
    +   '': default_lib_args + {
    ++    'include_directories': include_directories('../interfaces/libpq', '.'),
    +     'sources': common_sources_frontend_static,
    +     'dependencies': [frontend_common_code],
    +   },
    +   '_shlib': default_lib_args + {
    +     'pic': true,
    ++    'include_directories': include_directories('../interfaces/libpq', '.'),
    +     'sources': common_sources_frontend_shlib,
    +     'dependencies': [frontend_common_code],
    +   },
    +@@ src/common/meson.build: foreach name, opts : pgcommon_variants
    +     c_args = opts.get('c_args', []) + common_cflags[cflagname]
    +     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
    +       c_pch: pch_c_h,
    +-      include_directories: include_directories('.'),
    +       kwargs: opts + {
    +         'sources': sources,
    +         'c_args': c_args,
    +@@ src/common/meson.build: foreach name, opts : pgcommon_variants
    +   lib = static_library('libpgcommon@0@'.format(name),
    +       link_with: cflag_libs,
    +       c_pch: pch_c_h,
    +-      include_directories: include_directories('.'),
    +       kwargs: opts + {
    +         'dependencies': opts['dependencies'] + [ssl],
    +       }
    +
      ## src/include/common/oauth-common.h (new) ##
     @@
     +/*-------------------------------------------------------------------------
    @@ src/include/common/oauth-common.h (new)
     +#endif /* OAUTH_COMMON_H */
     
      ## src/include/pg_config.h.in ##
    +@@
    + /* Define to 1 if __builtin_constant_p(x) implies "i"(x) acceptance. */
    + #undef HAVE_I_CONSTRAINT__BUILTIN_CONSTANT_P
    + 
    ++/* Define to 1 if you have the `i_load_openid_config' function. */
    ++#undef HAVE_I_LOAD_OPENID_CONFIG
    ++
    + /* Define to 1 if you have the `kqueue' function. */
    + #undef HAVE_KQUEUE
    + 
     @@
      /* Define to 1 if you have the `crypto' library (-lcrypto). */
      #undef HAVE_LIBCRYPTO
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +#include "fe-auth.h"
     +#include "mb/pg_wchar.h"
     +
    ++#ifdef HAVE_I_LOAD_OPENID_CONFIG
    ++/* Older versions of iddawc used 'load' instead of 'get' for some APIs. */
    ++#define i_get_openid_config i_load_openid_config
    ++#endif
    ++
     +/* The exported OAuth callback mechanism. */
     +static void *oauth_init(PGconn *conn, const char *password,
     +						const char *sasl_mechanism);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		(ctx)->errmsg = (ctx)->errbuf.data; \
     +	} while (0)
     +
    -+static void
    ++static JsonParseErrorType
     +oauth_json_object_start(void *state)
     +{
     +	struct json_ctx	   *ctx = state;
     +
     +	if (oauth_json_has_error(ctx))
    -+		return; /* short-circuit */
    ++		return JSON_SUCCESS; /* short-circuit */
     +
     +	if (ctx->target_field)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +
     +	++ctx->nested;
    ++	return JSON_SUCCESS; /* TODO: switch all of these to JSON_SEM_ACTION_FAILED */
     +}
     +
    -+static void
    ++static JsonParseErrorType
     +oauth_json_object_end(void *state)
     +{
     +	struct json_ctx	   *ctx = state;
     +
     +	if (oauth_json_has_error(ctx))
    -+		return; /* short-circuit */
    ++		return JSON_SUCCESS; /* short-circuit */
     +
     +	--ctx->nested;
    ++	return JSON_SUCCESS;
     +}
     +
    -+static void
    ++static JsonParseErrorType
     +oauth_json_object_field_start(void *state, char *name, bool isnull)
     +{
     +	struct json_ctx	   *ctx = state;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	{
     +		/* short-circuit */
     +		free(name);
    -+		return;
    ++		return JSON_SUCCESS;
     +	}
     +
     +	if (ctx->nested == 1)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +
     +	free(name);
    ++	return JSON_SUCCESS;
     +}
     +
    -+static void
    ++static JsonParseErrorType
     +oauth_json_array_start(void *state)
     +{
     +	struct json_ctx	   *ctx = state;
     +
     +	if (oauth_json_has_error(ctx))
    -+		return; /* short-circuit */
    ++		return JSON_SUCCESS; /* short-circuit */
     +
     +	if (!ctx->nested)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +							 libpq_gettext("field \"%s\" must be a string"),
     +							 ctx->target_field_name);
     +	}
    ++
    ++	return JSON_SUCCESS;
     +}
     +
    -+static void
    ++static JsonParseErrorType
     +oauth_json_scalar(void *state, char *token, JsonTokenType type)
     +{
     +	struct json_ctx	   *ctx = state;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	{
     +		/* short-circuit */
     +		free(token);
    -+		return;
    ++		return JSON_SUCCESS;
     +	}
     +
     +	if (!ctx->nested)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			ctx->target_field = NULL;
     +			ctx->target_field_name = NULL;
     +
    -+			return; /* don't free the token we're using */
    ++			return JSON_SUCCESS; /* don't free the token we're using */
     +		}
     +
     +		oauth_json_set_error(ctx,
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +
     +	free(token);
    ++	return JSON_SUCCESS;
     +}
     +
     +static bool
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, b
      						 &done, &success);
     
      ## src/interfaces/libpq/fe-auth.h ##
    -@@ src/interfaces/libpq/fe-auth.h: extern const pg_fe_sasl_mech pg_scram_mech;
    - extern char *pg_fe_scram_build_secret(const char *password,
    +@@ src/interfaces/libpq/fe-auth.h: extern char *pg_fe_scram_build_secret(const char *password,
    + 									  int iterations,
      									  const char **errstr);
      
     +/* Mechanisms in fe-auth-oauth.c */
    @@ src/interfaces/libpq/fe-auth.h: extern const pg_fe_sasl_mech pg_scram_mech;
     
      ## src/interfaces/libpq/fe-connect.c ##
     @@ src/interfaces/libpq/fe-connect.c: static const internalPQconninfoOption PQconninfoOptions[] = {
    - 		"Target-Session-Attrs", "", 15, /* sizeof("prefer-standby") = 15 */
    - 	offsetof(struct pg_conn, target_session_attrs)},
    + 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
    + 	offsetof(struct pg_conn, load_balance_hosts)},
      
     +	/* OAuth v2 */
     +	{"oauth_issuer", NULL, NULL, NULL,
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
      
      					/*
     @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
    - 	free(conn->outBuffer);
      	free(conn->rowBuf);
      	free(conn->target_session_attrs);
    + 	free(conn->load_balance_hosts);
     +	free(conn->oauth_issuer);
     +	free(conn->oauth_discovery_uri);
     +	free(conn->oauth_client_id);
    @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
     
      ## src/interfaces/libpq/libpq-int.h ##
     @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
    - 	char	   *ssl_max_protocol_version;	/* maximum TLS protocol version */
    - 	char	   *target_session_attrs;	/* desired session properties */
    + 	char	   *require_auth;	/* name of the expected auth method */
    + 	char	   *load_balance_hosts; /* load balance over hosts */
      
     +	/* OAuth v2 */
     +	char	   *oauth_issuer;			/* token issuer URL */
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      	/* Optional file to write trace info to */
      	FILE	   *Pfdebug;
      	int			traceFlags;
    +
    + ## src/interfaces/libpq/meson.build ##
    +@@ src/interfaces/libpq/meson.build: if gssapi.found()
    +   )
    + endif
    + 
    ++if oauth.found()
    ++  libpq_sources += files('fe-auth-oauth.c')
    ++endif
    ++
    + export_file = custom_target('libpq.exports',
    +   kwargs: gen_export_kwargs,
    + )
    +
    + ## src/makefiles/meson.build ##
    +@@ src/makefiles/meson.build: pgxs_deps = {
    +   'llvm': llvm,
    +   'lz4': lz4,
    +   'nls': libintl,
    ++  'oauth': oauth,
    +   'pam': pam,
    +   'perl': perl_dep,
    +   'python': python3_dep,
 7:  f6a81f50f2 !  3:  77889eb986 backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/libpq/auth.c
      #include "libpq/pqformat.h"
      #include "libpq/sasl.h"
      #include "libpq/scram.h"
    +@@
    +  */
    + static void auth_failed(Port *port, int status, const char *logdetail);
    + static char *recv_password_packet(Port *port);
    +-static void set_authn_id(Port *port, const char *id);
    + 
    + 
    + /*----------------------------------------------------------------
    +@@ src/backend/libpq/auth.c: static int	CheckRADIUSAuth(Port *port);
    + static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
    + 
    + 
    +-/*
    +- * Maximum accepted size of GSS and SSPI authentication tokens.
    +- * We also use this as a limit on ordinary password packet lengths.
    +- *
    +- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
    +- * domain controllers include an authorization field known as the Privilege
    +- * Attribute Certificate (PAC), which contains the user's Windows permissions
    +- * (group memberships etc.). The PAC is copied into all tickets obtained on
    +- * the basis of this TGT (even those issued by Unix realms which the Windows
    +- * realm trusts), and can be several kB in size. The maximum token size
    +- * accepted by Windows systems is determined by the MaxAuthToken Windows
    +- * registry setting. Microsoft recommends that it is not set higher than
    +- * 65535 bytes, so that seems like a reasonable limit for us as well.
    +- */
    +-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
    +-
    + /*----------------------------------------------------------------
    +  * Global authentication functions
    +  *----------------------------------------------------------------
     @@ src/backend/libpq/auth.c: auth_failed(Port *port, int status, const char *logdetail)
      		case uaRADIUS:
      			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
    @@ src/backend/libpq/auth.c: auth_failed(Port *port, int status, const char *logdet
     +		case uaOAuth:
     +			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
     +			break;
    - 		case uaCustom:
    - 			{
    - 				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
    + 		default:
    + 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
    + 			break;
    +@@ src/backend/libpq/auth.c: auth_failed(Port *port, int status, const char *logdetail)
    +  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
    +  * managed by an external library.
    +  */
    +-static void
    ++void
    + set_authn_id(Port *port, const char *id)
    + {
    + 	Assert(id);
     @@ src/backend/libpq/auth.c: ClientAuthentication(Port *port)
      		case uaTrust:
      			status = STATUS_OK;
    @@ src/backend/libpq/auth.c: ClientAuthentication(Port *port)
     +		case uaOAuth:
     +			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
     +			break;
    - 		case uaCustom:
    - 			{
    - 				CustomAuthProvider *provider = get_provider_by_name(port->hba->custom_provider);
    + 	}
    + 
    + 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
     
      ## src/backend/libpq/hba.c ##
     @@ src/backend/libpq/hba.c: static const char *const UserAuthName[] =
    + 	"ldap",
      	"cert",
      	"radius",
    - 	"custom",
     -	"peer"
     +	"peer",
     +	"oauth",
      };
      
    - 
    + /*
     @@ src/backend/libpq/hba.c: parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
      #endif
      	else if (strcmp(token->string, "radius") == 0)
      		parsedline->auth_method = uaRADIUS;
     +	else if (strcmp(token->string, "oauth") == 0)
     +		parsedline->auth_method = uaOAuth;
    - 	else if (strcmp(token->string, "custom") == 0)
    - 		parsedline->auth_method = uaCustom;
      	else
    + 	{
    + 		ereport(elevel,
     @@ src/backend/libpq/hba.c: parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
    + 			hbaline->auth_method != uaPeer &&
      			hbaline->auth_method != uaGSS &&
      			hbaline->auth_method != uaSSPI &&
    - 			hbaline->auth_method != uaCert &&
    -+			hbaline->auth_method != uaOAuth &&
    - 			hbaline->auth_method != uaCustom)
    --			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert and custom"));
    -+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, oauth, and custom"));
    +-			hbaline->auth_method != uaCert)
    +-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
    ++			hbaline->auth_method != uaCert &&
    ++			hbaline->auth_method != uaOAuth)
    ++			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
      		hbaline->usermap = pstrdup(val);
      	}
      	else if (strcmp(name, "clientcert") == 0)
    @@ src/backend/libpq/hba.c: parse_hba_auth_opt(char *name, char *val, HbaLine *hbal
     +		else
     +			hbaline->oauth_skip_usermap = false;
     +	}
    - 	else if (strcmp(name, "provider") == 0)
    + 	else
      	{
    - 		REQUIRE_AUTH_OPTION(uaCustom, "provider", "custom");
    + 		ereport(elevel,
    +
    + ## src/backend/libpq/meson.build ##
    +@@
    + # Copyright (c) 2022-2023, PostgreSQL Global Development Group
    + 
    + backend_sources += files(
    ++  'auth-oauth.c',
    +   'auth-sasl.c',
    +   'auth-scram.c',
    +   'auth.c',
     
      ## src/backend/utils/misc/guc_tables.c ##
     @@
    @@ src/backend/utils/misc/guc_tables.c
      #include "libpq/auth.h"
      #include "libpq/libpq.h"
     +#include "libpq/oauth.h"
    + #include "libpq/scram.h"
    + #include "nodes/queryjumble.h"
      #include "optimizer/cost.h"
    - #include "optimizer/geqo.h"
    - #include "optimizer/optimizer.h"
     @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[] =
    - 		check_backtrace_functions, assign_backtrace_functions, NULL
    + 		check_io_direct, assign_io_direct, NULL
      	},
      
     +	{
     +		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
     +			gettext_noop("Command to validate OAuth v2 bearer tokens."),
     +			NULL,
    -+			GUC_SUPERUSER_ONLY
    ++			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
     +		},
     +		&oauth_validator_command,
     +		"",
    @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[]
      	{
      		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
     
    + ## src/include/libpq/auth.h ##
    +@@
    + 
    + #include "libpq/libpq-be.h"
    + 
    ++/*
    ++ * Maximum accepted size of GSS and SSPI authentication tokens.
    ++ * We also use this as a limit on ordinary password packet lengths.
    ++ *
    ++ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
    ++ * domain controllers include an authorization field known as the Privilege
    ++ * Attribute Certificate (PAC), which contains the user's Windows permissions
    ++ * (group memberships etc.). The PAC is copied into all tickets obtained on
    ++ * the basis of this TGT (even those issued by Unix realms which the Windows
    ++ * realm trusts), and can be several kB in size. The maximum token size
    ++ * accepted by Windows systems is determined by the MaxAuthToken Windows
    ++ * registry setting. Microsoft recommends that it is not set higher than
    ++ * 65535 bytes, so that seems like a reasonable limit for us as well.
    ++ */
    ++#define PG_MAX_AUTH_TOKEN_LENGTH	65535
    ++
    + extern PGDLLIMPORT char *pg_krb_server_keyfile;
    + extern PGDLLIMPORT bool pg_krb_caseins_users;
    + extern PGDLLIMPORT bool pg_gss_accept_deleg;
    +@@ src/include/libpq/auth.h: extern PGDLLIMPORT char *pg_krb_realm;
    + extern void ClientAuthentication(Port *port);
    + extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
    + 							int extralen);
    ++extern void set_authn_id(Port *port, const char *id);
    + 
    + /* Hook for plugins to get control in ClientAuthentication() */
    + typedef void (*ClientAuthentication_hook_type) (Port *, int);
    +
      ## src/include/libpq/hba.h ##
     @@ src/include/libpq/hba.h: typedef enum UserAuth
    + 	uaLDAP,
      	uaCert,
      	uaRADIUS,
    - 	uaCustom,
     -	uaPeer
     -#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
     +	uaPeer,
    @@ src/include/libpq/hba.h: typedef struct HbaLine
     +	char	   *oauth_issuer;
     +	char	   *oauth_scope;
     +	bool		oauth_skip_usermap;
    - 	char	   *custom_provider;
    - 	List	   *custom_auth_options;
      } HbaLine;
    + 
    + typedef struct IdentLine
     
      ## src/include/libpq/oauth.h (new) ##
     @@
 8:  e71df89b8f <  -:  ---------- Add a very simple authn_id extension
 9:  73adeb3645 !  4:  573a2ca3bc Add pytest suite for OAuth
    @@ src/test/python/README (new)
     +A test suite for exercising both the libpq client and the server backend at the
     +protocol level, based on pytest and Construct.
     +
    ++WARNING! This suite takes superuser-level control of the cluster under test,
    ++writing to the server config, creating and destroying databases, etc. It also
    ++spins up various ephemeral TCP services. This is not safe for production servers
    ++and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
    ++the environment.
    ++
     +The test suite currently assumes that the standard PG* environment variables
     +point to the database under test and are sufficient to log in a superuser on
     +that system. In other words, a bare `psql` needs to Just Work before the test
    @@ src/test/python/README (new)
     +
     +The first run of
     +
    -+    make installcheck
    ++    make installcheck PG_TEST_EXTRA=python
     +
     +will install a local virtual environment and all needed dependencies. During
     +development, if libpq changes incompatibly, you can issue
    @@ src/test/python/README (new)
     +The Makefile is there for convenience, but you don't have to use it. Activate
     +the virtualenv to be able to use pytest directly:
     +
    ++    $ export PG_TEST_EXTRA=python
     +    $ source venv/bin/activate
     +    $ py.test -k oauth
     +    ...
    @@ src/test/python/client/test_oauth.py (new)
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
     
    + ## src/test/python/conftest.py (new) ##
    +@@
    ++#
    ++# Copyright 2023 Timescale, Inc.
    ++# SPDX-License-Identifier: PostgreSQL
    ++#
    ++
    ++import os
    ++
    ++import pytest
    ++
    ++
    ++@pytest.fixture(scope="session", autouse=True)
    ++def _check_PG_TEST_EXTRA(request):
    ++    """
    ++    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
    ++    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
    ++    I've made this an autoused fixture instead.
    ++
    ++    TODO: there are tests here that are probably safe, but until I do a full
    ++    analysis on which are and which are not, I've made the entire thing opt-in.
    ++    """
    ++    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
    ++    if "python" not in extra_tests:
    ++        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
    +
      ## src/test/python/pq3.py (new) ##
     @@
     +#
    @@ src/test/python/pq3.py (new)
     +
     +# Pq3
     +
    ++
     +# Adapted from construct.core.EnumIntegerString
     +class EnumNamedByte:
     +    def __init__(self, val, name):
    @@ src/test/python/pytest.ini (new)
      ## src/test/python/requirements.txt (new) ##
     @@
     +black
    -+cryptography~=3.4.6
    ++# cryptography 39.x removes a lot of platform support, beware
    ++cryptography~=38.0.4
     +construct~=2.10.61
     +isort~=5.6
    ++# TODO: update to psycopg[c] 3.1
     +psycopg2~=2.8.6
    -+pytest~=6.1
    -+pytest-asyncio~=0.14.0
    ++pytest~=7.3
    ++pytest-asyncio~=0.21.0
     
      ## src/test/python/server/__init__.py (new) ##
     
    @@ src/test/python/server/conftest.py (new)
     +
     +import pq3
     +
    ++BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
    ++
     +
     +@pytest.fixture
     +def connect():
    @@ src/test/python/server/conftest.py (new)
     +            addr = (pq3.pghost(), pq3.pgport())
     +
     +            try:
    -+                sock = socket.create_connection(addr, timeout=2)
    ++                sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
     +            except ConnectionError as e:
     +                pytest.skip(f"unable to connect to {addr}: {e}")
     +
    @@ src/test/python/server/test_oauth.py (new)
     @@
     +#
     +# Copyright 2021 VMware, Inc.
    ++# Portions Copyright 2023 Timescale, Inc.
     +# SPDX-License-Identifier: PostgreSQL
     +#
     +
    @@ src/test/python/server/test_oauth.py (new)
     +
     +import pq3
     +
    ++from .conftest import BLOCKING_TIMEOUT
    ++
     +MAX_SASL_MESSAGE_LENGTH = 65535
     +
     +INVALID_AUTHORIZATION_ERRCODE = b"28000"
    @@ src/test/python/server/test_oauth.py (new)
     +
     +SHARED_MEM_NAME = "oauth-pytest"
     +MAX_TOKEN_SIZE = 4096
    -+MAX_UINT16 = 2 ** 16 - 1
    ++MAX_UINT16 = 2**16 - 1
     +
     +
     +def skip_if_no_postgres():
    @@ src/test/python/server/test_oauth.py (new)
     +    addr = (pq3.pghost(), pq3.pgport())
     +
     +    try:
    -+        with socket.create_connection(addr, timeout=2):
    ++        with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
     +            pass
     +    except ConnectionError as e:
     +        pytest.skip(f"unable to connect to {addr}: {e}")
    @@ src/test/python/server/test_oauth.py (new)
     +    return connect()
     +
     +
    -+@pytest.fixture(scope="module", autouse=True)
    -+def authn_id_extension(oauth_ctx):
    -+    """
    -+    Performs a `CREATE EXTENSION authn_id` in the test database. This fixture is
    -+    autoused, so tests don't need to rely on it.
    -+    """
    -+    conn = psycopg2.connect(database=oauth_ctx.dbname)
    -+    conn.autocommit = True
    -+
    -+    with contextlib.closing(conn):
    -+        c = conn.cursor()
    -+        c.execute("CREATE EXTENSION authn_id;")
    -+
    -+
     +@pytest.fixture(scope="session")
     +def shared_mem():
     +    """
    @@ src/test/python/server/test_oauth.py (new)
     +    expect_handshake_success(conn)
     +
     +    # Make sure that the server has not set an authenticated ID.
    -+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
    ++    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
     +    resp = receive_until(conn, pq3.types.DataRow)
     +
     +    row = resp.payload
    @@ src/test/python/server/test_oauth.py (new)
     +    expect_handshake_success(conn)
     +
     +    # Check the reported authn_id.
    -+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
    ++    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
     +    resp = receive_until(conn, pq3.types.DataRow)
     +
    ++    expected = authn_id
    ++    if expected is not None:
    ++        expected = b"oauth:" + expected
    ++
     +    row = resp.payload
    -+    assert row.columns == [authn_id]
    ++    assert row.columns == [expected]
     +
     +
     +class ExpectedError(object):
    @@ src/test/python/server/test_oauth.py (new)
     +    expect_handshake_success(conn)
     +
     +    # Check the user identity.
    -+    pq3.send(conn, pq3.types.Query, query=b"SELECT authn_id();")
    ++    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
     +    resp = receive_until(conn, pq3.types.DataRow)
     +
     +    row = resp.payload
    -+    expected = oauth_ctx.user.encode("utf-8")
    ++    expected = b"oauth:" + oauth_ctx.user.encode("utf-8")
     +    assert row.columns == [expected]
     +
     +
    @@ src/test/python/server/validate_bearer.py (new)
     +import sys
     +from multiprocessing import shared_memory
     +
    -+MAX_UINT16 = 2 ** 16 - 1
    ++MAX_UINT16 = 2**16 - 1
     +
     +
     +def remove_shm_from_resource_tracker():
10:  ab32128f34 <  -:  ---------- contrib/oauth: switch to pluggable auth API
 -:  ---------- >  5:  4490d029b5 squash! Add pytest suite for OAuth
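For anyone skimming the range-diff above, here is a rough sketch of how the server-side pieces fit together once the series is applied. The `oauth` HBA method name and the `oauth_validator_command` GUC come directly from the hunks in this series; the HBA option names are inferred from the `oauth_issuer`/`oauth_scope` fields added to `HbaLine`, and the issuer, scope, and validator path are placeholders, so treat this as illustrative rather than authoritative:

```
# pg_hba.conf: authenticate via OAUTHBEARER tokens
# (option names inferred from HbaLine; values are placeholders)
host  mydb  all  0.0.0.0/0  oauth  issuer="https://oauth.example.org" scope="openid"

# postgresql.conf: external program that validates bearer tokens
# (SIGHUP-level, superuser-only per the guc_tables.c hunk)
oauth_validator_command = '/usr/local/bin/validate_bearer'
```

With this in place, the backend dispatches `uaOAuth` connections through `CheckSASLAuth(&pg_be_oauth_mech, ...)` and hands the presented token to the validator, which decides authentication and/or authorization as described at the top of the thread.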
Attachments:
- v7-0001-common-jsonapi-support-FRONTEND-clients.patch.gz (application/gzip)
- v7-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/gzip)
- v7-0004-Add-pytest-suite-for-OAuth.patch.gz (application/gzip)
��T��je��\���@!x���"��3����m#�*�	\�G�*1�6�/3!�i�sb
pO�4���
�]+�KU(_s�22��.@c.�B�uB&.��	�=��j�`@y��G�1��]\c��Q_B(Sn`���x�\���<���S9�Q,,�����4�p��w�����"[���y2V����{�S@�cOq���3P�K���9��^�Z�E$���3�\P2��;{��,V**����`C��Lz�����&�l#��V�[�kA# ��D�e-���������M�L7���4�@#��`���h��l^1��7~)7Y3�UlI�a���E
(<�����>�E����)@�'0�)�OPd����m��7�����9�6��5��$�cR�#%�J�p!�
1�����y,�>���(�#u�c^��&��u1�i[5�p�-��!��dF	w���$+h��7��A��-�Ha�
t�7�U<
je�����P�oe�����8� z��D�i�o���#��A���w{�W�^���om���.�\�b�G`����������
U	�t���E�}s��ZoH��E��`	�Y��vSp G��-.�`��z��/��/t]��l]�$�F�,�/tmxQ���	�`�� �Y�_(���\���e7}�$�~�U~i[�.�*#�
�h��]��R�� �0j���A�_�>�����4J+��%�A|*�����v!E���
��YZ�C�x�i�?�����|�(�A��`D:t_�)+�{�[�VW9j3��8���u����|i;���U9RD>8��|�'_��\y���(^U���G��Y���h�R�uw?��d��@=�oH#����
��[��YD!@�d��_�I�QLZ�2s�-(S�����+��w_��HE)^f�+A
��������	��������2$%��9}�[�Sr�B���F�6T�=�FH��R���,��\y�g`A����T���h����vYsx��`����F����[�����R�~��9���2�l������$D%���t��q:��>R�?	U�e����:S�����60�QU��?)�D�6[G��)0��n&��b�?����A|���l��� ����9��t�6Z_Bn��G��o���������*�.f�/���gqC����j>h���Z|������FTc��o�\���z�M0K�
�K��Y���4b��/�z�j�[t��<��gp�@����X����m���x��D(<<?���_��$&�l#���K���G���44���HUXv�/��f�*�<���U���a��+�-�er@P�hHe+Y�/1��a�)�$L�U5�o�H/o`���X���|��f�����R�Gn��Wx������ u�[�ie���.A_H��3�",��op�I9����/Q�8��`�b3S=d9�.�L9�;�-M���k�b��!�,��%���L �I��I�u�P�x`&Qhj��.u�CXQ���Lz�K���b|�Br7*���"�2=b\H1�4�����n��*9���6%������a
�b���L�HmU�b�dI�f���f���&UWn��5l��-�B�ze��3�#�4�Fe.���1��C��BF��/z�;��<c�R����,�gpU�nr=R�7�W�{M�;���1��
����p�~�8�{�8I���!��d%�H;���g���>�S9�TR���
{��x��N�W!�
��x��<�,x��A�*EL$�����cOG��]���7@�2��%i+��6���* ����P�	g�����%q^�e�{�W���:��������?�=%��$�&y����nP����c����<O'�V�����J���H���)�aC��qp2�z���0U)���$�����7��E��A��7?�N�x��x�d��������++��Z�W�Iw�sU����Z�A��6�����N�r_�=���!�W�\[����*@��O��7��{�x��W+++�j/�X��?~\����V�k��fc�|��W������5f��E7�/�/�"5��Q7��Go����*�&bIWt��]y����6��cqb�VU�.c�P`��0`��FU�����`���,'x1>��fu#{=�?��Q�k�u�����c�n����RXT�b�r�)�-�����\�6�'k�^��sp��g�ls�a��`u�OV;�h5]��������a���8�<8~ypD2�����o@����g ��)*�
�
\����FY�
��"]� +=�!U�g
Y�"E�@^�=���)w\����~��764�orH�P�M����B5�-���/P{p9���
�S����x�2���K���<�p��~����E:�F���<I�t�>l-��D���@B��j<�E�@��g:�h[�����#���^����Idqz����J�k��M��G������M3A��g�6������I{:����l�2c�g#r��d�L�\b9���J����R��`��p��D��xCrQ�*9I88R�kr�g��8�]]��N��_�MNW1����f��,}�����vA��A1�����E3�l�GG}�I�	!�A���<}��TH����9���"I6 "�������p�Ie�����M�c�<�)��(���$/���������j	\�����ooq����%Jf���q*wFH�j�=��0��,�/�B:�y����Mt,`�41��������eZ'�/�]����D���1P�9s$�������i������	���u-��~`�yp����={�������?�yvdL���>q:0XD��]��nr=#>i9M%Q�%.BC��#V<.g���%��	�J�x��eD�*�i�y�!3]��i�>_R���~�H'�,86����:�L*8^JB��9g#�������|e��V�H'Z�zd���xU>Ez���m�{1��=P�]NR��R\s�*������5+�r��(�0@P�S����������������VrX��%�${��F�w�����b��k"B������(z�������(�RP���}�rkF�-�=(8z�����Z��k����O��G�=��n��9J���I:���m���C���@��r�Rs�x����V*)ul
e*�
Y�!S��Ky+\i�U�������E��Y�I�0RH�;���a2���tC�vL:��+�WuoW���,^�m�3-���~0�����tj��+�8��aS��H��������n�k	�/�+p��t�&��H�*�i�pU�p}���$;����B�C�1������2!8�FT���� ��H���}��See"{}�����������h��q�\y�I�k�zg�<NF�S<�(�*t�����
Q�(���R�C�!>�tq���y}��@��G� &Z���:��t�)uC����W�$30��x�=�bQW�����}���(�,v������m��r��q��pdfP�!��B��J
����!��#�=����P��X��$�O�����J����)��xr2�4��O��
���'��?�
 :"[�M�`hW�z�{�Y�H����qI��,��|0e��\�..���%I�^�JGA[��
�E�5;�a?
�g2CD���[))��',���.���!�������q�����uZ�*M�<��2��o��t�l����f��lo�;���`�i���`�l���$��;k���Y	F��&���v����`g3��
}�Oq����M��w����`�I��4��+4��U����@a�&S4�����s��h�9`}���[M]�q���_����O�t��4S�UW���2����j��������`�����������!����{'�R��t1,��n��5g�UP�%��F
S��H���4R�6�?�j#Sd�xp(���|@������>��
A�N�����0~f3D�Z�'H�cpF���iki �Za�a��`X�'�u:-��7������_����V��g���K�����������'�ia�������;���,�@�����7-�L��|90U[&A��4��� ����2�^���eK)KKo�2��Zw-Y��~����r��u�l�Gq/�����;G�������yb�.���3�%n����9:E�l#�[�+G||���Z�&$�l��n`h����V�9����7�3yc��qj�5���n��f����6�[K_�����BZ[��{��Y	_KwRkO�I����9s�8�Qv�������x:oC<���3iZ���X
6&������7���t�.5�r��E�[7�6�Z]O��������]��T�|����\�_=C�&�H&9�\7��*��G�����3��Y�����fF9�A0��u��df�%q9{R�����5�ci����uM��)b	���e5�D���Jm���+��cq�JN�Z����Hn���R�� �����\�����y�p�{������vk�����y�\_.�01\��A�M��e�S���MG����S2�:�[!;�@�������?sG������C����fJ0��"�U��F8�������S��9rG/�d#�4�g~t}����.�S���"G���m��?��;�-�m\�w����A���w��C�����o�]�.[��>���x�i<�aR�>���3�9�x8���i��Y��2�w5����G.��o�d�X��p��J|��y{"��f��n~~b���U���2{�mv�J����*���@(	����#��!(+>�P�j�����[��|A����Y)j��b���)��_�����{u�+���B�^Dqy�q�\a�,)tp'`���v������Xb%e�^�NHc��u����z>����������Z���g
r}����J-_s��	��?�jk��`�5����u�b9KTm�~�j��)�����Z_jc1m���'�s��j�g�h��s���������3�5|�n��j�S��?^��9�Xja�j��������^v�����i�}���;
{~:�"6�7�*��j���5.5��M_:��[��a2�e��e�d�=���%a��R���y=o��Y$��5�g������r�]j����|���@[���l;�>�_W����/^���<o~]eNI�(��9-�|����g�����m��]yB�6�����������-�<�Tg��y��j�U�"���D�_�x1e�A����K ������(-�-vB�QE����{S��U��f!2@���P�����V$Ce"Ltv�����]��-V�IKa,�f`��0�'X��8�b�%�����X��Li!,qy�O����,����V����_R�����+o_�|�������sOK�iSg��\9]q��zz�[Rv�{v����Du7X�[�>��-'�ZN"w�e[V�uW��%�SUek��p{a��;r��4�I���_$s����s����7UW~T**��� �tB�y�SFS*�T����}0���o����|�s>U�8�	�@5���V����%�)���TT��m�>s�9^AG(m~���+BJT�m�]�E�=��7����$���'����l����U���P���%x��B2W>P�By�o�7������Y���o_d$������C�WI�"e�r��J����U&��Q����x�a�t�_b��n����W�d�`��9��-����YA���	����x1����!b�=To��7g�]����H����{"gs.
��uJB6�2>6������H��}�"=`yneT6�/V�Yn�y���`�,d�<���*�G|���0����x�����(r����m�V�0Sb����|��Y�|*
/0�����5pL�������9�XW��D����3�*=�q��|������|����������'{��
����ae��	��+��.O���FAU�LZ�Uy�&q�'yM�o�@�^�z+yr����o�n����#Z����#_r�K����3�I�Bv�����u����;���������|�d��N��A�����X_<
�	�K�P��shn�����`o�xo@z�'z�K��X��S��*�uC6\-����e
4?�`k{��li��UOP����W�}n��������V�r2%�L���y�3���N�		Tj�:�mA�K���`���
��������D��L�C�Z>��x����R���"�4 2��+����������Yj�p�u�|�vJ�=|�KCl�9��n1H:���K[CFF������|�x����}UO:��i����n��l0�kF0�F�K�Q����(��.j�P$����"�:��:~�^�n���/�.Df�e�0���I���o��a[������"�$<����4���-n�YW�j+���t ���n��a�m��v6/�Z��7��E�Tt�����&��:�J����)�?o�smg����mv�j�Q��=�n�CP��G��"��:
�����ZT�q�QPtqgR8�I<���25����'��B+�G&��f��`�L����l:�N�](�#j�"v(�� 6zoG2�RM��l	o1�w����8I����xt��d5�� �p@���^%	���h&���1\����-����`�Z'h����O��@���=���\����l��N]��$wU6��nk]>�%��5�����mHe������u9H�'	������
����������m���6��Q���� ���s�i��(��\W��gc���) ]��v'�Ev��Z����b� �tks��r6:e����6�L��4��uksG
	���qkSz���Q\K��xj�XOt
|{k���X����'d�7Ff��-�J���tCkG�I�J��t �M;O��I:�n�4�k
���>QP$�����+ ��zj��$%����F���gK����=��k�G���7����>7�`hF���c!����/;�#�~�|�<=�is{��.(�5FwV�g_������)�:��bK��fK�����$��s}���tCz�v<8����
s��7Q�d:>�KO��$�����x\2�v{_�����6b���>mH�#�>k�X�d��m���4�
g�_�F�93	���C,�4�n#����^1��8�Yv�&����shPU��o���`�
�9��dfi�XB�}4�d1R-bvu@}��K�O+�gK'�n�]����Eh�`�Q(�2EQL�Dp�B�N�<�pG_B�Kz�"%h?K��<�	�g��!��U<*x&z�	>����^����vs��%��������P�d�������������"�b�����
e��R���T�f?�%AE�VC�>�JCL��B��m��
4�%��}��-�_�um/�+�;XFP�l�/p�j�=�W�^@�Q��@��u��a��m���kyzR��I���h���a7�����,hC"'`�����.����s�(I+����N�&� �����{�v��>�>��R�2�S�������t��e����p�������K����Q�I-:/����J�@������Q�3M!*LK�y�����m��� ad��r�4�?�nStY7}��!�������4?����gcL��.��t��>������\<�Z���
�[k��
x-6g}SQ(�t������qBv�H�o��mm��;�k^�y��-�5��S��S�����z��}x��^/�/4�O5���X����~��9�X�o�9��U�|��
	������B���o��#zt�O4��n�Sr�3~5�|2�������Bk��^�n`�w�a"^$�b�t��`���������8�A�����\�If#�Y��#{;�S�%
�Zic��f�y��b!�E	%/�*�k�������h}+j~�������
v7-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gzapplication/gzip; name=v7-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gzDownload
v7-0005-squash-Add-pytest-suite-for-OAuth.patch.gzapplication/gzip; name=v7-0005-squash-Add-pytest-suite-for-OAuth.patch.gzDownload
#56Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#55)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 4/27/23 10:35, Jacob Champion wrote:

> Moving forward, the first thing I plan to tackle is asynchronous
> operation, so that polling clients can still operate sanely. If I can
> find a good solution there, the conversations about possible extension
> points should get a lot easier.

Attached is patchset v8, now with concurrency and 300% more cURL! And
many more questions to answer.

This is a full reimplementation of the client-side OAuth flow. It's an
async-first engine built on top of cURL's multi handles. All pending
operations are multiplexed into a single epoll set (the "altsock"),
which is exposed through PQsocket() for the duration of the OAuth flow.
Clients return to the flow on their next call to PQconnectPoll().

Andrey and Mahendrakar: you'll probably be interested in the
conn->async_auth() callback, conn->altsock, and the pg_fe_run_oauth_flow
entry point. This is intended to be the foundation for alternative flows.

I've kept the blocking iddawc implementation for comparison, but if
you're running the tests against it, be aware that the asynchronous
tests will, predictably, hang. Skip them with `py.test -k 'not
asynchronous'`.

= The Good =

- PQconnectPoll() is no longer indefinitely blocked on a single
connection's OAuth handshake. (iddawc doesn't appear to have any
asynchronous primitives in its API, unless I've missed something crucial.)

- We now have a swappable entry point. Alternative flows could be
implemented by applications without forcing clients to redesign their
polling loops (PQconnect* should just work as expected).

- We have full control over corner cases in our default flow. Debugging
failures is much nicer, with explanations of exactly what has gone wrong
and where, compared to iddawc's "I_ERROR" messages.

- cURL is not a lightweight library by any means, but we're no longer
bundling things like web servers that we're not going to use.

= The Bad =

- Unsurprisingly, there's a lot more code now that we're implementing
the flow ourselves. The client patch has tripled in size, and we'd be on
the hook for implementing and staying current with the RFCs.

- The client implementation is currently epoll-/Linux-specific. I think
kqueue shouldn't be too much trouble for the BSDs, but it's even more
code to maintain.

- Some clients in the wild (psycopg2/psycopg) suppress all notifications
during PQconnectPoll(). To accommodate them, I no longer use the
noticeHooks for communicating the user code, but that means we have to
come up with some other way to let applications override the printing to
stderr. Something like the OpenSSL decryption callback, maybe?

= The Ugly =

- Unless someone is aware of some amazing Winsock magic, I'm pretty sure
the multiplexed-socket approach is dead in the water on Windows. I think
the strategy there probably has to be a background thread plus a fake
"self-pipe" (loopback socket) for polling... which may be controversial?

- We have to figure out how to initialize cURL in a thread-safe manner.
Newer versions of libcurl and OpenSSL improve upon this situation, but I
don't think there's a way to check at compile time whether the
initialization strategy is safe or not (and even at runtime, I think
there may be a chicken-and-egg problem with the API, where it's not safe
to check for thread-safe initialization until after you've safely
initialized).

= Next Steps =

There are so many TODOs in the cURL implementation: it's been a while
since I've done any libcurl programming, it all needs to be hardened,
and I need to comb through the relevant specs again. But I don't want to
gold-plate it if this overall approach is unacceptable. So, questions
for the gallery:

1) Would starting up a background thread (pooled or not) be acceptable
on Windows? Alternatively, does anyone know enough Winsock deep magic to
combine multiple pending events into one (selectable!) socket?

2) If a background thread is acceptable on one platform, does it make
more sense to use one on every platform and just have synchronous code
everywhere? Or should we use a threadless async implementation when we can?

3) Is the current conn->async_auth() entry point sufficient for an
application to implement the Microsoft flows discussed upthread?

4) Would we want to try to require a new enough cURL/OpenSSL to avoid
thread safety problems during initialization, or do we need to introduce
some API equivalent to PQinitOpenSSL?

5) Does this maintenance tradeoff (full control over the client vs. a
large amount of RFC-governed code) seem like it could be okay?

Thanks,
--Jacob

Attachments:

since-v7.diff.txttext/plain; charset=UTF-8; name=since-v7.diff.txtDownload
1:  c3698bbc3d = 1:  6434d90105 common/jsonapi: support FRONTEND clients
2:  0cd726fd55 < -:  ---------- libpq: add OAUTHBEARER SASL mechanism
-:  ---------- > 2:  13ddf2b6b3 libpq: add OAUTHBEARER SASL mechanism
3:  77889eb986 = 3:  0b0b0f2b33 backend: add OAUTHBEARER SASL mechanism
4:  573a2ca3bc ! 4:  0e8ddadcbf Add pytest suite for OAuth
    @@ Commit message
         dependencies will be installed into ./venv for you. See the README for
         more details.
     
    +    For iddawc, asynchronous tests still hang, as expected. Bad-interval
    +    tests fail because iddawc apparently doesn't care that the interval is
    +    bad.
    +
      ## src/test/python/.gitignore (new) ##
     @@
     +__pycache__/
    @@ src/test/python/client/conftest.py (new)
     +import threading
     +
     +import psycopg2
    ++import psycopg2.extras
     +import pytest
     +
     +import pq3
    @@ src/test/python/client/conftest.py (new)
     +    def run(self):
     +        try:
     +            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
    ++            self._pump_async(conn)
     +            conn.close()
     +        except Exception as e:
     +            self.exception = e
    @@ src/test/python/client/conftest.py (new)
     +            self.exception = None
     +            raise e
     +
    ++    def _pump_async(self, conn):
    ++        """
    ++        Polls a psycopg2 connection until it's completed. (Synchronous
    ++        connections will work here too; they'll just immediately return OK.)
    ++        """
    ++        psycopg2.extras.wait_select(conn)
    ++
     +
     +@pytest.fixture
     +def accept(server_socket):
    @@ src/test/python/client/conftest.py (new)
     +        return sock, client
     +
     +    yield factory
    -+    client.check_completed()
    ++
    ++    if client is not None:
    ++        client.check_completed()
     +
     +
     +@pytest.fixture
    @@ src/test/python/client/test_oauth.py (new)
     @@
     +#
     +# Copyright 2021 VMware, Inc.
    ++# Portions Copyright 2023 Timescale, Inc.
     +# SPDX-License-Identifier: PostgreSQL
     +#
     +
    @@ src/test/python/client/test_oauth.py (new)
     +import threading
     +import time
     +import urllib.parse
    ++from numbers import Number
     +
     +import psycopg2
     +import pytest
    @@ src/test/python/client/test_oauth.py (new)
     +    finish_handshake(conn)
     +
     +
    ++class RawResponse(str):
    ++    """
    ++    Returned by registered endpoint callbacks to take full control of the
    ++    response. Usually, return values are converted to JSON; a RawResponse body
    ++    will be passed to the client as-is, allowing endpoint implementations to
    ++    issue invalid JSON.
    ++    """
    ++
    ++    pass
    ++
    ++
     +class OpenIDProvider(threading.Thread):
     +    """
     +    A thread that runs a mock OpenID provider server.
    @@ src/test/python/client/test_oauth.py (new)
     +
     +    def run(self):
     +        try:
    -+            self.server.serve_forever()
    ++            # XXX socketserver.serve_forever() has a serious architectural
    ++            # issue: its select loop wakes up every `poll_interval` seconds to
    ++            # see if the server is shutting down. The default, 500 ms, only lets
    ++            # us run two tests every second. But the faster we go, the more CPU
    ++            # we burn unnecessarily...
    ++            self.server.serve_forever(poll_interval=0.01)
     +        except Exception as e:
     +            self.exception = e
     +
    @@ src/test/python/client/test_oauth.py (new)
     +            self.endpoint_paths = {}
     +            self._endpoints = {}
     +
    ++            # Provide a standard discovery document by default; tests can
    ++            # override it.
    ++            self.register_endpoint(
    ++                None,
    ++                "GET",
    ++                "/.well-known/openid-configuration",
    ++                self._default_discovery_handler,
    ++            )
    ++
     +        def register_endpoint(self, name, method, path, func):
     +            if method not in self._endpoints:
     +                self._endpoints[method] = {}
     +
     +            self._endpoints[method][path] = func
    -+            self.endpoint_paths[name] = path
    ++
    ++            if name is not None:
    ++                self.endpoint_paths[name] = path
     +
     +        def endpoint(self, method, path):
     +            if method not in self._endpoints:
    @@ src/test/python/client/test_oauth.py (new)
     +
     +            return self._endpoints[method].get(path)
     +
    ++        def _default_discovery_handler(self, headers, params):
    ++            doc = {
    ++                "issuer": self.issuer,
    ++                "response_types_supported": ["token"],
    ++                "subject_types_supported": ["public"],
    ++                "id_token_signing_alg_values_supported": ["RS256"],
    ++                "grant_types_supported": [
    ++                    "urn:ietf:params:oauth:grant-type:device_code"
    ++                ],
    ++            }
    ++
    ++            for name, path in self.endpoint_paths.items():
    ++                doc[name] = self.issuer + path
    ++
    ++            return 200, doc
    ++
     +    class _Server(http.server.HTTPServer):
     +        def handle_error(self, request, addr):
     +            self.shutdown_request(request)
    @@ src/test/python/client/test_oauth.py (new)
     +    class _Handler(http.server.BaseHTTPRequestHandler):
     +        timeout = BLOCKING_TIMEOUT
     +
    -+        def _discovery_handler(self, headers, params):
    -+            oauth = self.server.oauth
    -+
    -+            doc = {
    -+                "issuer": oauth.issuer,
    -+                "response_types_supported": ["token"],
    -+                "subject_types_supported": ["public"],
    -+                "id_token_signing_alg_values_supported": ["RS256"],
    -+            }
    -+
    -+            for name, path in oauth.endpoint_paths.items():
    -+                doc[name] = oauth.issuer + path
    -+
    -+            return 200, doc
    -+
     +        def _handle(self, *, params=None, handler=None):
     +            oauth = self.server.oauth
     +            assert self.headers["Host"] == oauth.host
    @@ src/test/python/client/test_oauth.py (new)
     +                    handler is not None
     +                ), f"no registered endpoint for {self.command} {self.path}"
     +
    -+            code, resp = handler(self.headers, params)
    ++            result = handler(self.headers, params)
    ++
    ++            if len(result) == 2:
    ++                headers = {"Content-Type": "application/json"}
    ++                code, resp = result
    ++            else:
    ++                code, headers, resp = result
     +
     +            self.send_response(code)
    -+            self.send_header("Content-Type", "application/json")
    ++            for h, v in headers.items():
    ++                self.send_header(h, v)
     +            self.end_headers()
     +
    -+            resp = json.dumps(resp)
    -+            resp = resp.encode("utf-8")
    -+            self.wfile.write(resp)
    ++            if resp is not None:
    ++                if not isinstance(resp, RawResponse):
    ++                    resp = json.dumps(resp)
    ++                resp = resp.encode("utf-8")
    ++                self.wfile.write(resp)
     +
     +            self.close_connection = True
     +
     +        def do_GET(self):
    -+            if self.path == "/.well-known/openid-configuration":
    -+                self._handle(handler=self._discovery_handler)
    -+                return
    -+
     +            self._handle()
     +
     +        def _request_body(self):
    @@ src/test/python/client/test_oauth.py (new)
     +@pytest.mark.parametrize("secret", [None, "", "hunter2"])
     +@pytest.mark.parametrize("scope", [None, "", "openid email"])
     +@pytest.mark.parametrize("retries", [0, 1])
    ++@pytest.mark.parametrize(
    ++    "asynchronous",
    ++    [
    ++        pytest.param(False, id="synchronous"),
    ++        pytest.param(True, id="asynchronous"),
    ++    ],
    ++)
     +def test_oauth_with_explicit_issuer(
    -+    capfd, accept, openid_provider, retries, scope, secret
    ++    capfd, accept, openid_provider, asynchronous, retries, scope, secret
     +):
     +    client_id = secrets.token_hex()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        oauth_client_id=client_id,
     +        oauth_client_secret=secret,
     +        oauth_scope=scope,
    ++        async_=asynchronous,
     +    )
     +
     +    device_code = secrets.token_hex()
    @@ src/test/python/client/test_oauth.py (new)
     +        assert expected in stderr
     +
     +
    -+def test_oauth_requires_client_id(accept, openid_provider):
    -+    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    -+        # Do not set a client ID; this should cause a client error after the
    -+        # server asks for OAUTHBEARER and the client tries to contact the
    -+        # issuer.
    -+    )
    -+
    ++def expect_disconnected_handshake(sock):
    ++    """
    ++    Helper for any tests that expect the client to disconnect immediately after
    ++    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
    ++    the client to have an oauth_issuer set so that it doesn't try to go through
    ++    discovery.
    ++    """
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            # Initiate a handshake.
    @@ src/test/python/client/test_oauth.py (new)
     +            # The client should disconnect at this point.
     +            assert not conn.read()
     +
    ++
    ++def test_oauth_requires_client_id(accept, openid_provider):
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.issuer,
    ++        # Do not set a client ID; this should cause a client error after the
    ++        # server asks for OAUTHBEARER and the client tries to contact the
    ++        # issuer.
    ++    )
    ++
    ++    expect_disconnected_handshake(sock)
    ++
     +    expected_error = "no oauth_client_id is set"
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
    @@ src/test/python/client/test_oauth.py (new)
     +            finish_handshake(conn)
     +
     +
    ++def alt_patterns(*patterns):
    ++    """
    ++    Just combines multiple alternative regexes into one. It's not very efficient
    ++    but IMO it's easier to read and maintain.
    ++    """
    ++    pat = ""
    ++
    ++    for p in patterns:
    ++        if pat:
    ++            pat += "|"
    ++        pat += f"({p})"
    ++
    ++    return pat
    ++
    ++
     +@pytest.mark.parametrize(
     +    "failure_mode, error_pattern",
     +    [
     +        pytest.param(
    -+            {
    -+                "error": "invalid_client",
    -+                "error_description": "client authentication failed",
    -+            },
    -+            r"client authentication failed \(invalid_client\)",
    ++            (
    ++                400,
    ++                {
    ++                    "error": "invalid_client",
    ++                    "error_description": "client authentication failed",
    ++                },
    ++            ),
    ++            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
     +            id="authentication failure with description",
     +        ),
     +        pytest.param(
    -+            {"error": "invalid_request"},
    -+            r"\(invalid_request\)",
    ++            (400, {"error": "invalid_request"}),
    ++            r"failed to obtain device authorization: \(invalid_request\)",
     +            id="invalid request without description",
     +        ),
     +        pytest.param(
    -+            {},
    -+            r"failed to obtain device authorization",
    ++            (400, {}),
    ++            alt_patterns(
    ++                r'failed to parse token error response: field "error" is missing',
    ++                r"failed to obtain device authorization: \(iddawc error I_ERROR_PARAM\)",
    ++            ),
     +            id="broken error response",
     +        ),
    ++        pytest.param(
    ++            (200, RawResponse(r'{ "interval": 3.5.8 }')),
    ++            alt_patterns(
    ++                r"failed to parse device authorization: Token .* is invalid",
    ++                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
    ++            ),
    ++            id="non-numeric interval",
    ++        ),
    ++        pytest.param(
    ++            (200, RawResponse(r'{ "interval": 08 }')),
    ++            alt_patterns(
    ++                r"failed to parse device authorization: Token .* is invalid",
    ++                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
    ++            ),
    ++            id="invalid numeric interval",
    ++        ),
     +    ],
     +)
     +def test_oauth_device_authorization_failures(
    @@ src/test/python/client/test_oauth.py (new)
     +    # any unprotected state mutation here.
     +
     +    def authorization_endpoint(headers, params):
    -+        return 400, failure_mode
    ++        return failure_mode
     +
     +    openid_provider.register_endpoint(
     +        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    with sock:
    -+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            # Initiate a handshake, which should result in the above endpoints
    -+            # being called.
    -+            startup = pq3.recv1(conn, cls=pq3.Startup)
    -+            assert startup.proto == pq3.protocol(3, 0)
    ++    expect_disconnected_handshake(sock)
     +
    -+            pq3.send(
    -+                conn,
    -+                pq3.types.AuthnRequest,
    -+                type=pq3.authn.SASL,
    -+                body=[b"OAUTHBEARER", b""],
    -+            )
    ++    # Now make sure the client correctly failed.
    ++    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
    ++        client.check_completed()
     +
    -+            # The client should not continue the connection due to the hardcoded
    -+            # provider failure; we disconnect here.
    ++
    ++Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
    ++
    ++
    ++@pytest.mark.parametrize(
    ++    "bad_value",
    ++    [
    ++        pytest.param({"device_code": 3}, id="object"),
    ++        pytest.param([1, 2, 3], id="array"),
    ++        pytest.param("some string", id="string"),
    ++        pytest.param(4, id="numeric"),
    ++        pytest.param(False, id="boolean"),
    ++        pytest.param(None, id="null"),
    ++        pytest.param(Missing, id="missing"),
    ++    ],
    ++)
    ++@pytest.mark.parametrize(
    ++    "field_name,ok_type,required",
    ++    [
    ++        ("device_code", str, True),
    ++        ("user_code", str, True),
    ++        ("verification_uri", str, True),
    ++        ("interval", int, False),
    ++    ],
    ++)
    ++def test_oauth_device_authorization_bad_json_schema(
    ++    accept, openid_provider, field_name, ok_type, required, bad_value
    ++):
    ++    # To make the test matrix easy, just skip the tests that aren't actually
    ++    # interesting (field of the correct type, missing optional field).
    ++    if bad_value is Missing and not required:
    ++        pytest.skip("not interesting: optional field")
    ++    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
    ++        pytest.skip("not interesting: correct type")
    ++
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.issuer,
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
    ++
    ++    # Set up our provider callbacks.
    ++    # NOTE that these callbacks will be called on a background thread. Don't do
    ++    # any unprotected state mutation here.
    ++
    ++    def authorization_endpoint(headers, params):
    ++        # Begin with an acceptable base response...
    ++        resp = {
    ++            "device_code": "my-device-code",
    ++            "user_code": "my-user-code",
    ++            "interval": 0,
    ++            "verification_uri": "https://example.com",
    ++            "expires_in": 5,
    ++        }
    ++
    ++        # ...then tweak it so the client fails.
    ++        if bad_value is Missing:
    ++            del resp[field_name]
    ++        else:
    ++            resp[field_name] = bad_value
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
    ++    )
    ++
    ++    def token_endpoint(headers, params):
    ++        assert False, "token endpoint was invoked unexpectedly"
    ++
    ++    openid_provider.register_endpoint(
    ++        "token_endpoint", "POST", "/token", token_endpoint
    ++    )
    ++
    ++    expect_disconnected_handshake(sock)
     +
     +    # Now make sure the client correctly failed.
    ++    if bad_value is Missing:
    ++        error_pattern = f'field "{field_name}" is missing'
    ++    elif ok_type == str:
    ++        error_pattern = f'field "{field_name}" must be a string'
    ++    elif ok_type == int:
    ++        error_pattern = f'field "{field_name}" must be a number'
    ++    else:
    ++        assert False, "update error_pattern for new failure mode"
    ++
    ++    # XXX iddawc doesn't really check for problems in the device authorization
    ++    # response, leading to this patchwork:
    ++    if field_name == "verification_uri":
    ++        error_pattern = alt_patterns(
    ++            error_pattern,
    ++            "issuer did not provide a verification URI",
    ++        )
    ++    elif field_name == "user_code":
    ++        error_pattern = alt_patterns(
    ++            error_pattern,
    ++            "issuer did not provide a user code",
    ++        )
    ++    else:
    ++        error_pattern = alt_patterns(
    ++            error_pattern,
    ++            r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
    ++        )
    ++
     +    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
     +        client.check_completed()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    "failure_mode, error_pattern",
     +    [
     +        pytest.param(
    -+            {
    -+                "error": "expired_token",
    -+                "error_description": "the device code has expired",
    -+            },
    -+            r"the device code has expired \(expired_token\)",
    ++            (
    ++                400,
    ++                {
    ++                    "error": "expired_token",
    ++                    "error_description": "the device code has expired",
    ++                },
    ++            ),
    ++            r"failed to obtain access token: the device code has expired \(expired_token\)",
     +            id="expired token with description",
     +        ),
     +        pytest.param(
    -+            {"error": "access_denied"},
    -+            r"\(access_denied\)",
    ++            (400, {"error": "access_denied"}),
    ++            r"failed to obtain access token: \(access_denied\)",
     +            id="access denied without description",
     +        ),
     +        pytest.param(
    -+            {},
    -+            r"OAuth token retrieval failed",
    -+            id="broken error response",
    ++            (400, {}),
    ++            alt_patterns(
    ++                r'failed to parse token error response: field "error" is missing',
    ++                r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
    ++            ),
    ++            id="empty error response",
    ++        ),
    ++        pytest.param(
    ++            (200, {}, {}),
    ++            alt_patterns(
    ++                r"failed to parse access token response: no content type was provided",
    ++                r"failed to obtain access token: \(iddawc error I_ERROR\)",
    ++            ),
    ++            id="missing content type",
    ++        ),
    ++        pytest.param(
    ++            (200, {"Content-Type": "text/plain"}, {}),
    ++            alt_patterns(
    ++                r"failed to parse access token response: unexpected content type",
    ++                r"failed to obtain access token: \(iddawc error I_ERROR\)",
    ++            ),
    ++            id="wrong content type",
     +        ),
     +    ],
     +)
    @@ src/test/python/client/test_oauth.py (new)
     +    )
     +
     +    retry_lock = threading.Lock()
    ++    final_sent = False
     +
     +    def token_endpoint(headers, params):
     +        with retry_lock:
    -+            nonlocal retries
    ++            nonlocal retries, final_sent
     +
     +            # If the test wants to force the client to retry, return an
     +            # authorization_pending response and decrement the retry count.
    @@ src/test/python/client/test_oauth.py (new)
     +                retries -= 1
     +                return 400, {"error": "authorization_pending"}
     +
    -+        return 400, failure_mode
    ++            # We should only return our failure_mode response once; any further
    ++            # requests indicate that the client isn't correctly bailing out.
    ++            assert not final_sent, "client continued after token error"
    ++
    ++            final_sent = True
    ++
    ++        return failure_mode
     +
     +    openid_provider.register_endpoint(
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    with sock:
    -+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            # Initiate a handshake, which should result in the above endpoints
    -+            # being called.
    -+            startup = pq3.recv1(conn, cls=pq3.Startup)
    -+            assert startup.proto == pq3.protocol(3, 0)
    ++    expect_disconnected_handshake(sock)
     +
    -+            pq3.send(
    -+                conn,
    -+                pq3.types.AuthnRequest,
    -+                type=pq3.authn.SASL,
    -+                body=[b"OAUTHBEARER", b""],
    -+            )
    ++    # Now make sure the client correctly failed.
    ++    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
    ++        client.check_completed()
     +
    -+            # The client should not continue the connection due to the hardcoded
    -+            # provider failure; we disconnect here.
    ++
    ++@pytest.mark.parametrize(
    ++    "bad_value",
    ++    [
    ++        pytest.param({"device_code": 3}, id="object"),
    ++        pytest.param([1, 2, 3], id="array"),
    ++        pytest.param("some string", id="string"),
    ++        pytest.param(4, id="numeric"),
    ++        pytest.param(False, id="boolean"),
    ++        pytest.param(None, id="null"),
    ++        pytest.param(Missing, id="missing"),
    ++    ],
    ++)
    ++@pytest.mark.parametrize(
    ++    "field_name,ok_type,required",
    ++    [
    ++        ("access_token", str, True),
    ++        ("token_type", str, True),
    ++    ],
    ++)
    ++def test_oauth_token_bad_json_schema(
    ++    accept, openid_provider, field_name, ok_type, required, bad_value
    ++):
    ++    # To make the test matrix easy, just skip the tests that aren't actually
    ++    # interesting (field of the correct type, missing optional field).
    ++    if bad_value is Missing and not required:
    ++        pytest.skip("not interesting: optional field")
    ++    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
    ++        pytest.skip("not interesting: correct type")
    ++
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.issuer,
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
    ++
    ++    # Set up our provider callbacks.
    ++    # NOTE that these callbacks will be called on a background thread. Don't do
    ++    # any unprotected state mutation here.
    ++
    ++    def authorization_endpoint(headers, params):
    ++        resp = {
    ++            "device_code": "my-device-code",
    ++            "user_code": "my-user-code",
    ++            "interval": 0,
    ++            "verification_uri": "https://example.com",
    ++            "expires_in": 5,
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
    ++    )
    ++
    ++    def token_endpoint(headers, params):
    ++        # Begin with an acceptable base response...
    ++        resp = {
    ++            "access_token": secrets.token_urlsafe(),
    ++            "token_type": "bearer",
    ++        }
    ++
    ++        # ...then tweak it so the client fails.
    ++        if bad_value is Missing:
    ++            del resp[field_name]
    ++        else:
    ++            resp[field_name] = bad_value
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "token_endpoint", "POST", "/token", token_endpoint
    ++    )
    ++
    ++    expect_disconnected_handshake(sock)
     +
     +    # Now make sure the client correctly failed.
    ++    error_pattern = "failed to parse access token response: "
    ++    if bad_value is Missing:
    ++        error_pattern += f'field "{field_name}" is missing'
    ++    elif ok_type == str:
    ++        error_pattern += f'field "{field_name}" must be a string'
    ++    elif ok_type == int:
    ++        error_pattern += f'field "{field_name}" must be a number'
    ++    else:
    ++        assert False, "update error_pattern for new failure mode"
    ++
    ++    # XXX iddawc is fairly silent on the topic.
    ++    error_pattern = alt_patterns(
    ++        error_pattern,
    ++        r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
    ++    )
    ++
     +    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
     +        client.check_completed()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +
     +
     +@pytest.mark.parametrize(
    ++    "bad_response,expected_error",
    ++    [
    ++        pytest.param(
    ++            (200, {"Content-Type": "text/plain"}, {}),
    ++            r'failed to parse OpenID discovery document: unexpected content type "text/plain"',
    ++            id="not JSON",
    ++        ),
    ++        pytest.param(
    ++            (200, {}, {}),
    ++            r"failed to parse OpenID discovery document: no content type was provided",
    ++            id="no Content-Type",
    ++        ),
    ++        pytest.param(
    ++            (204, {}, None),
    ++            r"failed to fetch OpenID discovery document: unexpected response code 204",
    ++            id="no content",
    ++        ),
    ++        pytest.param(
    ++            (301, {"Location": "https://localhost/"}, None),
    ++            r"failed to fetch OpenID discovery document: unexpected response code 301",
    ++            id="redirection",
    ++        ),
    ++        pytest.param(
    ++            (404, {}),
    ++            r"failed to fetch OpenID discovery document: unexpected response code 404",
    ++            id="not found",
    ++        ),
    ++        pytest.param(
    ++            (200, RawResponse("blah\x00blah")),
    ++            r"failed to parse OpenID discovery document: response contains embedded NULLs",
    ++            id="NULL bytes in document",
    ++        ),
    ++        pytest.param(
    ++            (200, 123),
    ++            r"failed to parse OpenID discovery document: top-level element must be an object",
    ++            id="scalar at top level",
    ++        ),
    ++        pytest.param(
    ++            (200, []),
    ++            r"failed to parse OpenID discovery document: top-level element must be an object",
    ++            id="array at top level",
    ++        ),
    ++        pytest.param(
    ++            (200, RawResponse("{")),
    ++            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
    ++            id="unclosed object",
    ++        ),
    ++        pytest.param(
    ++            (200, RawResponse(r'{ "hello": ] }')),
    ++            r"failed to parse OpenID discovery document.* Expected JSON value",
    ++            id="bad array",
    ++        ),
    ++        pytest.param(
    ++            (200, {"issuer": 123}),
    ++            r'failed to parse OpenID discovery document: field "issuer" must be a string',
    ++            id="non-string issuer",
    ++        ),
    ++        pytest.param(
    ++            (200, {"issuer": ["something"]}),
    ++            r'failed to parse OpenID discovery document: field "issuer" must be a string',
    ++            id="issuer array",
    ++        ),
    ++        pytest.param(
    ++            (200, {"issuer": {}}),
    ++            r'failed to parse OpenID discovery document: field "issuer" must be a string',
    ++            id="issuer object",
    ++        ),
    ++        pytest.param(
    ++            (200, {"grant_types_supported": 123}),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="scalar grant types field",
    ++        ),
    ++        pytest.param(
    ++            (200, {"grant_types_supported": {}}),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="object grant types field",
    ++        ),
    ++        pytest.param(
    ++            (200, {"grant_types_supported": [123]}),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="non-string grant types",
    ++        ),
    ++        pytest.param(
    ++            (200, {"grant_types_supported": ["something", 123]}),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="non-string grant types later in the list",
    ++        ),
    ++        pytest.param(
    ++            (200, {"grant_types_supported": ["something", {}]}),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="object grant types later in the list",
    ++        ),
    ++        pytest.param(
    ++            (200, {"grant_types_supported": ["something", ["something"]]}),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="embedded array grant types later in the list",
    ++        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                {
    ++                    "grant_types_supported": ["something"],
    ++                    "token_endpoint": "https://example.com/",
    ++                    "issuer": 123,
    ++                },
    ++            ),
    ++            r'failed to parse OpenID discovery document: field "issuer" must be a string',
    ++            id="non-string issuer after other valid fields",
    ++        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                {
    ++                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
    ++                    "issuer": 123,
    ++                },
    ++            ),
    ++            r'failed to parse OpenID discovery document: field "issuer" must be a string',
    ++            id="non-string issuer after other ignored fields",
    ++        ),
    ++        pytest.param(
    ++            (200, {"token_endpoint": "https://example.com/"}),
    ++            r'failed to parse OpenID discovery document: field "issuer" is missing',
    ++            id="missing issuer",
    ++        ),
    ++        pytest.param(
    ++            (200, {"issuer": "https://example.com/"}),
    ++            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
    ++            id="missing token endpoint",
    ++        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                {
    ++                    "issuer": "https://example.com",
    ++                    "token_endpoint": "https://example.com/token",
    ++                    "device_authorization_endpoint": "https://example.com/dev",
    ++                },
    ++            ),
    ++            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
    ++            id="missing device code grants",
    ++        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                {
    ++                    "issuer": "https://example.com",
    ++                    "token_endpoint": "https://example.com/token",
    ++                    "grant_types_supported": [
    ++                        "urn:ietf:params:oauth:grant-type:device_code"
    ++                    ],
    ++                },
    ++            ),
    ++            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
    ++            id="missing device_authorization_endpoint",
    ++        ),
    ++        #
    ++        # Exercise HTTP-level failures by breaking the protocol. Note that the
    ++        # error messages here are implementation-dependent.
    ++        #
    ++        pytest.param(
    ++            (1000, {}),
    ++            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
    ++            id="invalid HTTP response code",
    ++        ),
    ++        pytest.param(
    ++            (200, {"Content-Length": -1}, {}),
    ++            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
    ++            id="bad HTTP Content-Length",
    ++        ),
    ++    ],
    ++)
    ++def test_oauth_discovery_provider_failure(
    ++    accept, openid_provider, bad_response, expected_error
    ++):
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.issuer,
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
    ++
    ++    def failing_discovery_handler(headers, params):
    ++        return bad_response
    ++
    ++    openid_provider.register_endpoint(
    ++        None,
    ++        "GET",
    ++        "/.well-known/openid-configuration",
    ++        failing_discovery_handler,
    ++    )
    ++
    ++    expect_disconnected_handshake(sock)
    ++
    ++    # XXX iddawc doesn't differentiate...
    ++    expected_error = alt_patterns(
    ++        expected_error,
    ++        r"failed to fetch OpenID discovery document \(iddawc error I_ERROR(_PARAM)?\)",
    ++    )
    ++
    ++    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    ++        client.check_completed()
    ++
    ++
    ++@pytest.mark.parametrize(
     +    "sasl_err,resp_type,resp_payload,expected_error",
     +    [
     +        pytest.param(
    @@ src/test/python/client/test_oauth.py (new)
     +            "server sent additional OAuth data",
     +            id="broken server: SASL success after error",
     +        ),
    ++        pytest.param(
    ++            {"status": "invalid_request"},
    ++            pq3.types.AuthnRequest,
    ++            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
    ++            "duplicate SASL authentication request",
    ++            id="broken server: SASL reinitialization after error",
    ++        ),
     +    ],
     +)
     +def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
5:  4490d029b5 ! 5:  65c319a6a3 squash! Add pytest suite for OAuth
    @@ .cirrus.yml: task:
          sysctl kern.corefile='/tmp/cores/%N.%P.core'
        setup_additional_packages_script: |
     -    #pkg install -y ...
    -+    pkg install -y iddawc
    ++    pkg install -y curl
      
        # NB: Intentionally build without -Dllvm. The freebsd image size is already
        # large enough to make VM startup slow, and even without llvm freebsd
    @@ .cirrus.yml: task:
              --buildtype=debug \
              -Dcassert=true -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
              -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    -+        -Doauth=enabled \
    ++        -Doauth=curl \
              -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
              build
          EOF
    @@ .cirrus.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        --with-libxslt
        --with-llvm
        --with-lz4
    -+  --with-oauth
    ++  --with-oauth=curl
        --with-pam
        --with-perl
        --with-python
    @@ .cirrus.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
      
      LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
        -Dllvm=enabled
    -+  -Doauth=enabled
    ++  -Doauth=curl
        -Duuid=e2fs
      
      
    @@ .cirrus.yml: task:
     -    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
     +    apt-get update
     +    DEBIAN_FRONTEND=noninteractive apt-get -y install \
    -+      libiddawc-dev \
    -+      libiddawc-dev:i386 \
    ++      libcurl4-openssl-dev \
    ++      libcurl4-openssl-dev:i386 \
     +      python3-venv \
      
        matrix:
    @@ .cirrus.yml: task:
     -    #apt-get update
     -    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
     +    apt-get update
    -+    DEBIAN_FRONTEND=noninteractive apt-get -y install libiddawc-dev
    ++    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
      
        ###
        # Test that code can be built with gcc/clang without warnings
    @@ meson.build: foreach test_dir : tests
     +      test_group = test_dir['name']
     +      test_output = test_result_dir / test_group / kind
     +      test_kwargs = {
    -+        'protocol': 'tap',
    ++        #'protocol': 'tap',
     +        'suite': test_group,
     +        'timeout': 1000,
     +        'depends': test_deps,
    @@ meson.build: foreach test_dir : tests
     +      pytest = venv_path / 'bin' / 'py.test'
     +      test_command = [
     +        pytest,
    ++        # Avoid running these tests against an existing database.
    ++        '--temp-instance', test_output / 'tmp_check',
    ++
     +        # FIXME pytest-tap's stream feature accidentally suppresses errors that
     +        # are critical for debugging:
     +        #     https://github.com/python-tap/pytest-tap/issues/30
    -+        # Fix -- or maybe don't use the meson TAP protocol for now?
    -+        '--tap-stream',
    -+        # Avoid running these tests against an existing database.
    -+        '--temp-instance', test_output / 'tmp_check',
    ++        # Don't use the meson TAP protocol for now...
    ++        #'--tap-stream',
     +      ]
     +
     +      foreach pyt : t['tests']
    @@ src/test/python/client/test_oauth.py: import pq3
     +# The client tests need libpq to have been compiled with OAuth support; skip
     +# them otherwise.
     +pytestmark = pytest.mark.skipif(
    -+    os.getenv("with_oauth") == "no",
    ++    os.getenv("with_oauth") == "none",
     +    reason="OAuth client tests require --with-oauth support",
     +)
     +
    @@ src/test/python/meson.build (new)
     +      './test_pq3.py',
     +    ],
     +    'env': {
    -+      'with_oauth': oauth.found() ? 'yes' : 'no',
    ++      'with_oauth': oauth_library,
     +
     +      # Point to the default database; the tests will create their own databases
     +      # as needed.
Attachments:
- v8-0001-common-jsonapi-support-FRONTEND-clients.patch.gz
- v8-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz
- v8-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz
- v8-0004-Add-pytest-suite-for-OAuth.patch.gz
- v8-0005-squash-Add-pytest-suite-for-OAuth.patch.gz
#57 Daniele Varrazzo
daniele.varrazzo@gmail.com
In reply to: Jacob Champion (#56)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sat, 20 May 2023 at 00:01, Jacob Champion <jchampion@timescale.com> wrote:

- Some clients in the wild (psycopg2/psycopg) suppress all notifications
during PQconnectPoll().

If there is anything we can improve in psycopg please reach out.

-- Daniele

#58 Jacob Champion
jchampion@timescale.com
In reply to: Daniele Varrazzo (#57)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, May 23, 2023 at 4:22 AM Daniele Varrazzo
<daniele.varrazzo@gmail.com> wrote:

On Sat, 20 May 2023 at 00:01, Jacob Champion <jchampion@timescale.com> wrote:

- Some clients in the wild (psycopg2/psycopg) suppress all notifications
during PQconnectPoll().

If there is anything we can improve in psycopg please reach out.

Will do, thank you! But in this case, I think there's nothing to
improve in psycopg -- in fact, it highlighted the problem with my
initial design, and now I think the notice processor will never be an
appropriate avenue for communication of the user code.

The biggest issue is that there's a chicken-and-egg situation: if
you're using the synchronous PQconnect* API, you can't override the
notice hooks while the handshake is in progress, because you don't
have a connection handle yet. The second problem is that there are a
bunch of parameters coming back from the server (user code,
verification URI, expiration time) that the application may choose to
display or use, and communicating those pieces in a (probably already
translated) flat text string is a pretty hostile API.

So I think we'll probably need to provide a global handler API,
similar to the passphrase hook we currently provide, that can receive
these pieces separately and assemble them however the application
desires. The hard part will be to avoid painting ourselves into a
corner, because this particular information is specific to the device
authorization flow, and if we ever want to add other flows into libpq,
we'll probably not want to add even more hooks.
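
To make the shape concrete, here is a rough sketch in the test suite's Python. It is purely illustrative: none of these names exist in libpq or the patch, and the real thing would be a C hook registered the way PQsetSSLKeyPassHook_OpenSSL is today. The point is that the flow-specific fields arrive as structured data, and the application formats them itself instead of receiving a pre-translated flat string.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DeviceAuthPrompt:
    # Fields the issuer hands back during the device authorization flow.
    verification_uri: str
    user_code: str
    expires_in: int  # seconds until the user code expires

# A process-global hook, loosely analogous to libpq's existing
# passphrase-hook registration. (Hypothetical names throughout.)
_prompt_hook: Optional[Callable[[DeviceAuthPrompt], None]] = None

def set_device_auth_hook(hook: Callable[[DeviceAuthPrompt], None]) -> None:
    global _prompt_hook
    _prompt_hook = hook

def deliver_prompt(prompt: DeviceAuthPrompt) -> str:
    """Invoke the application's hook if one is set; otherwise fall back
    to a flat default message."""
    if _prompt_hook is not None:
        _prompt_hook(prompt)
        return ""
    return f"Visit {prompt.verification_uri} and enter the code: {prompt.user_code}"
```

An application that wants a GUI dialog, or different wording, registers its hook once before connecting and never has to scrape the notice stream.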

Thanks,
--Jacob

#59 Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#56)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 5/19/23 15:01, Jacob Champion wrote:

But I don't want to
gold-plate it if this overall approach is unacceptable. So, questions
for the gallery:

1) Would starting up a background thread (pooled or not) be acceptable
on Windows? Alternatively, does anyone know enough Winsock deep magic to
combine multiple pending events into one (selectable!) socket?

2) If a background thread is acceptable on one platform, does it make
more sense to use one on every platform and just have synchronous code
everywhere? Or should we use a threadless async implementation when we can?

3) Is the current conn->async_auth() entry point sufficient for an
application to implement the Microsoft flows discussed upthread?

4) Would we want to try to require a new enough cURL/OpenSSL to avoid
thread safety problems during initialization, or do we need to introduce
some API equivalent to PQinitOpenSSL?

5) Does this maintenance tradeoff (full control over the client vs. a
large amount of RFC-governed code) seem like it could be okay?

There was additional interest at PGCon, so I've registered this in the
commitfest.

Potential reviewers should be aware that the current implementation
requires Linux (or, more specifically, epoll), as the cfbot shows. But
if you have any opinions on the above questions, those will help me
tackle the other platforms. :D

Thanks!
--Jacob

#60 Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#56)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sat, May 20, 2023 at 10:01 AM Jacob Champion <jchampion@timescale.com> wrote:

- The client implementation is currently epoll-/Linux-specific. I think
kqueue shouldn't be too much trouble for the BSDs, but it's even more
code to maintain.

I guess you also need a fallback that uses plain old POSIX poll()? I
see you're not just using epoll but also timerfd. Could that be
converted to plain old timeout bookkeeping? That should be enough to
get every other Unix and *possibly* also Windows to work with the same
code path.
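
For what it's worth, the timeout-bookkeeping idea fits in a few lines (sketched in Python for brevity; the names are illustrative, not from the patch): instead of arming a timerfd, remember the nearest absolute deadline and feed the remaining time to poll() as its timeout.

```python
import select
import time

def wait_for_events(poller, deadlines):
    """Wait on the sockets registered with `poller` (a select.poll()),
    waking early if any deadline passes. Deadlines are absolute times
    on the monotonic clock, in seconds."""
    timeout_ms = None  # block indefinitely if there are no deadlines
    if deadlines:
        remaining = min(deadlines) - time.monotonic()
        timeout_ms = max(0, int(remaining * 1000))
    events = poller.poll(timeout_ms)
    now = time.monotonic()
    expired = [d for d in deadlines if d <= now]
    return events, expired
```

The caller then treats an empty event list with a nonempty `expired` list as "the timer fired", which is all the timerfd was providing.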

- Unless someone is aware of some amazing Winsock magic, I'm pretty sure
the multiplexed-socket approach is dead in the water on Windows. I think
the strategy there probably has to be a background thread plus a fake
"self-pipe" (loopback socket) for polling... which may be controversial?

I am not a Windows user or hacker, but there are certainly several
ways to multiplex sockets. First there is the WSAEventSelect() +
WaitForMultipleObjects() approach that latch.c uses. It has the
advantage that it allows socket readiness to be multiplexed with
various other things that use Windows "events". But if you don't need
that, ie you *only* need readiness-based wakeup for a bunch of sockets
and no other kinds of fd or object, you can use winsock's plain old
select() or its fairly faithful poll() clone called WSAPoll(). It
looks a bit like that'd be true here if you could kill the timerfd?

It's a shame to write modern code using select(), but you can find
lots of shouting all over the internet about WSAPoll()'s defects, most
famously the cURL guys[1] whose blog is widely cited, so people still
do it. Possibly some good news on that front: by my reading of the
docs, it looks like that problem was fixed in Windows 10 2004[2] which
itself is by now EOL, so all systems should have the fix? I suspect
that means that, finally, you could probably just use the same poll()
code path for Unix (when epoll is not available) *and* Windows these
days, making porting a lot easier. But I've never tried it, so I
don't know what other problems there might be. Another thing people
complain about is the lack of socketpair() or similar in winsock which
means you unfortunately can't easily make anonymous
select/poll-compatible local sockets, but that doesn't seem to be
needed here.

[1]: https://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/
[2]: https://learn.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-wsapoll

#61 Jacob Champion
jchampion@timescale.com
In reply to: Thomas Munro (#60)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jun 30, 2023 at 9:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:

On Sat, May 20, 2023 at 10:01 AM Jacob Champion <jchampion@timescale.com> wrote:

- The client implementation is currently epoll-/Linux-specific. I think
kqueue shouldn't be too much trouble for the BSDs, but it's even more
code to maintain.

I guess you also need a fallback that uses plain old POSIX poll()?

The use of the epoll API here is to combine several sockets into one,
not to actually call epoll_wait() itself. kqueue descriptors should
let us do the same, IIUC.
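
That descriptor-nesting trick is easy to demonstrate (Linux-only; Python's select module here, standing in for the C code): several sockets hide behind one epoll instance, and the epoll fd itself is what an ordinary select()/poll() loop (or a PQsocket() caller) watches.

```python
import select
import socket

a, b = socket.socketpair()
ep = select.epoll()
ep.register(a.fileno(), select.EPOLLIN)

# With nothing to read, the epoll fd does not report as readable...
idle, _, _ = select.select([ep.fileno()], [], [], 0)

# ...but once the inner socket has data, the single epoll fd wakes a
# plain select() call, without the outer loop knowing about `a` at all.
b.send(b"x")
ready, _, _ = select.select([ep.fileno()], [], [], 1)
```

kqueue descriptors compose the same way on the BSDs, which is why the approach should port there without changing the outer interface.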

I see you're not just using epoll but also timerfd. Could that be
converted to plain old timeout bookkeeping? That should be enough to
get every other Unix and *possibly* also Windows to work with the same
code path.

I might be misunderstanding your suggestion, but I think our internal
bookkeeping is orthogonal to that. The use of timerfd here allows us
to forward libcurl's timeout requirements up to the top-level
PQsocket(). As an example, libcurl is free to tell us to call it again
in ten milliseconds, and we have to make sure a nonblocking client
calls us again after that elapses; otherwise they might hang waiting
for data that's not coming.

- Unless someone is aware of some amazing Winsock magic, I'm pretty sure
the multiplexed-socket approach is dead in the water on Windows. I think
the strategy there probably has to be a background thread plus a fake
"self-pipe" (loopback socket) for polling... which may be controversial?

I am not a Windows user or hacker, but there are certainly several
ways to multiplex sockets. First there is the WSAEventSelect() +
WaitForMultipleObjects() approach that latch.c uses.

I don't think that strategy plays well with select() clients, though
-- it requires a handle array, and we've just got the one socket.

My goal is to maintain compatibility with existing PQconnectPoll()
applications, where the only way we get to communicate with the client
is through the PQsocket() for the connection. Ideally, you shouldn't
have to completely rewrite your application loop just to make use of
OAuth. (I assume a requirement like that would be a major roadblock to
committing this -- and if that's not a correct assumption, then I
guess my job gets a lot easier?)

It's a shame to write modern code using select(), but you can find
lots of shouting all over the internet about WSAPoll()'s defects, most
famously the cURL guys[1] whose blog is widely cited, so people still
do it.

Right -- that's basically the root of my concern. I can't guarantee
that existing Windows clients out there are all using
WaitForMultipleObjects(). From what I can tell, whatever we hand up
through PQsocket() has to be fully Winsock-/select-compatible.

Another thing people
complain about is the lack of socketpair() or similar in winsock which
means you unfortunately can't easily make anonymous
select/poll-compatible local sockets, but that doesn't seem to be
needed here.

For the background-thread implementation, it probably would be. I've
been looking at libevent (BSD-licensed) and its socketpair hack for
Windows...

Thanks!
--Jacob

#62Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#61)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Jul 6, 2023 at 9:00 AM Jacob Champion <jchampion@timescale.com> wrote:

My goal is to maintain compatibility with existing PQconnectPoll()
applications, where the only way we get to communicate with the client
is through the PQsocket() for the connection. Ideally, you shouldn't
have to completely rewrite your application loop just to make use of
OAuth. (I assume a requirement like that would be a major roadblock to
committing this -- and if that's not a correct assumption, then I
guess my job gets a lot easier?)

Ah, right, I get it.

I guess there are a couple of ways to do it if we give up the goal of
no-code-change-for-the-client:

1. Generalised PQsocket(), so that a client can call something like:

int PQpollset(const PGconn *conn, struct pollfd fds[], int fds_size,
int *nfds, int *timeout_ms);

That way, libpq could tell you about which events it would like to
wait for on which fds, and when it would like you to call it back due
to timeout, and you can either pass that information directly to
poll() or WSAPoll() or some equivalent interface (we don't care, we
just gave you the info you need), or combine it in obvious ways with
whatever else you want to multiplex with in your client program.

2. Convert those events into new libpq events like 'I want you to
call me back in 100ms', and 'call me back when socket #42 has data',
and let clients handle that by managing their own poll set etc. (This
is something I've speculated about to support more efficient
postgres_fdw shard query multiplexing; gotta figure out how to get
multiple connections' events into one WaitEventSet...)

I guess there is a practical middle ground where client code on
systems that have epoll/kqueue can use OAUTHBEARER without any code
change, and the feature is available on other systems too but you'll
have to change your client code to use one of those interfaces or else
you get an error 'coz we just can't do it. Or, more likely in the
first version, you just can't do it at all... Doesn't seem that bad
to me.

BTW I will happily do the epoll->kqueue port work if necessary.

#63Jacob Champion
jchampion@timescale.com
In reply to: Thomas Munro (#62)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I guess there are a couple of ways to do it if we give up the goal of
no-code-change-for-the-client:

1. Generalised PQsocket(), so that a client can call something like:

int PQpollset(const PGconn *conn, struct pollfd fds[], int fds_size,
int *nfds, int *timeout_ms);

That way, libpq could tell you about which events it would like to
wait for on which fds, and when it would like you to call it back due
to timeout, and you can either pass that information directly to
poll() or WSAPoll() or some equivalent interface (we don't care, we
just gave you the info you need), or combine it in obvious ways with
whatever else you want to multiplex with in your client program.

I absolutely wanted something like this while I was writing the code
(it would have made things much easier), but I'd feel bad adding that
much complexity to the API if the vast majority of connections use
exactly one socket. Are there other use cases in libpq where you think
this expanded API could be useful? Maybe to lift some of the existing
restrictions for PQconnectPoll(), add async DNS resolution, or
something?

Couple complications I can think of at the moment:
1. Clients using persistent pollsets will have to remove old
descriptors, presumably by tracking the delta since the last call,
which might make for a rough transition. Bookkeeping bugs probably
wouldn't show up unless they used OAuth in their test suites. With the
current model, that's more hidden and libpq takes responsibility for
getting it right.
2. In the future, we might need to think carefully around situations
where we want multiple PGConn handles to share descriptors (e.g.
multiplexed backend connections). I avoid tricky questions at the
moment by assigning only one connection per multi pool.

2. Convert those events into new libpq events like 'I want you to
call me back in 100ms', and 'call me back when socket #42 has data',
and let clients handle that by managing their own poll set etc. (This
is something I've speculated about to support more efficient
postgres_fdw shard query multiplexing; gotta figure out how to get
multiple connections' events into one WaitEventSet...)

Something analogous to libcurl's socket and timeout callbacks [1],
then? Or is there an existing libpq API you were thinking about using?

I guess there is a practical middle ground where client code on
systems that have epoll/kqueue can use OAUTHBEARER without any code
change, and the feature is available on other systems too but you'll
have to change your client code to use one of those interfaces or else
you get an error 'coz we just can't do it.

That's a possibility -- if your platform is able to do it nicely,
might as well use it. (In a similar vein, I'd personally vote against
having every platform use a background thread, even if we decided to
implement it for Windows.)

Or, more likely in the
first version, you just can't do it at all... Doesn't seem that bad
to me.

Any initial opinions on whether it's worse or better than a worker thread?

BTW I will happily do the epoll->kqueue port work if necessary.

And I will happily take you up on that; thanks!

--Jacob

[1]: https://curl.se/libcurl/c/CURLMOPT_SOCKETFUNCTION.html

#64Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#63)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:

On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:

2. Convert those events into new libpq events like 'I want you to
call me back in 100ms', and 'call me back when socket #42 has data',
and let clients handle that by managing their own poll set etc. (This
is something I've speculated about to support more efficient
postgres_fdw shard query multiplexing; gotta figure out how to get
multiple connections' events into one WaitEventSet...)

Something analogous to libcurl's socket and timeout callbacks [1],
then? Or is there an existing libpq API you were thinking about using?

Yeah. Libpq already has an event concept. I did some work on getting
long-lived WaitEventSet objects to be used in various places, some of
which got committed[1], but not yet the parts related to postgres_fdw
(which uses libpq connections to talk to other PostgreSQL servers, and
runs into the limitations of PQsocket()). Horiguchi-san had the good
idea of extending the event system to cover socket changes, but I
haven't actually tried it yet. One day.

Or, more likely in the
first version, you just can't do it at all... Doesn't seem that bad
to me.

Any initial opinions on whether it's worse or better than a worker thread?

My vote is that it's perfectly fine to make a new feature that only
works on some OSes. If/when someone wants to work on getting it going
on Windows/AIX/Solaris (that's the complete set of no-epoll, no-kqueue
OSes we target), they can write the patch.

[1]: /messages/by-id/CA+hUKGJAC4Oqao=qforhNey20J8CiG2R=oBPqvfR0vOJrFysGw@mail.gmail.com

#65Jacob Champion
jchampion@timescale.com
In reply to: Thomas Munro (#64)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Jul 6, 2023 at 1:48 PM Thomas Munro <thomas.munro@gmail.com> wrote:

On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:

Something analogous to libcurl's socket and timeout callbacks [1],
then? Or is there an existing libpq API you were thinking about using?

Yeah. Libpq already has an event concept.

Thanks -- I don't know how I never noticed libpq-events.h before.

Per-connection events (or callbacks) might bring up the same
chicken-and-egg situation discussed above, with the notice hook. We'll
be fine as long as PQconnectStart is guaranteed to return before the
PQconnectPoll engine gets to authentication, and it looks like that's
true with today's implementation, which returns pessimistically at
several points instead of just trying to continue the exchange. But I
don't know if that's intended as a guarantee for the future. At the
very least we would have to pin that implementation detail.

Or, more likely in the
first version, you just can't do it at all... Doesn't seem that bad
to me.

Any initial opinions on whether it's worse or better than a worker thread?

My vote is that it's perfectly fine to make a new feature that only
works on some OSes. If/when someone wants to work on getting it going
on Windows/AIX/Solaris (that's the complete set of no-epoll, no-kqueue
OSes we target), they can write the patch.

Okay. I'm curious to hear others' thoughts on that, too, if anyone's lurking.

Thanks!
--Jacob

#66Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#65)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thanks Jacob for making progress on this.

3) Is the current conn->async_auth() entry point sufficient for an
application to implement the Microsoft flows discussed upthread?

Please confirm my understanding of the flow is correct:
1. Client calls PQconnectStart.
- The client doesn't know yet what is the issuer and the scope.
- Parameters are strings, so callback is not provided yet.
2. Client gets the PGconn from the PQconnectStart return value and updates
conn->async_auth to its own callback.
3. Client polls PQconnectPoll and checks conn->sasl_state until the
value is SASL_ASYNC
4. Client accesses conn->oauth_issuer and conn->oauth_scope and uses
those info to trigger the token flow.
5. Expectations on async_auth:
a. It returns PGRES_POLLING_READING while token acquisition is going on
b. It returns PGRES_POLLING_OK and sets conn->sasl_state->token
when token acquisition succeeds.
6. Is the client supposed to do anything with the altsock parameter?

Is the above accurate understanding?

If yes, it looks workable with a couple of improvements I think would be nice:
1. Currently, oauth_exchange function sets conn->async_auth =
pg_fe_run_oauth_flow and starts Device Code flow automatically when
receiving challenge and metadata from the server.
There probably should be a way for the client to prevent default
Device Code flow from triggering.
2. The current signature and expectations of the async_auth function
seem to be tightly coupled with the internal implementation:
- Pieces of information need to be picked up and updated in different
places in the PGconn structure.
- The function is expected to return PostgresPollingStatusType, which
is used to communicate internal state to the client.
Would it make sense to separate the internal callback used to
communicate with the Device Code flow from the client-facing API?
I.e. introduce a new client-facing structure and enum to facilitate the
callback and its return value.

-----------
On a separate note:
The backend code currently spawns an external command for token validation.
As we discussed before, an extension hook would be a more efficient
extensibility option.
We see clients make 10k+ connections using OAuth tokens per minute to
our service, and starting external processes would be too much overhead
here.

-----------

5) Does this maintenance tradeoff (full control over the client vs. a
large amount of RFC-governed code) seem like it could be okay?

It's nice for psql to have the Device Code flow; it can be made even
more convenient with refresh token support. It also lets clients on
resource-constrained devices authenticate with Client Credentials (app
secret) without bringing in more dependencies.

In most other cases, upstream PostgreSQL drivers written in higher
level languages have libraries / abstractions to implement OAUTH flows
for the platforms they support.


#67Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#63)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:

On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:

BTW I will happily do the epoll->kqueue port work if necessary.

And I will happily take you up on that; thanks!

Some initial hacking, about 2 coffees' worth:
https://github.com/macdice/postgres/commits/oauth-kqueue

This compiles on FreeBSD and macOS, but I didn't have time to figure
out all your Python testing magic so I don't know if it works yet and
it's still red on CI... one thing I wondered about is the *altsock =
timerfd part which I couldn't do.

The situation on macOS is a little odd: the man page says EVFILT_TIMER
is not implemented. But clearly it is, we can read the source code as
I had to do to find out which unit of time it defaults to[1] (huh,
Apple's github repo for Darwin appears to have been archived recently
-- no more source code updates? that'd be a shame!), and it works
exactly as expected in simple programs. So I would just assume it
works until we see evidence otherwise. (We already use a couple of
other things on macOS more or less by accident because configure finds
them, where they are undocumented or undeclared.)

[1]: https://github.com/apple/darwin-xnu/blob/main/bsd/kern/kern_event.c#L1345

#68Jacob Champion
jchampion@timescale.com
In reply to: Andrey Chudnovsky (#66)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jul 7, 2023 at 2:16 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:

Please confirm my understanding of the flow is correct:
1. Client calls PQconnectStart.
- The client doesn't know yet what is the issuer and the scope.

Right. (Strictly speaking it doesn't even know that OAuth will be used
for the connection, yet, though at some point we'll be able to force
the issue with e.g. `require_auth=oauth`. That's not currently
implemented.)

- Parameters are strings, so callback is not provided yet.
2. Client gets PgConn from PQconnectStart return value and updates
conn->async_auth to its own callback.

This is where some sort of official authn callback registration (see
above reply to Daniele) would probably come in handy.

3. Client polls PQconnectPoll and checks conn->sasl_state until the
value is SASL_ASYNC

In my head, the client's custom callback would always be invoked
during the call to PQconnectPoll, rather than making the client do
work in between calls. That way, a client can use custom flows even
with a synchronous PQconnectdb().

4. Client accesses conn->oauth_issuer and conn->oauth_scope and uses
those info to trigger the token flow.

Right.

5. Expectations on async_auth:
a. It returns PGRES_POLLING_READING while token acquisition is going on
b. It returns PGRES_POLLING_OK and sets conn->sasl_state->token
when token acquisition succeeds.

Yes. Though the token should probably be returned through some
explicit part of the callback, now that you mention it...

6. Is the client supposed to do anything with the altsock parameter?

The callback needs to set the altsock up with a select()able
descriptor, which wakes up the client when more work is ready to be
done. Without that, you can't handle multiple connections on a single
thread.

If yes, it looks workable with a couple of improvements I think would be nice:
1. Currently, oauth_exchange function sets conn->async_auth =
pg_fe_run_oauth_flow and starts Device Code flow automatically when
receiving challenge and metadata from the server.
There probably should be a way for the client to prevent default
Device Code flow from triggering.

Agreed. I'd like the client to be able to override this directly.

2. The current signature and expectations from async_auth function
seems to be tightly coupled with the internal implementation:
- Pieces of information need to be picked and updated in different
places in the PgConn structure.
- Function is expected to return PostgresPollingStatusType which
is used to communicate internal state to the client.
Would it make sense to separate the internal callback used to
communicate with Device Code flow from client facing API?
I.e. introduce a new client facing structure and enum to facilitate
callback and its return value.

Yep, exactly right! I just wanted to check that the architecture
*looked* sufficient before pulling it up into an API.

On a separate note:
The backend code currently spawns an external command for token validation.
As we discussed before, an extension hook would be a more efficient
extensibility option.
We see clients make 10k+ connections using OAuth tokens per minute to
our service, and starting external processes would be too much overhead
here.

+1. I'm curious, though -- what language do you expect to use to write
a production validator hook? Surely not low-level C...?

5) Does this maintenance tradeoff (full control over the client vs. a
large amount of RFC-governed code) seem like it could be okay?

It's nice for psql to have Device Code flow. Can be made even more
convenient with refresh tokens support.
And for clients on resource constrained devices to be able to
authenticate with Client Credentials (app secret) without bringing
more dependencies.

In most other cases, upstream PostgreSQL drivers written in higher
level languages have libraries / abstractions to implement OAUTH flows
for the platforms they support.

Yeah, I'm really interested in seeing which existing high-level flows
can be mixed in through a driver. Trying not to get too far ahead of
myself :D

Thanks for the review!

--Jacob

#69Jacob Champion
jchampion@timescale.com
In reply to: Thomas Munro (#67)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jul 7, 2023 at 6:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:

On Fri, Jul 7, 2023 at 4:57 AM Jacob Champion <jchampion@timescale.com> wrote:

On Wed, Jul 5, 2023 at 3:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:

BTW I will happily do the epoll->kqueue port work if necessary.

And I will happily take you up on that; thanks!

Some initial hacking, about 2 coffees' worth:
https://github.com/macdice/postgres/commits/oauth-kqueue

This compiles on FreeBSD and macOS, but I didn't have time to figure
out all your Python testing magic so I don't know if it works yet and
it's still red on CI...

This is awesome, thank you!

I need to look into the CI more, but it looks like the client tests
are passing, which is a good sign. (I don't understand why the
server-side tests are failing on FreeBSD, but they shouldn't be using
the libpq code at all, so I think your kqueue implementation is in the
clear. Cirrus doesn't have the logs from the server-side test failures
anywhere -- probably a bug in my Meson patch.)

one thing I wondered about is the *altsock =
timerfd part which I couldn't do.

I did that because I'm not entirely sure that libcurl is guaranteed to
have cleared out all its sockets from the mux, and I didn't want to
invite spurious wakeups. I should probably verify whether or not
that's possible. If so, we could just make that code resilient to
early wakeup, so that it matters less, or set up a second kqueue that
only holds the timer if that turns out to be unacceptable?

The situation on macOS is a little odd: the man page says EVFILT_TIMER
is not implemented. But clearly it is, we can read the source code as
I had to do to find out which unit of time it defaults to[1] (huh,
Apple's github repo for Darwin appears to have been archived recently
-- no more source code updates? that'd be a shame!), and it works
exactly as expected in simple programs. So I would just assume it
works until we see evidence otherwise. (We already use a couple of
other things on macOS more or less by accident because configure finds
them, where they are undocumented or undeclared.)

Huh. Something to keep an eye on... might be a problem with older versions?

Thanks!
--Jacob

#70Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#69)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jul 10, 2023 at 4:50 PM Jacob Champion <jchampion@timescale.com> wrote:

I don't understand why the
server-side tests are failing on FreeBSD, but they shouldn't be using
the libpq code at all, so I think your kqueue implementation is in the
clear.

Oh, whoops, it's just the missed CLOEXEC flag in the final patch. (If
the write side of the pipe gets copied around, it hangs open and the
validator never sees the "end" of the token.) I'll switch the logic
around to set the flag on the write side instead of unsetting it on
the read side.

I have a WIP patch that passes tests on FreeBSD, which I'll clean up
and post Sometime Soon. macOS builds now but still fails before it
runs the test; looks like it's having trouble finding OpenSSL during
`pip install` of the test modules...

Thanks!
--Jacob

#71Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#70)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 12, 2023 at 5:50 AM Jacob Champion <jchampion@timescale.com> wrote:

Oh, whoops, it's just the missed CLOEXEC flag in the final patch. (If
the write side of the pipe gets copied around, it hangs open and the
validator never sees the "end" of the token.) I'll switch the logic
around to set the flag on the write side instead of unsetting it on
the read side.

Oops, sorry about that. Glad to hear it's all working!

(FTR my parenthetical note about macOS/XNU sources on Github was a
false alarm: the "apple" account has stopped publishing a redundant
copy of that, but "apple-oss-distributions" is the account I should
have been looking at and it is live. I guess it migrated at some
point, or something. Phew.)

#72Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Thomas Munro (#71)
Re: [PoC] Federated Authn/z with OAUTHBEARER

- Parameters are strings, so callback is not provided yet.
2. Client gets PgConn from PQconnectStart return value and updates
conn->async_auth to its own callback.

This is where some sort of official authn callback registration (see
above reply to Daniele) would probably come in handy.

+1

3. Client polls PQconnectPoll and checks conn->sasl_state until the
value is SASL_ASYNC

In my head, the client's custom callback would always be invoked
during the call to PQconnectPoll, rather than making the client do
work in between calls. That way, a client can use custom flows even
with a synchronous PQconnectdb().

The way I see this API working, the asynchronous client needs at least 2
PQconnectPoll calls:
1. To be notified of what the authentication requirements are and get
parameters.
2. When it acquires the token, the callback is used to inform libpq of the
token and return PGRES_POLLING_OK.

For the synchronous client, the callback implementation would need to be
aware of the fact that the synchronous implementation invokes the
callback frequently, and be implemented accordingly.

Bottom line, I don't see much of a problem with the current proposal. It's
just that the callback's way of knowing that an OAuth token is requested,
and of getting the parameters, relies on PQconnectPoll being invoked after
the corresponding parameters of the conn object are populated.

5. Expectations on async_auth:
a. It returns PGRES_POLLING_READING while token acquisition is going on
b. It returns PGRES_POLLING_OK and sets conn->sasl_state->token
when token acquisition succeeds.

Yes. Though the token should probably be returned through some
explicit part of the callback, now that you mention it...

6. Is the client supposed to do anything with the altsock parameter?

The callback needs to set the altsock up with a select()able
descriptor, which wakes up the client when more work is ready to be
done. Without that, you can't handle multiple connections on a single
thread.

Ok, thanks for clarification.

On a separate note:
The backend code currently spawns an external command for token validation.
As we discussed before, an extension hook would be a more efficient
extensibility option.
We see clients make 10k+ connections using OAuth tokens per minute to
our service, and starting external processes would be too much overhead
here.

+1. I'm curious, though -- what language do you expect to use to write
a production validator hook? Surely not low-level C...?

For the server-side code, it would likely be identity providers publishing
extensions to validate their tokens. Those can be written in C, or
nowadays in Rust using pgrx, which is developer-friendly enough in my
opinion.

Yeah, I'm really interested in seeing which existing high-level flows
can be mixed in through a driver. Trying not to get too far ahead of
myself :D

I can think of the following as the most common:
1. Authorization Code with PKCE. This is by far the most common for user
login flows. It requires spinning up a browser and listening on a redirect
URL/port. Most high-level platforms have libraries to do both.
2. Client Certificates. This requires an identity-provider-specific
library to construct and sign the token. The providers publish SDKs to do
that for most common app development platforms.

#73Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#70)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jul 11, 2023 at 10:50 AM Jacob Champion
<jchampion@timescale.com> wrote:

I have a WIP patch that passes tests on FreeBSD, which I'll clean up
and post Sometime Soon. macOS builds now but still fails before it
runs the test; looks like it's having trouble finding OpenSSL during
`pip install` of the test modules...

Hi Thomas,

v9 folds in your kqueue implementation (thanks again!) and I have a
quick question to check my understanding:

+       case CURL_POLL_REMOVE:
+           /*
+            * We don't know which of these is currently registered, perhaps
+            * both, so we try to remove both.  This means we need to tolerate
+            * ENOENT below.
+            */
+           EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE, 0, 0, 0);
+           nev++;
+           EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE, 0, 0, 0);
+           nev++;
+           break;

We're not setting EV_RECEIPT for these -- is that because none of the
filters we're using are EV_CLEAR, and so it doesn't matter if we
accidentally pull pending events off the queue during the kevent() call?

v9 also improves the Cirrus debugging experience and fixes more issues
on macOS, so the tests should be green there now. The final patch in the
series works around what I think is a build bug in psycopg2 2.9 [1] for
the BSDs+meson.

Thanks,
--Jacob

[1]: https://github.com/psycopg/psycopg2/issues/1599

Attachments:

since-v8.diff.txt (text/plain; charset=US-ASCII)
1:  6434d90105 = 1:  9c6a340119 common/jsonapi: support FRONTEND clients
2:  13ddf2b6b3 ! 2:  8072d0416e libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
         development headers. Pass `curl` or `iddawc` to --with-oauth/-Doauth
         during configuration.
     
    +    Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!
    +
         Several TODOs:
         - don't retry forever if the server won't accept our token
         - perform several sanity checks on the OAuth issuer's responses
    @@ meson.build: if meson.version().version_compare('>=0.57')
            'plperl': perl_dep,
     
      ## meson_options.txt ##
    -@@ meson_options.txt: option('lz4', type : 'feature', value: 'auto',
    +@@ meson_options.txt: option('lz4', type: 'feature', value: 'auto',
      option('nls', type: 'feature', value: 'auto',
    -   description: 'native language support')
    +   description: 'Native language support')
      
     +option('oauth', type : 'combo', choices : ['auto', 'none', 'curl', 'iddawc'],
     +  value: 'auto',
     +  description: 'use LIB for OAuth 2.0 support (curl, iddawc)')
     +
    - option('pam', type : 'feature', value: 'auto',
    -   description: 'build with PAM support')
    + option('pam', type: 'feature', value: 'auto',
    +   description: 'PAM support')
      
     
      ## src/Makefile.global.in ##
    @@ src/Makefile.global.in: with_ldap	= @with_ldap@
     
      ## src/common/meson.build ##
     @@ src/common/meson.build: common_sources_frontend_static += files(
    - # For the server build of pgcommon, depend on lwlocknames_h, because at least
    - # cryptohash_openssl.c, hmac_openssl.c depend on it. That's arguably a
    + # least cryptohash_openssl.c, hmac_openssl.c depend on it.
    + # controldata_utils.c depends on wait_event_types_h. That's arguably a
      # layering violation, but ...
     +#
     +# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
    @@ src/common/meson.build: common_sources_frontend_static += files(
      pgcommon_variants = {
        '_srv': internal_lib_args + {
     +    'include_directories': include_directories('.'),
    -     'sources': common_sources + [lwlocknames_h],
    +     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
          'dependencies': [backend_common_code],
        },
        '': default_lib_args + {
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +#include <curl/curl.h>
     +#include <math.h>
    ++#ifdef HAVE_SYS_EPOLL_H
     +#include <sys/epoll.h>
     +#include <sys/timerfd.h>
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++#include <sys/event.h>
    ++#endif
     +#include <unistd.h>
     +
     +#include "common/jsonapi.h"
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +#include "libpq-int.h"
     +#include "mb/pg_wchar.h"
     +
    ++#ifdef HAVE_SYS_EVENT_H
    ++/* macOS doesn't define the time unit macros, but uses milliseconds by default. */
    ++#ifndef NOTE_MSECONDS
    ++#define NOTE_MSECONDS 0
    ++#endif
    ++#endif
    ++
     +/*
     + * Parsed JSON Representations
     + *
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	OAuthStep	step;		/* where are we in the flow? */
     +
    ++#ifdef HAVE_SYS_EPOLL_H
     +	int			timerfd;	/* a timerfd for signaling async timeouts */
    ++#endif
     +	pgsocket	mux;		/* the multiplexer socket containing all descriptors
     +							   tracked by cURL, plus the timerfd */
     +	CURLM	   *curlm;		/* top-level multi handle for cURL operations */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (actx->mux != PGINVALID_SOCKET)
     +		close(actx->mux);
    ++#ifdef HAVE_SYS_EPOLL_H
     +	if (actx->timerfd >= 0)
     +		close(actx->timerfd);
    ++#endif
     +
     +	free(actx);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
     + * select() on instead of the Postgres socket during OAuth negotiation.
     + *
    -+ * This is just an epoll set abstracting multiple other descriptors. A timerfd
    -+ * is always part of the set; it's just disabled when we're not using it.
    ++ * This is just an epoll set or kqueue abstracting multiple other descriptors.
    ++ * A timerfd is always part of the set when using epoll; it's just disabled
    ++ * when we're not using it.
     + */
     +static bool
     +setup_multiplexer(struct async_ctx *actx)
     +{
    ++#ifdef HAVE_SYS_EPOLL_H
     +	struct epoll_event ev = {.events = EPOLLIN};
     +
     +	actx->mux = epoll_create1(EPOLL_CLOEXEC);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	}
     +
     +	return true;
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++	actx->mux = kqueue();
    ++	if (actx->mux < 0)
    ++	{
    ++		actx_error(actx, "failed to create kqueue: %m");
    ++		return false;
    ++	}
    ++
    ++	return true;
    ++#endif
    ++
    ++	actx_error(actx, "here's a nickel kid, get yourself a better computer");
    ++	return false;
     +}
     +
     +/*
    -+ * Adds and removes sockets from the multiplexer epoll set, as directed by the
    ++ * Adds and removes sockets from the multiplexer set, as directed by the
     + * cURL multi handle.
     + */
     +static int
     +register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
     +				void *socketp)
     +{
    ++#ifdef HAVE_SYS_EPOLL_H
     +	struct async_ctx *actx = ctx;
     +	struct epoll_event ev = {0};
     +	int			res;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +		return -1;
     +	}
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++	struct async_ctx *actx = ctx;
    ++	struct kevent ev[2] = {{0}};
    ++	struct kevent ev_out[2];
    ++	struct timespec timeout = {0};
    ++	int			nev = 0;
    ++	int			res;
    ++
    ++	switch (what)
    ++	{
    ++		case CURL_POLL_IN:
    ++			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD, 0, 0, 0);
    ++			nev++;
    ++			break;
    ++
    ++		case CURL_POLL_OUT:
    ++			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD, 0, 0, 0);
    ++			nev++;
    ++			break;
    ++
    ++		case CURL_POLL_INOUT:
    ++			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD, 0, 0, 0);
    ++			nev++;
    ++			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD, 0, 0, 0);
    ++			nev++;
    ++			break;
    ++
    ++		case CURL_POLL_REMOVE:
    ++			/*
    ++			 * We don't know which of these is currently registered, perhaps
    ++			 * both, so we try to remove both.  This means we need to tolerate
    ++			 * ENOENT below.
    ++			 */
    ++			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE, 0, 0, 0);
    ++			nev++;
    ++			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE, 0, 0, 0);
    ++			nev++;
    ++			break;
    ++
    ++		default:
    ++			actx_error(actx, "unknown cURL socket operation (%d)", what);
    ++			return -1;
    ++	}
    ++
    ++	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
    ++	if (res < 0)
    ++	{
    ++		actx_error(actx, "could not modify kqueue: %m");
    ++		return -1;
    ++	}
    ++
    ++	/*
    ++	 * We can't use the simple errno version of kevent, because we need to skip
    ++	 * over ENOENT while still allowing a second change to be processed.  So we
    ++	 * need a longer-form error checking loop.
    ++	 */
    ++	for (int i = 0; i < res; ++i)
    ++	{
    ++		if (ev_out[i].flags & EV_ERROR && ev_out[i].data != ENOENT)
    ++		{
    ++			errno = ev_out[i].data;
    ++			switch (what)
    ++			{
    ++			case CURL_POLL_REMOVE:
    ++				actx_error(actx, "could not delete from kqueue: %m");
    ++				break;
    ++			default:
    ++				actx_error(actx, "could not add to kqueue: %m");
    ++			}
    ++			return -1;
    ++		}
    ++	}
    ++#endif
     +
     +	return 0;
     +}
     +
     +/*
    -+ * Adds or removes timeouts from the multiplexer epoll set, as directed by the
    -+ * cURL multi handle. Rather than continually adding and removing the timerfd,
    -+ * we keep it in the epoll set at all times and just disarm it when it's not
    ++ * Adds or removes timeouts from the multiplexer set, as directed by the
    ++ * cURL multi handle. Rather than continually adding and removing the timer,
    ++ * we keep it in the set at all times and just disarm it when it's not
     + * needed.
     + */
     +static int
     +register_timer(CURLM *curlm, long timeout, void *ctx)
     +{
    ++#if HAVE_SYS_EPOLL_H
     +	struct async_ctx *actx = ctx;
     +	struct itimerspec spec = {0};
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		actx_error(actx, "setting timerfd to %ld: %m", timeout);
     +		return -1;
     +	}
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++	struct async_ctx *actx = ctx;
    ++	struct kevent ev;
    ++
    ++	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
    ++		   NOTE_MSECONDS, timeout, 0);
    ++	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
    ++	{
    ++		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
    ++		return -1;
    ++	}
    ++#endif
     +
     +	return 0;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		}
     +
     +		actx->mux = PGINVALID_SOCKET;
    ++#ifdef HAVE_SYS_EPOLL_H
     +		actx->timerfd = -1;
    ++#endif
     +
     +		state->async_ctx = actx;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		case OAUTH_STEP_TOKEN_REQUEST:
     +		{
     +			const struct token_error *err;
    ++#ifdef HAVE_SYS_EPOLL_H
     +			struct itimerspec spec = {0};
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++			struct kevent ev = {0};
    ++#endif
     +
     +			if (!finish_token_request(actx, &tok))
     +				goto error_return;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			 * Wait for the required interval before issuing the next request.
     +			 */
     +			Assert(actx->authz.interval > 0);
    ++#ifdef HAVE_SYS_EPOLL_H
     +			spec.it_value.tv_sec = actx->authz.interval;
     +
     +			if (timerfd_settime(actx->timerfd, 0 /* no flags */, &spec, NULL) < 0)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			}
     +
     +			*altsock = actx->timerfd;
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++			// XXX: I guess this wants to be hidden in a routine
    ++			EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, NOTE_MSECONDS,
    ++				   actx->authz.interval * 1000, 0);
    ++			if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
    ++			{
    ++				actx_error(actx, "failed to set kqueue timer: %m");
    ++				goto error_return;
    ++			}
    ++			// XXX: why did we change the altsock in the epoll version?
    ++#endif
     +			actx->step = OAUTH_STEP_WAIT_INTERVAL;
     +			break;
     +		}
3:  0b0b0f2b33 ! 3:  07be9375aa backend: add OAUTHBEARER SASL mechanism
    @@ Commit message
         correctly is delegated entirely to the oauth_validator_command).
     
         Several TODOs:
    -    - port to platforms other than "modern Linux"
    +    - port to platforms other than "modern Linux/BSD"
         - overhaul the communication with oauth_validator_command, which is
           currently a bad hack on OpenPipeStream()
         - implement more sanity checks on the OAUTHBEARER message format and
    @@ src/backend/libpq/auth-oauth.c (new)
     +static bool validate(Port *port, const char *auth, const char **logdetail);
     +static bool run_validator_command(Port *port, const char *token);
     +static bool check_exit(FILE **fh, const char *command);
    -+static bool unset_cloexec(int fd);
    ++static bool set_cloexec(int fd);
     +static bool username_ok_for_shell(const char *username);
     +
     +#define KVSEP 0x01
    @@ src/backend/libpq/auth-oauth.c (new)
     +	 * MUST read all data off of the pipe before writing anything).
     +	 * TODO: port to Windows using _pipe().
     +	 */
    -+	rc = pipe2(pipefd, O_CLOEXEC);
    ++	rc = pipe(pipefd);
     +	if (rc < 0)
     +	{
     +		ereport(COMMERROR,
    @@ src/backend/libpq/auth-oauth.c (new)
     +	rfd = pipefd[0];
     +	wfd = pipefd[1];
     +
    -+	/* Allow the read pipe be passed to the child. */
    -+	if (!unset_cloexec(rfd))
    ++	if (!set_cloexec(wfd))
     +	{
     +		/* error message was already logged */
     +		goto cleanup;
    @@ src/backend/libpq/auth-oauth.c (new)
     +	}
     +
     +	/* Execute the command. */
    -+	fh = OpenPipeStream(command.data, "re");
    -+	/* TODO: handle failures */
    ++	fh = OpenPipeStream(command.data, "r");
    ++	if (!fh)
    ++	{
    ++		ereport(COMMERROR,
    ++				(errcode_for_file_access(),
    ++				 errmsg("opening pipe to OAuth validator: %m")));
    ++		goto cleanup;
    ++	}
     +
     +	/* We don't need the read end of the pipe anymore. */
     +	close(rfd);
    @@ src/backend/libpq/auth-oauth.c (new)
     +}
     +
     +static bool
    -+unset_cloexec(int fd)
    ++set_cloexec(int fd)
     +{
     +	int			flags;
     +	int			rc;
    @@ src/backend/libpq/auth-oauth.c (new)
     +		return false;
     +	}
     +
    -+	rc = fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
    ++	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
     +	if (rc < 0)
     +	{
     +		ereport(COMMERROR,
     +				(errcode_for_file_access(),
    -+				 errmsg("could not unset FD_CLOEXEC for child pipe: %m")));
    ++				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
     +		return false;
     +	}
     +
    @@ src/include/libpq/auth.h
     +
      extern PGDLLIMPORT char *pg_krb_server_keyfile;
      extern PGDLLIMPORT bool pg_krb_caseins_users;
    - extern PGDLLIMPORT bool pg_gss_accept_deleg;
    -@@ src/include/libpq/auth.h: extern PGDLLIMPORT char *pg_krb_realm;
    + extern PGDLLIMPORT bool pg_gss_accept_delegation;
    +@@ src/include/libpq/auth.h: extern PGDLLIMPORT bool pg_gss_accept_delegation;
      extern void ClientAuthentication(Port *port);
      extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
      							int extralen);
4:  0e8ddadcbf ! 4:  71cedc6ff5 Add pytest suite for OAuth
    @@ src/test/python/requirements.txt (new)
     +construct~=2.10.61
     +isort~=5.6
     +# TODO: update to psycopg[c] 3.1
    -+psycopg2~=2.8.6
    ++psycopg2~=2.9.6
     +pytest~=7.3
     +pytest-asyncio~=0.21.0
     
5:  65c319a6a3 ! 5:  9b02e14829 squash! Add pytest suite for OAuth
    @@ Commit message
         TODOs:
         - The --tap-stream option to pytest-tap is slightly broken during test
           failures (it suppresses error information), which impedes debugging.
    -    - Unsurprisingly, Windows and Mac builds fail on the Linux-specific
    -      backend changes.
    +    - Unsurprisingly, Windows builds fail on the Linux-/BSD-specific backend
    +      changes. 32-bit builds on Ubuntu fail during testing as well.
         - pyca/cryptography is pinned at an old version. Since we use it for
           testing and not security, this isn't a critical problem yet, but it's
           not ideal. Newer versions require a Rust compiler to build, and while
    @@ meson.build: foreach test_dir : tests
     +      test_command = [
     +        pytest,
     +        # Avoid running these tests against an existing database.
    -+        '--temp-instance', test_output / 'tmp_check',
    ++        '--temp-instance', test_output / 'data',
     +
     +        # FIXME pytest-tap's stream feature accidentally suppresses errors that
     +        # are critical for debugging:
    @@ src/test/python/meson.build (new)
     @@
     +# Copyright (c) 2023, PostgreSQL Global Development Group
     +
    ++pytest_env = {
    ++  'with_oauth': oauth_library,
    ++
    ++  # Point to the default database; the tests will create their own databases as
    ++  # needed.
    ++  'PGDATABASE': 'postgres',
    ++
    ++  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
    ++  # pyca/cryptography.
    ++  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
    ++}
    ++
    ++# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
    ++# might have multiple implementations installed (macOS+brew), try to use the
    ++# same one that libpq is using.
    ++if ssl.found()
    ++  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
    ++  if pytest_incdir != ''
    ++    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
    ++  endif
    ++
    ++  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
    ++  if pytest_libdir != ''
    ++    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
    ++  endif
    ++endif
    ++
     +tests += {
     +  'name': 'python',
     +  'sd': meson.current_source_dir(),
    @@ src/test/python/meson.build (new)
     +      './test_internals.py',
     +      './test_pq3.py',
     +    ],
    -+    'env': {
    -+      'with_oauth': oauth_library,
    -+
    -+      # Point to the default database; the tests will create their own databases
    -+      # as needed.
    -+      'PGDATABASE': 'postgres',
    -+
    -+      # Avoid the need for a Rust compiler on platforms without prebuilt wheels
    -+      # for pyca/cryptography.
    -+      'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
    -+    },
    ++    'env': pytest_env,
     +  },
     +}
     
    @@ src/test/python/server/conftest.py: import pq3
     +        cleanup_prior_instance(datadir)
     +        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
     +
    ++        # The CI looks for *.log files to upload, so the file name here isn't
    ++        # completely arbitrary.
    ++        log = os.path.join(datadir, "postmaster.log")
     +        port = unused_tcp_port_factory()
    -+        log = os.path.join(datadir, "logfile")
     +
     +        subprocess.run(
     +            [
    @@ src/test/python/server/conftest.py: import pq3
     +                "-l",
     +                log,
     +                "-o",
    -+                f"-c port={port} -c listen_addresses=localhost",
    ++                " ".join(
    ++                    [
    ++                        f"-c port={port}",
    ++                        "-c listen_addresses=localhost",
    ++                        "-c log_connections=on",
    ++                    ]
    ++                ),
     +                "start",
     +            ],
     +            check=True,
-:  ---------- > 6:  7d179f7e53 XXX work around psycopg2 build failures
v9-0001-common-jsonapi-support-FRONTEND-clients.patch.gz (application/gzip)
v9-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/gzip)
�f�N��,�`��"p�M���@>@��!W'VH�[Jc;��>��7���"�'���?���2���}lcI��Gq~�g��
{<����c�^�J����p�g���l�?�����Z�<}�c�&s[d��oY����z���p#�t7����U7�7�]��V�#4��6���C�^E��>:�������j<�E�M�_��n��n�����z�l����Y��?���KP��m�[�����/������D0�|n�t���K������&;����1�6D�"���~���'��
*�j����~}{s{������o������ ���7 ����[.�r�	���QL�=������(M�Z' x�^����%�}��HC_uE������R�`m,&�N?j��y��,����E:K����2�h�u����P[�������E��u���u�
�ap>�x��/$2���'�$�=qkQ?��	��(]�R-"�3���X�t��7p
��/�V.�Rt��W���*��'�i�����U�Z�	��7a>kxe����=�],z�sqty���q�U�����s5�g��{���;SKn���e���&� �6��'���H��Yj�}���#E�le2�A���\��FVl��A�����9�L�d0����t������W6������Ed�N*oJ����f��&�����T?�$���%��7�f���*!���@^�������
t��V8��9���X	�JU��t�f�1�+:J��T��l���hlwd��bBy��K�T�������cM��o�Db��zV3���5�(W��R��9i��/[�b�.Ic/���\����R�g�8K�	r���\\��@	�1�	��Ho��R����`]S�o������Z�D���1��q%vbD�pB,_5�.���NT����
�^�PXh&MO,�B)�\Y�g���Y��'�F0�.�=� G.l2��6!��"_�x���e����e�%���a����!�����Ew��8�g@����(���c�E��<W�qI�2$��2�tQ8��
��]��v�~P7�'8u�g'����\�b�wX2����I��L���K��x}���M��thQ���^��1����K�����,
����$�����v����[��x��J�;6��M*nEop&Q�03�3��E�2y(������.�xK�,�o�0�'����*�C������K|��<�l^qZ[�w:� \��Ol��������Vq�Z��?�_����������g4�3�c�ah5�#8������8�c�����d)��"6f�e��/vs�,Ac��Z2�g^���b��6�]`1c����l��*��o�F�^��&�C��!����qc���I����a����0���hR�J08��6�v�M������B��}�i!�[�����I�;�����H0h��x:����jZHn�������A=\A�%1����	�\�������Q������i5L��@���Td���e����bJt7r���2#)6�B^��f�����F�a$?�-��"q�@8r&^D�./�����r�"�=x8�,���gS|������<�����K�8���Q���Y�Z��dD}�$�S��,Z��#��QdY]� w~���O�^,�Z�V����FS��`Z���D��J��y"�#�Os����v�
��Zq1_9�J�tI[�
t;��%���v�W���(w�.�f�.��z�	�����s�%�tE��{��-����R��`,d1Fo88��l��|���_��d�MI.�S��,Z���Re�0XS>e��9���I���^�Kj
�������jwP����S-R3�2�Q�B������eP���!����}Q>&�����.�V:H$:�F�{��bM#K�]G���v�%|j��P�l��T�g�#�+p=r��?��aC!��&�t���ps��N;_f+��1�����b�A���8�L���YX�R�})�j�w�s����o��!M�^�}����_f(S����l�����R���yw���H{3?\hf5���t'��!KN���}><S��_3aD���j��
�a^�d<�%6Idy����Map2n��=�d���������vx���g�K��{[�~�@	?B�d7�c2}�-��
��9�c$�@�&�
$����=���a�-�����q\|�cA���X�D`�7�1��*+C�`�g�Q�np:�)�=�0�5N�
���<��
��������M����'����i[
�W��r[�O���4�Q$�Tn:�gS�`���h��)x�1�Cj\;�T���(t�� ���i�('p��Cp����5������D���ra;�5������u�^�o�_G�-����
��������3W�$F
������a�fS^�4Xr��P����5~�Lq�C���K��5���A�7
3P	�}�����Uk#��W���q�A�
�nv�E�jQD���e`I�0/M��� ��~��3�q�H�q�����[kn0�8����h
��2�<4��W�U&2J���:���p�Q���]ARA�I����qd��1��s�����5��|XG��������1&�;<�������7Ao61�DlBs�>r?A�1E�"*��x��6�`N����S+YsU�@/qa
`D�d�,���v��m ���_�S�J��Y4���,Hq�V�:]��8���!����	�FP���H���,�K������v�A�5��_�=��&���r�x�w���7lAM[-S��G���{T���z�|a�� 4�aTV��JO�l�qJ)f�$HT���N��1��q�;-<ER(�bezZ���W
g!�����I���}�kw4G@���	V�3M��";(N	�X�����?��8$(il��X�X���2����!"e?��]���9>�����+�_���C=5��T���
S�z�,��VJ����@�ts���
?����Z�p[��^�J�mG�[������st����T�%�����7h����q}lV�S��B�.Y��=c-��3���P
��;z:�B�h�p�3J��=X�jg���ND~�H'�z7�tg�9'_;��9Td��xS`J��Z�J���:D�k'*�um#��=�9���}�X����n9�.7n�|sd1��)������1�,���|z��FF�����^_br>g��9
��4�k$�F�������+Cf��!���:qA9�^u�������������Pad���_�dtI4@9�Li�J��c��t����7����L������7����c��x�	���A8�����X�#���'^���$�pb�T�#�,;I�g��h��?����EO�� �1<	���
y�w��e|F�X�����r��G�%�}^�������<-b�)����aGVY.������;p����"���C�?C!�w��3"����T�c�\�.���w�i���3�1���E���hxD���J
�{�v�8��~Z��$����(��bb��������*���f�f�����e�f0�1�=����/^G���^�RQr��`��3]�b�.�r-��q`�������k�����s�#�(������5���t��"7KT�C���j�J��'!��
������+�
3,���c|���c�t$���>���
>����������/3�P�-��g�����.��������Q�3M�{�qa�����l���PC�e��Z���O��r�S&�?K.I����f#�����e���[������|u�n�3VE����x��F�8�bxu�;���_��"���{�i��������_��PCN�������Z�+[}5�SW5��!_7��'w^�1�x��y�c�-w}��j��.�G3"s�5�rN�0�`�4�LP�D�H������;j��&��}q��Q4�Y���Z*����cq-e���3	���� *����;�H�8�!Q�'�S��$o��B��������Jo��%+0��������p�'��"B��zU)((��#����{�ta �{Wj��M�.8�v2���rG����L
*> -$3`��Q�U4�GE�����7�Q�C�J�
���2hCR��Fd�#t���2��7o����8?���h����j��'�"9��r���u{w��2��R/�+�;wc�a���_z�q
uJ����yz�)jY(0z��79���zwm���F��8���+�U)	y�wD�;�k�Q	�;ts��4g39>jwK������~@?����DY���H9�,��
����v��<���������M���u������1"d��P��=���Bn�b����D��9��>�����U��yN����4�o�#{���,@�>�L�E�u�M�0���
A��S��G��r�$��n61
����8�	2zD����8A�K���m�����X=�>H��H�������7-���^Uj��x���NN8J���5�N���z��$��2�'d��,8R���2(�\��.��<��$s�a�x��?;��yN�):>�QO`-�)`����{/�a@G�9e0j���?��+�0�������I� ������/?/��wM)��!��>����[��-��SfK��}^�]7�����A� ��FJ���������y�e�e�<�����3���B���p�/������(Y��������>s|~���yX4���(���"����+�IR��S�c�������i��Y�������"�m�[������pDr�	�pl_.iu{��<���l8�7��`&�3�$J9�h��S�{jIHv�d���]w4����J(#s��������Af��S2�h\���t���.e�WL*mK���(,�
�GZ/KQ�	�8��o�NZxMrp��r�����I�%�y����S��0�7K���B�)��u$�I]��5�uiE:����O=T4�#f���	��Lz��
&��'����Kl�<a�U�P���-����aRj�w��|���Az�R73N�?����u�Q�&��s�L kXCZ�cJL��n���+K^J�_�K��:;�*^+s�O]�I����>/w�v����esG6w�����g����;r����n�N�#�z��d&1��(�1���!�6]�.��Ljh���
s9�#D�|-9�|�|o��r7�����Q�j����^�L�o0O������w��}���y������;��C�'�c,�������h��i`��
r��G�v���!�ni���J�;4
/��d7�|�a�7��H5����s~JxL�������N����vz;��/�����r���2lH�D.��U�����=�C�jPL�e��(���0��*�>y�����^���f���g.�:Kmm`�P d���'$����#���<F�����}�Y��D���	��T"��#!&#��a+�a%�8�5��0M���hM�
G9��A��J�Z<���a�{�j+�����ER� �
p~�h�������#�Qa��+v�h�S�9*���u����@Hr_uw�*�+����[���ttj�	��'(�S��:23�#/���]}@��b��8%o�ldm1�l����	��N����IP��� "4k�`nV2we�|;�$��kB<�V5�^������"�k�.�t{ �@��nv���(��"g&Y�3h��\��R�$�B�7��.����&��
�vh@�X��vs�T^�K���5k�|(W��7A3 ��������3�T��Q����i���({���P�4*5^A~@������6�u�{���%Y
��2���A��-�4���}y���k�*�u�W��&Ke�����i�l�����S!aKw\�&aC�R�������E*M�%m����5������������c����9�eBk�G������N���$5h:xN�1p�J��R����Lx\�x45��YF4���-k��6p,���n�2,��2�c{��#J�\O>�����_gAo��T?Q
�BF=NuU��������E���� ����������e�������OO�������]`���1����n��no���^�a�|sg3z��(�-#-��$.��^�������Mo_�-Kf_A����6s�����Bg�b:]�u�K���9��
S���$]����'2��0Aln���I�OT!J�7b{S�S(���(��H6p�����-�j��Wt����������p���6@��>�1�U�7�b����,�!�*�Y����`������Z���9/��6�F "mo��OC0]��]*�le��_�W,p����1^�e��,��*:�4l��r#t�>�l�'	]�~���3�N�����O�����-X������_�h1y��w���8O@|W�4��F�P������g����$�wd���7��$�3����P<�WK|��Z���B�
��������xEa�][�ume�8�9���A��)�}�`O�Wc]���2��k�H�?]4��g�~������b]\�������m85��dmL�UeQ|^�r�2���Wn.?���]!��z���60�y�9����-�J���fhW7_��>YT�{����A,�I�����y�1���&~�y�"f�5���)`��@�>|��O�3`E����������o<�&�I���:����G��:��_��Z���������QO������A�	��Ma�A.\��q�����(�4	���d�|�}�e���=�)��7�^���#��p��M�[7����ZZ�[������g�p/zm5�7�����������R���t;/���] ��<O���
{{����_%���	�,U|k�����d��no��vQ^�t���'@����fi�E@�0Q4��c�7���HJ���_bB)���R����Z��#M���/�/���d���������>
�%s�j{��\BB�	�g|��V�&"�,`nr�@�vPuv�Y�� `�����"^�}���� ����?��\�����o�k�?������J���3[��h��0R[)�6��\c���(
{�Y�9~<|����SF#�S�cu��{�f�#�)��i�\�O�|��k� �AP'��4�����G��������6��]�����S����'�	��h`���� �^D��TM`�o� A�����[;j������4X�og��;�rxX��|�����YK+���|��L_g����K���
_f�[��p~D7n�t&<5�	O,�Lm��u{���c�3���ZbLv8�p(�L��wm���Mb���y��rA
����Q~$N��0�-�cpia"���04J9�E.�����B���vw`%��k�{]I���F���rswC_��&TK�^��:\��L
�E������L\c�t�����F�sK��,	�%�<��$o�x,}J2��	�fz�w��(�|���6��>�x�l���$�+��)2:�{�����O���i��1���k������w0�O���)��7���a��|��|i�=%�\V���q���Er6����tfJ��35��?'��d��
�'�{��A�U��n����k
M���f�r���ro��D��t@0�/�
��C�%SP���L� ��F1F`�����������p�,���-1b6�l���.p�C�vyO�,I�3�~����4��9*�����u���t����{�pB�W�f�P�7�
���]N�JY�')��P^�7��F�iW�$k��e��j%w�Lx�����4y�Y�O����*1���AW)#8f����]T�iM������Y�"~�����������9%����2��r��;�b����n9����%����$jO��f"��{/]K^W+~�����8;�L�A�o�2��� 7��38xH�XJI[��[-q�������4V���Ij8b���4S�JG���.QBMJEY�
��26�3m7X��{V��A���2m��5��&�y�
TY�.��V���\����=����0c�@x�n�&�l������#��4b:x=����u��c�5x�v"�uB�����U[���Qc����Acx���a?����[�0&t�\8'^�t������p���:$��V~ G3
���[~���_��e�*�z�r�|�H�`�}�z����C����I�����qX%��?���gFW���X�wC}Uc>#�@��{�~}�x���
�lM
��6}�gm���F�q�F��gj�Y��������0�N��3����q��jh�oT�,��o�f�=���$GI.R��qA���	p��(&J�"$��V��6����WX���������M��<���^�)���x?)\���:(�	'�?+xL\j�<\@|:;N�8�)nK(����fP-�	$�8��[�T��>)JB�z������'��d�H�xdD=�l
���
"�!�d��I��a���o@jQ����*��{5�N��� �ZV�I�5B����+UlLI*�M��[e�6*�`��z;�XY2�'}��E��H9����k���^S������]�4)�j����m��[[��fS~�|HX��d��k���F������A�������hke��|��g���0kx���l[\0��+��w���n��`����N�U���v��2<��=�J�@�g���p�Z�La�I1971���!�LE~�u��Z4�_re�%��]*�t�a��cUF*���3r���F��t`���= n{av�I�������z�;[���f/������|����I:����DUx�Q��j��������H�s�Q`Cp�I"%2�/#I����m9���B�OH�8}�6]�ig��2C�@.s�I�Xd��[��RK�^�P�5��wD-h�}R1����Py�;���,�c���=���&p��4�z��l-E���������S��|Z�������g4
f�Ywog{+������; �-�sl[�7�}�ba�8~l�=%uKZ�R��q�P�9�q������n��Z;���
�&���S4[��?�Q$W[��)�l�U������~�i���C�6�\s�tv��O56\��6�p���|k5��9���Z���-}���IE����!���������c�i�u��OZh�����Z��m}��_��d{�����-������"������s%�RF�*�����U����O��_s���+���O��������?O&��M���S�
�n�j��Jv\4r�{u�qO����������x�rT*����D#"s<3����M��7%v�Cn���#wA'>kr��f��yrPt�M��EM.��]�?��y�Cx�������p)�-={�����B2�y���9����h�!D��mBW�`���������H>��Y�4��C$N#{�����NX�������<LF
��;Z"[F�8��VF�T�!��
|�G�l�G����~�P��cT���"N#5��G�3;t	�E��{����K[��PO��(�l��)����J�
iq������;*��y�M�Qt�*�Q9F9���$vF�����S��<;zu�DJ�U������[(@�<�����/_$qd2U��O-S������C��bR�&B�EA��!�\G�$e��^���� i��$y_���&k�H�y��,|�:%S�h�&�{�7j�cI-i�ux��
��*79TP��J�>�fv���A���yZJ��m(�������]M�	%+!f����0��b�dYEp�C���
���g�p������=�)�?��-e����$�H��]b#����v7����A�K��u����6N�+d=�`�"6�D�fH%W����8������/�h�����Q"8wk=���
�d�2��Nw����C����J��9_��1Gm��("��(/@Y��7������QT:���7��QZ<}�0���nX	���f_�pB����:j�L���Y���z��8����N['�����+�� �x@WLJ����w���,S���������Q�M^�a]|����PH������ibJ'����h�WX���*��D��������w�WVb��E/������"K�oWl�k;O������Y6�\�9}=sj���}��v�v�j[G�!��J�Nm�Ir�����RX)E�,'3�N��3�&'W���y_\�#��U������-��A���=���|�x�m���lgo�m�e>.v�������v����%���(On��C���8dW�D�]�<q@�'Uj)���m���PT���n/����6����������r�ih��F"K���B��_\c
:z@�m���zP�j�'��0���l���;�OK��s������9�e1����YB�p�1����taU:p��q�~l9��&��P}o/�Wa�-����p�����*��Z��+��O��m���.Y��%(��"��.!������������85������M�&�L��;g��������2��W�O��-_<{�������5Qw�����^wo�m��4g_:O���,����Eh���kH���f���c������n��R�J*6����3������E+��k�����tj�-��E�i,����O������&������Q����d�����4��#���&����I�$���&��@i����t	^�'\��Uc�d�]�\	u>��#B��g�{�D?�"�>��m��/M���B����
��F�->���m�Q��.���]�@��N������r=���u&\��3t�.~j�u��t���A�9���Sx'� $A��4����(A%���!�s���[[�x�O�n)�6��"=�U&��*�\��bb����I�^�2�{[W�����l*�zZ=/�.O����������w��N�^���qQcF�=�����{�%�C�=�R����a�����������N:��x��!R������2{���e����t9�������'*�
���_��TR[�=�t����.�*��!�[��h�
}7������;%(o#�����g�N7����B3�3�����)A�����,����������0	����1:���a�����o���c��qV���6�9��zQJ�#D%t���|�pj'�9������zz7��pk/���ngks���N��w������<��,�3�d�q��~�,��P�I�s����S 3�N���6CZy�q�?�����5��\L�{�����/�\�_�V9�f��������^�`����Aw����4RyB�7���I
����K0��#��%�[ ��%��T@ ���2�����g�;�;����]�7��Q��%����M�!�~{�DL;�h��?��O�Q���������)�_�a�9\����������+F��
?�kr�N�^�_v�ru�����Z]������
v9-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gzapplication/gzip; name=v9-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gzDownload
v9-0004-Add-pytest-suite-for-OAuth.patch.gzapplication/gzip; name=v9-0004-Add-pytest-suite-for-OAuth.patch.gzDownload
v9-0005-squash-Add-pytest-suite-for-OAuth.patch.gzapplication/gzip; name=v9-0005-squash-Add-pytest-suite-for-OAuth.patch.gzDownload
v9-0006-XXX-work-around-psycopg2-build-failures.patch.gzapplication/gzip; name=v9-0006-XXX-work-around-psycopg2-build-failures.patch.gzDownload
#74Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#73)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jul 18, 2023 at 11:55 AM Jacob Champion <jchampion@timescale.com> wrote:

We're not setting EV_RECEIPT for these -- is that because none of the
filters we're using are EV_CLEAR, and so it doesn't matter if we
accidentally pull pending events off the queue during the kevent() call?

+1 for EV_RECEIPT ("just tell me about errors, don't drain any
events"). I had a vague memory that it caused portability problems.
Just checked... it was OpenBSD I was thinking of, but they finally
added that flag in 6.2 (2017). Our older-than-that BF OpenBSD animal
recently retired so that should be fine. (Yes, without EV_CLEAR it's
"level triggered" not "edge triggered" in epoll terminology, so the
way I had it was not broken, but the way you're suggesting would be
nicer.) Note that you'll have to skip data == 0 (no error) too.

+ #ifdef HAVE_SYS_EVENT_H
+ /* macOS doesn't define the time unit macros, but uses milliseconds
by default. */
+ #ifndef NOTE_MSECONDS
+ #define NOTE_MSECONDS 0
+ #endif
+ #endif

While comparing the cousin OSs' man pages just now, I noticed that
it's not only macOS that lacks NOTE_MSECONDS, it's also OpenBSD and
NetBSD < 10. Maybe just delete that cruft ^^^ and use literal 0 in
fflags directly. FreeBSD, and recently also NetBSD, decided to get
fancy with high resolution timers, but 0 gets the traditional unit of
milliseconds on all platforms (I just wrote it like that because I
started from FreeBSD and didn't know the history/portability story).

#75Jacob Champion
jchampion@timescale.com
In reply to: Thomas Munro (#74)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jul 18, 2023 at 4:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:

On Tue, Jul 18, 2023 at 11:55 AM Jacob Champion <jchampion@timescale.com> wrote:
+1 for EV_RECEIPT ("just tell me about errors, don't drain any
events").

Sounds good.

While comparing the cousin OSs' man pages just now, I noticed that
it's not only macOS that lacks NOTE_MSECONDS, it's also OpenBSD and
NetBSD < 10. Maybe just delete that cruft ^^^ and use literal 0 in
fflags directly.

So I don't lose track of it, here's a v10 with those two changes.

Thanks!
--Jacob

Attachments:

since-v9.diff.txttext/plain; charset=US-ASCII; name=since-v9.diff.txtDownload
1:  9c6a340119 = 1:  0278c7ba90 common/jsonapi: support FRONTEND clients
2:  8072d0416e ! 2:  bb3ce4b6a9 libpq: add OAUTHBEARER SASL mechanism
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +#include "libpq-int.h"
     +#include "mb/pg_wchar.h"
     +
    -+#ifdef HAVE_SYS_EVENT_H
    -+/* macOS doesn't define the time unit macros, but uses milliseconds by default. */
    -+#ifndef NOTE_MSECONDS
    -+#define NOTE_MSECONDS 0
    -+#endif
    -+#endif
    -+
     +/*
     + * Parsed JSON Representations
     + *
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	switch (what)
     +	{
     +		case CURL_POLL_IN:
    -+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD, 0, 0, 0);
    ++			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
     +			nev++;
     +			break;
     +
     +		case CURL_POLL_OUT:
    -+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD, 0, 0, 0);
    ++			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
     +			nev++;
     +			break;
     +
     +		case CURL_POLL_INOUT:
    -+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD, 0, 0, 0);
    ++			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
     +			nev++;
    -+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD, 0, 0, 0);
    ++			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
     +			nev++;
     +			break;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			 * both, so we try to remove both.  This means we need to tolerate
     +			 * ENOENT below.
     +			 */
    -+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE, 0, 0, 0);
    ++			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
     +			nev++;
    -+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE, 0, 0, 0);
    ++			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
     +			nev++;
     +			break;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	for (int i = 0; i < res; ++i)
     +	{
    -+		if (ev_out[i].flags & EV_ERROR && ev_out[i].data != ENOENT)
    ++		/*
    ++		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
    ++		 * whether successful or not. Failed entries contain a non-zero errno in
    ++		 * the `data` field.
    ++		 */
    ++		Assert(ev_out[i].flags & EV_ERROR);
    ++
    ++		errno = ev_out[i].data;
    ++		if (errno && errno != ENOENT)
     +		{
    -+			errno = ev_out[i].data;
     +			switch (what)
     +			{
     +			case CURL_POLL_REMOVE:
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	struct kevent ev;
     +
     +	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
    -+		   NOTE_MSECONDS, timeout, 0);
    ++		   0, timeout, 0);
     +	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
     +	{
     +		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +#endif
     +#ifdef HAVE_SYS_EVENT_H
     +			// XXX: I guess this wants to be hidden in a routine
    -+			EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, NOTE_MSECONDS,
    ++			EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
     +				   actx->authz.interval * 1000, 0);
     +			if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
     +			{
3:  07be9375aa = 3:  20b7522228 backend: add OAUTHBEARER SASL mechanism
4:  71cedc6ff5 = 4:  f3cec068f9 Add pytest suite for OAuth
5:  9b02e14829 = 5:  da1933ac1d squash! Add pytest suite for OAuth
6:  7d179f7e53 = 6:  8f36b5c124 XXX work around psycopg2 build failures
v10-0001-common-jsonapi-support-FRONTEND-clients.patch.gzapplication/gzip; name=v10-0001-common-jsonapi-support-FRONTEND-clients.patch.gzDownload
v10-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gzapplication/gzip; name=v10-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gzDownload
v10-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gzapplication/gzip; name=v10-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gzDownload

�����/b��o:z�oq�-*8sX��5uAD�������W~��!�bWiw����-(�b,��g�!��3���_3D�"�x��W`�M����O���������7M��wSH������0�B'QtF�o��y{fz#�Fe���1���C��F�w(='�4d$����;r
n�-Q���Z�X1����DyMq��S����G��wKA$�e���Bwn���l��:���&��x���-���~`��N�+<���`��Md;�{��`t���(�������l8����8};��t�L�/_��yM�:��hJ�Ax�{�9fe�'���������������4Fu�E"�6�_:>����y��u�yz����$p�V�+�2�
������T|Y��%��U\G�6�>�k�������v_�P�h�
`B�&$�Ak��zj4��c_�a�O�	
I��ke��%�)I#D$G�vv~�
2�����NO��?���T�bZ�S���0����];��MM_��)�V�[&�#������Z(|7��0��,���Tmu\��)NTX`C���������b��4�n^��m��os#������/O
�����T�x�8����E�����"��[Y�0��"�t�c�U
�h�����.�g6�M�-�����[#9M��������q�����6��������n��dU���.���\�5�������X��{���r����9����	�u����fQPH�LJq2���T�I�n+����A���]�5�y;�3�1��g��
�#�|M���}�����w[��B9��wV�w�)�������;��<� I����<J��ff�=���#G���7�%�S>D-R�N%�^4B%�pr_����!�%E~>SN�F
t��p�P�'���5�;�o��0PF`n�i`2e�����������a������0�����A�����J��*
�)
x�o|���g�1���e���n���D�{@T�@��'4�#����gD6^��?%R;������6Q��&��v����c��w7whQCB&^�P��g�7���Xj���e>:`�X�H�B�.����y��<H�����X���0�u<6�^��\+�rK�S�xJ�$��R�T�����,���TCQ����
�1�	N�L�M�N��|2��{�Y��n������{+U'�034�;R�B��m�>��sb5��5��
�wq��S"�?��Bo�����re��	�O�E�#E��d���	u��Y��dF	���'tB8�a�9�����4�� ���G�w��%�(A�����c�$'(|C���It����g�x���=�����u�i�]5/<:e��� ���'��N��G�	��t�Q%{>O����O�`|#�_O�Y+�(�4��S-���s���)K\|������5�(/��_�j�]rI����������F��<$��n&��^��,e�|�]G�DM�>��Ar$��{q��D��L$�=V�
F������e�A|
�.b]U0���i��x�SGE�����5��r n�f�����:m�����qj������m�S�j��{V�	���<.����{�t�=/a����x|�M9�!J��/<�u\�7"e�rs8�N
#������8bbF���4���������g�Y���ztsCsM���7���t����5<�%��BcuGR�\���zR��:�'�|����"9}VaaU����+��~U'�Q���e���H��1;��c`��[I�����;	���W
���[��h7�h)qD�E�&}���)�a�F�]�<}h��a��&�d7��xD8}Q�����B%i	O�
�x�~����a��hf&\W:��3dz8�{�bz:eh*%*�6��4;�Y�Q|s;R�Yy�p�e�	��9=ut�9:=U�����w~j��l���n>%�e�b��o
������$�q��I��1���bUR��NFi�9��%GtHU� &��;��15���J+�P���LY&����'t'�
�\�(;q����JgIp^� �8Mg����E�!,�%<�:���u"������s0���� 8�d�M���o�U_�\����v�T��0��J�Ae#�k����AZ����Y��q���2�����_Y��Q�"��F)���
L%c���:����2J�>|�#]��Z��C�u�:i���l^����.[����e����P��'�X2.��X.�Z���L����!i���B*��+^��L25�(��d�i��
�����{U�<OX(��
��U~*�xO�U�`�vjB�R
���������O�O��+H
�����������b�Z~��ur:E���Bb=| ]s7sY,��/�L&2N���rMnA|"Mp6����'�*�CZ�k���\��o������e*�K�X��/��/�$�<�����f�c������z����h���E����-Wr���s+W4����k���7�ld*�e��q�2;�j��*�6�c�'Wk�qh>�:���q�D�����li48�I��C�����A���+~���1(��7������a��v���J�5�Yk�CzG%�K��)f�6���*����0G�r�-�B9��0r��3F�)�:���a8m�@3�&�>�vCh�Wg���B)��6-
~��l��|��
��S���HY�3������J++�����6���-r�5dA��x����Fk(�lK�����e����\���v�j�����=3��j5Kv$�Hd��%9�*����]2l��h��ECi�"����%D�_�#g�5��.��t�p���x������H��f:)�y
j�A��i�T�w�Nc�'9�����`.�1�]������������s��	N��i���	e,uo��\C��H�Ga�e����`1�uQs[���������	(_�m!���Gic��}c��A+��E
@���\��6��jKL���#fMs���:���)���8����C��4�.d��3�L��-(C1q]�w���?*4��wv����b���Q&���^U�^�JF���!�`�ac��;�zT��|�%�-���44<����<�.gJq�V����Zo2���`��M���N�!��;Psa�bU�D���x`#f�����sC���@������,�uLgO��6���So�l�����7Xn���o�w��mk�Z���<aN|�9,�xV��~h��Cp{�����=E��y�z����mY���	D���VA�{���+����
="���sLo5d��m�AE�L����]e�}�2��2��,�������2x����p����I����n���,����/��vy���hM����7���iK��
4�F:�n�!���n1n�#���	��fx�mt���<��Ha}t$1�N�����x��>���Fc7�z����{���z=��E6F3�X_���?��fm3X���l����Q�Z�������-�0<z���>S��d[� �?���E\$�������[A�[
�^���oon��������
y�0c5$c�T������vk�etCN7oy7���g���p����V������1Y�d�Yi���Y3�S$j���l��G��5��������Hg���UB�
��6��Ri�sz~t�9�h��N:��g�[o�����.|!���<$a��[��	�O��D������@�1���BG[?��k����lId�J�����x��P�2<=�6#E��Ho�:�H ,�	�Y�+k�e�������b���������������6����<��+|���Zo�'�/�O6��Y<<��fE
}��R����L�(�e�+�	r����7�b�(��-����d�� ��&D�|������}���������t�l_8-"kvbPsS*�46�U6Q�U�����9'i����.9��I0�xvW	U��og����W���������a_��XpT������c\0��E(��P��@p��\d��Ec�#�����S�\�(�
��
�Ekj�|C�#�8���ID��'����I�}|�"x��vI�zE���=��*<S��Y�L`��l�b�`J���N(��Fwc����=�0��2��x��#��h4�&"���9���+�
#��V��yt���u�2V��;�V�
���@3	hzba��JY��"��<������z��=4���ui�	�8r���@8�	A�0d�7��d8f�'���.�"��7N��
�������4(���Y�2�=�D�$T�/B������K��!��h7���;���ll8��4������<���<;�8o�]`A�P�������V��L�D`2�3�_2=���-o�>�C�B���&��qe��
X:���gdQ���%��.���E���r,e`K~$�����n�/p+z�3�9����I�.2��=�O��$�wA�[B0d}��y?��,�Vi��]��MH\�K���e��S�����I!��|b�L/��-t	�����^�Q���l6��/w�8�q�i+@�q�����d���S�e?�&Kq�L��1�/��|��{f	�h����<�z�=K��9���kE��e�W��6�2�g�7z>1��g�<}N��Z�#�$���7D��V���q���;HnR�cu��`���[O!�b���L���$%x�A������a?ngUSA@rS.~��%
���Zy�y���L��"���gn��"���5��0H+�`��l2o�
�Ff/��.o8�������I���B�4K�^�w��~A����hKs#G��������K��x!"��\�{�$�����nv��5�<�����y�a
��FT�rp���);�_�'����&K�����(�|�XV�2h��2����K�V��$Et|���Q�T���&��-�&�v��q����5��Ej��o�^�V\�WN��6]��h��pa�CA}����B{>'����������H�D���'�f�\u�$D�����f�/�Tk<�E��N�"[�8����)]S����0<�g��T�1��O��{���9Cg��t��WG��������n���T<�&�T�L�ta���F�)kt�a�S�m;@�!:x�G��I6#�4�K��������2��f�X���g���q��f	�Z�&�%[`��9�����
�E��v�O�}�PH���	9]�-4���v������
;a�?6Rv���y� v6k!q��Tb_�D���]�y�@����z@H���j 1�C���o@lq4[x�����l �E^�]�������Y�y�$E��@xh���}g���9��LQf��������l�K.Y���F�
CY&B�vqS,����E����DO,6�|�D��{��^?{���~������'P�����M��@_aKF�C%t����5P>��hAH�w3ar�vv��`}�p��X��8����
`w���6��}�R��fJc#u��a��'1�`�B���>$�3��{�����	�@��}��b����#��V����5�b�H7����G���=��� �z
�yL�����t;0��k,B���y�.���7��4vf�f��w"&Q��e�\��m���s`�x]�����C�xK����E���k6�p=��U?��B�h;�e�$Fx��4*����8T��p�xG��/S���>����d�u�v�M�TB`�nE���r����U�ew��r�kC���s��Z-�iX9�K�<-H��_r%��LI���h��������$Na5�3B����!�.�Uz����?��{�1}T��iW�TfR��y�xa�h���9��b�+5��d;���@2`$@�I������y'�M��M ��\����E��hLQ����G,32�M(8S,7A����J�\�&�K\(�� �1�fr�Z��e.�n��T��b�A���0�@���NWu+�6j����wd�����;�ym K�4vut���!�`����r���<�n��#�G��%�C�
[D�V���E�x���8)�^"_X:,�)CU���(��R�0�a��E@
�#	d��S;xL�s�N�_�T��X�������Y���h�(R|�@����5y����@������R�E`dd�O�&	>��-�`+)�}������cxv�H�O0_�s��� �ep��J��<-�POM%>�h9o�T�(K������z��7�!�\�����y�|� �}����/E���V-�t~<�]�iw:�C~��6�9�
�v���@\�U���O���	6�g��;}���J!qG�C'ZH�M��byF�<�+V���������dZ����,6���k�48�����@��o
Li�RkY�r�Z��y��D���m$��2g�S�������-����-�o�,F~��!%�"?�|�9:f�E���O^��������KL����"G�5�fp�D��S4|�9a�`�;S1$R2^'.(G��Ni7~y8����*�,��a���.i���)-DiQ��������F���)?����Fu5{,�O>a|�>'��������Kv�dNl�iAs$��e�#)�l����{_��i7��"�' �>U� �.������hKp����B�����a���}u�:{����EA�#�C��9��*��[��rp��`���YUD�(�g(d�n`LyF!QS���raL�����1w��8�>sf7&�~4@Ht�
�H�^YI�������O+���=VT��[L�v�6=Z�^S���� ��S���O�.9���T�����������k^*J�����c��S���Y�� <,��yw��y��U����b`�|dP$q=���=60�x�Ud�f��s�?Q}�B�9��#d����7�c���a�E����:q�����}�����c����v�!C�<��`
�����X?v�%�����"��0ju�){�?.��_������
5TQ\���%=0���X/�=e�������+��a6"��9kY������*�X���W0�&Y0cU4
���m���.�W���,	��M)�
J�g���8�\�9��e5��jO{j������W��1�T����u�~|r�u�����1�N�r����vn�z4#27X�*��#p	�J#�u�2j�T(���f�nb����p�:E����{���\IN<�O�O0;���9�
�R�J���T��#�Ip;%@+M��v(����I�����V]]�M��`k=O���y�M/"]�W��"rY;�}�-�J�b�w��xn���� ��l'��</wK�<������B2�:��\9c@A~T�Ik(8y��8:���0}��!�6$51�[hD�8Bg���.��x�&�^���C|

��Z���]��,�#�-'��\�w�-*M`y/��R�2��q7���
���w�`P�z�Aq!��W�B���7
~p�S�~�w�6�^m�W�C���_����yG����v��p�C7GJIs6���v��
~P�}�����}0O��=�������������k����,m:���
\��[�1\7����y�#B6��5������)��.��,
N�
�3�/������[X	���4���lL��{��1?��h.1���.��tY�\7p�$�
�*��>u�vdK.g>A�!��f�;�B�� �GT�������y���Y������T+�D�����9�z�N��U�V�p�W-�k����O�X��$H���IR�*3}B���#5��/���U���(���O2�������c����t����C���������Y���B��s�S�&��	������
��j+���$	j��
L���b��x�����!y���)�Q�%���X;0����U�u3X�?�d���*j��X�z|M0��Yf[��/�\9s(�/�����{��X��%���<��!�3��g0[���E���k����+>�z�R�$E�8�<o���nO�^�6��e[���H+�zF�5
�w�l�
G$��������V��[�C�����{��
f=�L��3��;U����d�QH����uG�1Z�m}�2"`0'NQ9|
��d���?%���u9|J7���QV��2���|��q����}�5���p�����j����$!�,��!�y���\������8=Eyl�y����,���2	XGR�4���]�]�f`��C�<���CEsJ1b�\��H��w��`�H}y����f�S�Xe�f!���]j��&��|7H~�������G*u3�t����-�^^G_El�/8�����5���:��4�V�.y�����$�Uq��������2'�����d_�~��rGn�K��]6wdsg����~�����#�n������:r��AY;Af������k�E�"�������0��?B���������6/w#_)����������D�������{!+,z7����������>t{�>���N�����vo|�� �+nA�l7�=����������C�0���Hv��1���z3��T�<�A�n1������z��(������h�����b��|(w�o.���I�B�[��`���C>��4PvP;��+
c�����'xO���>k�>�|���D���B�*[�zB|/�8��J ��c4�}0�g��mL�����j@e!Y>1b2B����V��3\�
����F���p�S�������C
v�����/�!\$uq���������|-8b���b���J�1�����\'hi�	�$�Uw����2(.�����@G�V��j{�:��a��#3�=��m��'Dz-�X�S����A�����>�p
P������%h^	"B���f%sW�����@r:�&��\`U�k��z���Y/"����I��
��f�L���,rf��>�&y�u[+UK�+�yS�"�`.P)k���0m�T�Ul7WH�5]�THz@^�&��r%�`�q4B!\�����09�JE\�l��F;�����9UH�R���i���mo�X��W��_����y+�Q��I�(J��'�����"[�LqU;o�TV�!zZ��6�HY�� �<�t��i�1D)UZ��N��Y��D^Q�V�@?_���+ �{H+��*�8<V����3,P&��|$�Q]����xNR��������y�.�������u�G3P3��eDo������l������,��2�(�?�'-;���y����c�H���u�V*O���(d���TW%{���y�Z�+qn)	��iO�9�)�^&!xa>0I+���*_}������@aq+���������V�7w6�����2��<J��n����/J�������������1A`���|�q�k37a��~��/t�.��E�Q�����Oa����0�=p�O��9)�A~"�)���vm���D��y#�7�?�B-z����d�+������?E�[��kY���Z~
�|�+l���\%{�*�`��>�b��b��L
y.�8`�9/����l�h"����4��u
�����6QFX��x��X�eX�����R���3I��;+7B�#����q����W��H;C�y��{�9}��k��Y�����y^�{�Ek��W��x'�����w�N�lD
5���zz�@�{GFnl��x��K9�<9;
���~�����e(?(�h�`���)^���W���E}P�V������`8�2�'�4}5�E�~/�I�F�����E)~��gO[~�����,������/\,��S��K��$[Uf��* �h�z������r�����n jC�����|0:I������1k��q�p��
��E���q�O����"�t���{�W���o����.b�]��}(�K
������
�T;Vd���M���z����n2�D�i����?y$/�si�u����^��H����qz�4L�[-�4��;���e>1�����rK��;a���O���'[vJ��2�}�E�8��7/��u���u��Z����?�
���V��|�no�=��.��-���N��]�/��R��������.Y�U,��p���W�W�F=��z�J����fmk���M���hq�l\P�k�F�QDEs�=Vxc����d��)&���(%�AK�u�>�D���������Avj���M�9��@�]2���g]�%$t���yV�w��oUk�!2��&'�mUg���z�	vl8
�(2�������"+���Ca��|��������^x�t<�:�e:�0��#%��"j���5������w����������!NJ=e4�8%?VQG����a68�����%���Z��R�Pu2@jK#`^^oT����_��kS;�e��8�ZA	r�`�A��N :"�E�M���V4�J������o.�I����q�(��*��U^������>�����O��������upV	��P������eV��
	�Gt�v�M�`�S3��������;[�@����?�>c\<�%�d��p0����}���
|�$�J����(�0�,� x�G���3��;g�&r� �C���]�2�.�!-��~mwVro������t��k�>k.g0�p7��hj�A�d������u���0^��{p���5�N7���	Hm�9�d���0�^2��I������4!�?��o�Gy�]���w��j31	�����&�H����"����I:����Q����=p���:�v���
^*{s�����R�{��fz��_�!����s��Qr!��aA!��<��?k^$g3?1�oJg���9S#��s�OK&`=��|2��KD�Pe{�{]���$�/n�+��H� ��MH���G����`l<��Y2e������lcTf ������j~�n	G�r�����!fc��Y=�bG;Tn�����d�1��g�~�LJ#:���r�����\7A��O��J�|�W't}�lF
�zS�P
��������q�r��e?qcp�|�aT�v5H���9�\&�����Qr����Oy��N���e�P��H�3��@t�2�c�j_��E@�����l`j[�N��)��kA}�^y���j�+�S�1{��.s�-����3,6����sq��	_�j�xJ��dk&r������u���)������a����+�1�)�rc)8C����������8����[��L\Jc��-���#��;�A3�t�
���'��T����+CP`#:�v����g�����; �	[��n2�g��@�e��o�����o�I��3��	3��w��h���X�H:���A#��������\�I0�]�n'"X'$0Z��-\�����5����4������=Q��!cBW���s�EqO�1�J�g?����Cro�r4�`����W��~\a�U�Y6���G.g�w���{����l����=��Ql�T�
��U���
��zft���5y7�wP5�3�4��G����7?�@�6���lk���|�����lTG�i$�}�&�u�i-Jn([���4x9:�LW���v�F���K�vnV�s��Mr��"%�$Ix�G��b��)B2:iu^l���l��)��k�}�hQ�$��S9����������%h�����p�������������d��������n`%@����@���I��IeO>���$�����	}�hI�Q�$�GFT�s�����1� rJ�j���f�����������W����}r�e5��X#��AyA�\q�R������$���U�o�J�.�����%�p��_$���3�1�!�����5�^���eH�r�F
���&�����o6�����\I�9�&��Ll�8Z	�N4lJo�i���P�|��xFH�	��G</����s_��B�}�	/��N���a��t�[��qkk�/�������|f	�G��������sc�*�2Q�T�w_���E���%W�Zb���RpO'��?Ve�B+h?#7�Y
k��N��L���f�������^o�����h�o���Nm^?���LN���m� �zAT�G�h_���(�����?'64��$R"S�2��I�_��H��O d����Gn`���v6�*3D
�2�1�4�E6p���%�-�4�U�|Q=|G����'�s)���G��x)�r1�Y��s�~j�8O������RD�^{���>e������ynn�yF�`v�u�v������z���}���R;��5���(f��a��6�SR��U,��'
��1W����P�6X��S����p�n���I:�@�5���EBq%��^�"��Z^|�|+������6y�94n��57�AgGZ�T#�aC��>l�
�����V+��/��5z�����{��TD���|}����[�=6��['���V+�*o�����g�u�H�\;j����~qko�/2�,}�l<Wb�!�`������k_�����$��5��K��^��d,�@�������d���}�?���!�6�v��d�E#���Qg�$��^{����/@���KHN4"2�3��{N�t{Sb'<��\_9rt��&��9k6O�'E����X�����E������?t��7M��>�|�b���7��1�^($��g��p�s�]�>��B�90��'tEh���X@`{�y���30^h�uH�?��@�4����� �K��e{*�;���d�@/��%�eT�`e�L�������x���y��`�qn���8F��M)�4R�z��;�C��]T\��~
��T�%(�dH����a����q	M�d�`���<_h)��bM�����Ew�r�c����MbgT���<�y`���W�M�t]e���y�������-����EG&S�����2uL�;^>4�(&�i"�\$���u4IR{�%���
��9H������o���dn�� ��'��S2��m���}�<V�T�2�VX�j�PK�r�C�_�4�of!��������40�����l8�����a�P�bV[z)SZ*&�O�U�?��A�A\x�a0'�9�_��3�����R�@�>�MR
���%6���nmw��Phn`��{_��X�k�T�B��6+`�MATi�Tr�K��sN��/��2�f�/h�%�s7� ����8z�@�-�X�t�X�;dk�-�d_����msD�V���!��Aq1�����e{����i�E��+�}������':�����`���i��	'�;������K�]����~���N���:����u�i����r

R�t��d���~��=�2e�a�����Y��������7IP������\��&�t�HH	�Vz��N��K4����9i�>zwze%V�]t�", =����'��4�z�f���t���/�e�����3G��x��'9jwog����q�2�������$w�8���*��R���r2��$�>S�n�pru/���:��X�n�m��
��AA���o�s�?�W����6K��v�6�f{���b��L��n��k��o)Z�O���Va�>�>��CvEM���yR����a���l�E���������n���:�������.g����k�!�tn�� T/��5����6����~2����O���M�3
���_='
������#\c�Z{�%����-,AfQ��g�
g����	k������x�����o_i������Ep�";�h�f����%�^��>/
�����������J�CQ���H�di���	���Aqv��A]�Z���.�~��|e��>!����g������^�u�z/���u����NKs���1JJ��_��W����@��m�X�=&J���Y*���b#M�^?�?ln]��/�������I����"\4����z�
������ymParz�� /���	LV`y���}AC�:B-�qj�
)
��Jm��k���/�O����|�UY5�H�[�e��Q��="�}��O��.�����������M-�X~��p[l����=�F�E��A�]
����4(�����]I�-�+O�Pg�|=C��2��ZG�M�	4���Q;�wb
B�HLcx��TZ(�:7�����%��������jCz�!�#Ye
������a*&����K��}��@.sQ���u�P��������R�����
�<��/[:~�I���%��5fd�C�@�����X�=t��+��pY9f�i�<Z[ N��Y��������"�kz|*��+0^6���\J�3�-�k���{���0]z�� �I%���3M�|QHa��"��2:����P���p�w#M��-Q��S��6b`M��p��t�;�)4�9����^�4�kl�"M��
���~�
��jIc�#J���)
H�p���9��pgU�o����T8BTB�8\���v2j�����:���w3����^����w�6w�o��Kp|��9�}��<`�28so@��������?
������=���Z12���/i3��'w�zX��[���������O������m��i��;����0�����~tgp�;L#�'�|�I��pz�����[;�._���,Zb�ORxO(�:z���v��a������u3O�mX�(�O��R��wL������)�����`�a�� �5���]������h����5����%�o�H?��q�����g4����&��D�e�e��� W��{������H���
Attachment: v10-0004-Add-pytest-suite-for-OAuth.patch.gz (application/gzip)
�p�����KG�L�,I��;�T��|���u,T�Q�C 4r�������7��X}[+�O�.��5���t���P�Ftu&h��� EV+���\r�?z}|�^�p���J���g�4_�&#���hc�s���%�X�(�%���AN-g�N�����L�s}�� ������;���"&���5���S�<��� �Dicd��� ����6�v��qY6�������M�1��5 .��x����W��Ue��	�~������Mp|p�.��a����(�[�m�Ta���o��C
F�- �~t�����������IT��[�����lP��d������<
���8�����{�F`�z�x����4d��o�|e�v����	nCQ�2@���>-�OY��XAG�'�4w��,�s��R"��8}��L��s��2�y0e���\����������X�<����H��V=}�A-7�T�TPW|�_�=�'vTR(*��kX`�h�R�5I�O��L|<����*���\��e1����QO�>5��U�+�"�A�*������(�o
A���%0������7����h|��9���J�i-s�-k�Ir���:A��Q�=S�EB��TRe��]�(F��u6��/���:��.��x���M�V���0=0�K��r�eR1|����}.;G98s'���JK���QU!��I������{�"��R�+��(X����jdA�+����8�qt�8�NN��5I�hzC��v��
q��v��.��H����� k�LX�N�:I(K�Ark���Q�AF��U�z��N�|��V����f��7w�KV�$#�P�
F� L�`9���o��_%��b���u&]�2��$��b��7�uh��Q�>g"���xA���y��Ft�P;���xM�2����N1���@q2\htx%X��B����tj����BG���@C��`	�0��yq"y��+m=��e)M7������l��{���OL�b%������wu������D�U�����,B����|g�{]�r�nH1�
��`��^Jj��wK��
�.�z��"_�=]=�"4��z��y������;�~�|���Ek������%3Y�tV"d�Q����3�����~?8�H��B4��w��95nOz��>��������Zy�Bn�2�P��1��^Q�����p��e �������o{�/��R���W�>��yr�sCy�x����
|X'n��s��=c��s)D9��:��-�iyJ��S��F3������L� �X�/?�������Bp�.�x������[>���z���D�\*���3��R����4����r�?��}�2V2���e1�h)'Z���T��LH3���T�����+�a
����f���Fi_���l�(����R�1�0w��� #��G� ?]F���������`�B6g�	��'-f
��%*��y�bc{���������O��z]�����B�����>��
�[VN���S_F��z�����?xE��-�\jX����$��e���
~��3��,%������v���������V6�������X���^��G�8\�:S��Z���@dnIe�B�F�\�3W��%���� IJ������o~v�����:D,'�w�X8'k��Gm�E*��q�b�f�+y:�s�ts-���������c�~��O����j�b+����F�8���u��Q����D�1�:F��Pw��������������Rp~��H��<s��8g)���]
&�b_���
?��J�|�q+��������5*o�q�(��jvt�b#������QPYo�
y����-o��
�&��#X�I��"RL����%����h�Yu��E)�Q��O'Q��������T�2�7���St��V����A��ip~;�R�:P}�^[
���������z*q����,V�w�dYv�Q��z���Ha;O����A����-���X�n�I#��)�8��C�JIrp�9s�(�F�4$t.q���8s��*67�v���5/J;����e�K�rr��n~��}(-qP��W��xM�B[�L�"��--��)�K�2BYv-�O�o�P�H%��JO��� 	�bN���%%N.�K���L�Z�E��zc����'tj��\Tx�t�x�b�#������`(����l
H���!}�)��������/�hHq0���3�4�"��p��""W�v����f���
��>�}cx��h�����T,�A�%�IJ�0��p9t��H"�Z�jNX���i6{�I��u��n��z'�z6_F��b)��`�{����/���&������j`�u/36A� ���T��������v�����s��\�|�����n��V��oVz��;c)�G���g4W���^���h�F��wF_gyKaY��[G������`k��|���sZ�)�^��41���j��a�W��`�,|O��c�����.���E��,�������At#�Ur!�y�.lN�[��!R8�b+�f�{�5������o����VM&���+,�f��hu���g�����Fc��Ly�{}�@��x�b���I�Z���d�{V��%�H'4�K@����X3��<���J�0%*�*<V����0�"
\""�ruX
l`2���)3�O�}U�-aL0��q�S?�����CC�����!C���;r<J������A��0#Q-r�s����Llb�u��6��InH8�>6�b'T�WN|~�_`a��3������Y��C�w�;�6����|�M}�������������F�����[�<�b-�Fc8X�Z����5��7���
����:��h����������)E�J��d|:R*�!���5�����@fn{s�
1C���"S������r!%��e�
:�pu*�l����G`�CeM�uc%�Y�'l�LmX�����N[�={��]O��@xH{'8��M�+�X�h��f�<�y L]#��V���`+��PF��j�85����*���?/*�;y�<c����k��Ck1o�|(����E�Z��VX�	�>�he=*�[L
�@����!"Y�j����_����f���2���O6�>Q�C@���j����y��L�����,�S�h�����J��jn�J	Q0���x�jU�J�{YiL���/���4��N��� ���(,�W-Qz0��5qD`�QO�oY�ky�3u������\�&���6��jB!�����?�7o��p��4�N����n�-��L����&�~�^����/b�<��4���g��Lg��-0���`k;���!������w���6��\�m1���%h���,���.i�:�.e����`�����D`�%��u�\�O�\�Z����pm����i��o�	�M���Y��'�x�*-��������f��[UX�L��+��<X�Q��e��'�\�����}���!�C4�d[�W�������=1d&��<*Ck�a���2/���92[�
���3���ND���ZL��Qv���5]�=��a���U��4�%�2�nv�v�.�����;�)�S/6v���8����<�����wEeK���y����=j��,s�)nu}j�}�>����T�{Y,w�tT�z�����)�T4A�]z�e�����z"������!�x���mi����� �������D
�s�{���8���E}�����^W�d�_�i��m�����R��f�������cM+�4Rx\y�/�����x��;A%MQ~�>��
��l'��`��9r�I��Im��z�<�d���6��(�R1B��@Lm���f^K����D%N%��\���v��������W-/	8���L[��(�N}4�u�
�u���7�T��������jY��G���M(��7�f��p"a�'�=�
��2	��U��	w�!�����7k������Vj�n�T���h������c�(����.��M�cs�
ln�U<���M�������;1���6�6�J0L��(��tqG�ece����87Ne0cs�:����MY�\��`���9��Ph�U7�����n�))������kG#!��"�
�!Z���yK'��D�dY����C��9��0��i���1�K
z�����R���DL��C�f��bl���t�F�q?[c�|
X�5����������7r���<Wm����[��R��];���m�+���s:�8���ru��9f)�&�y5�?ogQ��X�A�Y���G� �y�H=\�H��S��q�O"��S�4�KR������N�1Y]'���2���w�
B��/s
�%��!��1�1!�����1�UF����c�;xd?FWk��+�por���^u�1&��_g�|K�$W���D-{����x�.:�i����r�Y�A?��	����7
W���C���p�:�f��	����d�X�{��j��"���� \��T
(W}�$�SgRD��"�PBI�m����CP��=LN�����������]?����Qw�I���P�2'l����2��OF�?XX�����+��������+~�?���A��T���2�Q���}��z�}Ds�Z�M��[Z�o�f����w��b�!�&^�F.X��������@(�Q��{�Of0�1
UH�D��~;XE�C�3�����D��?[�`��?�%/����7���45��������X�e�\T�x����SZF�k�m�3�hA�/S��|��Cj_Y��}A=
'OX�0�P�l��"��_��|z~��
����kv5�|��&�z��gE�>�?�2Ov�"�������e��y<C�����V\$��Srk��_��Q��Q�b�R=V9K.�����;�-�qW�������ZV�����k�'mjr��F�T<0�(�J��E^�w{�e�^��������P\���x7J�c�G�	���IV��i��F���VS�6	2�1Vl��R����JF��X�Ya�*Y\�d��[bj���E3�Px���q��������QY���>|L���z�P�C2���a$�3�,�D_�$��xAW]�h����^,�^m�3U@�``�s��ca������1\���$#��22\���#��2:����z_UL�R�vRIe?W/���d��$���l����{���W���4���	b@��FFM}�
<&���[�@@�2��K6�:�n�j���uB����p�	���XR��</�����������k�k��0����D�j�!�J��o�~��|~�kK�P�C�,N��z;:�IH�@$`+T�u<P��������I�2*UJ"u��_���/�l���`�h�*��jg�������y5t�����`���)p��z�v�?��
�k����Z�6�
������8���*^����������2��?������n�>yvomm-X�F���B}���Gs���w��F}#x�Y������������dtE���-�&���lR=8v��������;l�Z^6wO����~M�{7}U�v9����O��U �
��/�!RG�����)�4��{�c��q9�%�9��|u��i\7��cZ%,�B�b�r�)�-�����=��x;|����;OAW�|�nn�~�O�t����y�cl���/���G'�����1�(
��?9z�Eu������s���*�*��+{�p*1Wd��'�E��PVERc�>�3�*F
��3t��B��%�Q:.�~b*?�AD��_r,�p�M����b��Z(��0^���q����#�Sr�U��z�Y�DS��t2��1Z�~���sx����7~��?��A��:����Me�2��0�6
���?�!e�������k��^��h��2�<=��$
�r�8u^�
�������&���M`���A��W
d�N��X#���y��_f�9�l0nw���ES�� ��5r�n#x7�R�y�M�TS�Q���(U�L#I����R��x<���������������T�d����3�����/����)`�b,��/WKn�� DN	�0}�4���� ��}Y>�<V�K�fec&���]���&}f�x��.o.����Q\&�(} 
S8D��:���
C����2����{�k��ZX�&�|{G;0����(U93���M�!
HU*3���i�a��4�u����n����#�u=H��|�H@=F�&�J���w�F�:44�h���x7�}����wy��]X��K�{�"��4��I��������?�k�����ccY��?��gCD�Z�U,���s6�S����LY���$���q�B��,���(+����e��:DiKd������?��P��)v�J�^�)���)��b�V%����Jq��0/�iT�tu���k�i�-�=6J	��t�L>@"�I��1�]ld�.�.������\��;�$�8t�
��WF�A�@�x�j��j�o	��U�}�{3��h�k��UV�b�4�����qB�e+��	|"5��
L���y����1C+J~�nff��x��rR�<-7����kf����^�t���Va��������b����T;�QD��c������6K�d1���
��.����uj��2����LUY!��/V��v)kV�Z��"��Q����pt�#3��r���N7"E���d]�����N����F��<��kj�S���1���><��dd��K����[��3��1�2�V���VoL���%��8/��s��:����a�CL��+n�t��'����K�����`��s���2%8��\��\�`�������m���c�e�f}������2���e4�$A���l�5j��u����%�rV��	����TS����|I�a)�C)��R���k���?�04���3������v?a������KH�KuZ�0AWE�Z���`Jh���M��R��'�^�����G�*�5��Ruf t��]AUq�`�����R�oj�C���!���g~�Vh3�Z�E;O?�u6��g�yL��������	����Vr`�ER���d��l�oZ�C{��%�#N�#�tx/ ��z�g�����a2�
�zr~qy�/��%T�:Sm��[(h����J����)1c�3b?.hIg�Jy������*6��^����gu
\%����3�a�3�,+'���8��v����`w'�}���O������`7v��n'���Q���l|�'d�q7q�'������v�d'x�8���qL�dV�@~�<y<y<y<	��v�dh{���:���(R���o�`5�bS�z��+G�S�s����R���|Y 
�,��t���@������\�+�P���llg#�l��Y�������N��o�lX��-���5��uR�%���A
K]R�+AJ�Mn��������2�j�GX����/���QC4�s����B�L>3��k
�	������m��[��f%��Q��hX2'�u&-���������3 ��r���]�W����K�������{��O�_��	Ok�]���O������@��g�����7����w����&�WMs�R�n��47��-��gK�H���+�F��h���������`������1����z��t���s/%
�����J2~.5��ez���S���~����O�|�f�		/������i����5��m��w�g��v�����*ws��:�q����A|�!l���/d�m��b���54���K�Z���x�.j�b�NY��k&�����P�/������7"�d�:
R�r
�#�*������>;�g����Z!Q�������`��/�C�r��O
}vc`/���J�\�_=!�&8��u����y�?���3��fQ����u�L�3��`��m
U��;��uY].��<~��oq�*�cY���P}M��ib)��%@9Ib�NS��A�J�X<��[�Qv��"rX�r��	B���:mnm�<�}�t&��@O�>^n��}������i��Q���1������x�_&=5��.
�S�i���l�7"v^@���C�����S�y��)�9����J�=o5)�	�"�9&j���2e�\��L�����O�]
b;�������)r����������
b�6��w���9���o=q�����<,������������o:�*Mj�#�gp�;���B��������m)��f�p_�K}�*���CF�7����<��w(;7gB]�w��w��3�.��/����&��h����T�9W�~Q�$y
������V�����Ez�������:���j�32��}�*Qs�y;�/����rr�����[���E�s=�&����,y���R�`%h���v������^b'U���v�a\�>��oe?����|���ni3i��vr�b�e��Zn��'����9e����C0��8,�'?�0�Y,�������v��0"�l��f���
����j�X7���-�w�^==>8Y��x����%���OmxJq�'K^8��6��A��7���
�����~3�5�@x��'u{}yE�^<U�}�-�����"�<9��^�)i����.��-+dO�	��^�����)�������5}��Y�����[��KA�����LG�B�3.d�Y����f�u,���7��7��gM�I�h���,�|�������������-�z������[�w
cA�@�d��v�p~�yZ�|A���\%$�0�j�����i4�������y"�B~������h+�
NB�1���o��fjy���;afa6@���0�����Q4Ce*L
vU����.4t��*��-Y)�
�d����)���UY"���Iy(��*5gI�������~\v}7��-(~m
hh��-���*L����O�������=��4�7u�ak����o�OoxbK��Vf��^-��[`�n���*;���j9������kU��%�S���Jw���,�w�NEm�&k��x��,��+G=�{�7�=�����J�B�����*]���F^i��������r����ELj��kpS}�o}��g=Nq����\��*z|��T0e
�C��
J����rS��}�W�#J7����U!E��B����h�G��������>s�f�tpg��|m�+��S��1S���%t�W���(:S��ta]{��<��u����f�YN2_�����1Tys)[@P�.7����R�\���D�0�z0]���s�{n|�/��^��Y�V�kp�7��`�� �}��H&�/_N��t�Q�X�j��7�����.��<�J����;���%���&f%k;������H�6T�����������a��k���-�v�b�M�s�Wl.�H�:_�T\�0(��qh��&e�K��>���tG���u�k�R�g� T�"�[(j9_�E������3e[��t�s_a^c�'���n�� Y������@h���H�^�/	\����(��t���]%�D�#)u?�hu�=J�8��j>�]i|Y���u�A7-��>��mqo�G���X��/����g%pUe�#���n�����}��/���E���U�{x���CJ�$��/ck�2(U&�/-#e��asg�5k�K��1�]��*/��J���X-��Bg5���_6�-�U)=�*Z~����'��H;/��	o�~����H����fU)�h��S�t?x~��l"i�1?�i�J�`�R�W��]R��&{V!W�^vh&M��S�s0r��9~(0�j�FHE���0�cf�iOV����WG�'
������~�	���u2������5������r�a�!�C��*3&���(n��'����~*�
�#�J���?�I�U�d��@����o������*�WU�H��i�e��������x���/�1�}\D������#�!��g~S��G���>y�,��'���z�i�Q������N�Y�{�~z3�8a���#��lVv����\�1
�4m�0MV53#��Ec�;-{��s��f�\(���������f'�/3J1����������!~��q_�
��
tWy�IQ�#x39})u�*
�������������
��G���8��i���0����pBIZ���-ze��Z^��%����Q�j�CE�����4��n�#�~��t �L����#D���D�#�������M���K�on1��/��O��`���5L�@����d��)]���;,oU��wa���>�F����jq����}������R�)F����wN�+zHx9��j�B9����j��04�fR����4�5Z4w��"���+r��*s�4:O>G�����K�,�'��y��r2�<L.���]�2�������a���'$j����QRGx��1N����.L�r{��s�c� ��T��A�����U}�����D}�N�`9X���E�I���O6����Wf}���,e�L�6�'�F��P�\k�6<G�?���	�r�\>&��^�>��J���F��Y�T�Z�q���	�
��NH?m����xsw��)8�5ew��g^�ziHesZ\C_T6����T(�/�4M&�,�pY|:��N+��&��g�,�����=�P���������<��L����W�>l� v�)�[���?�`��r�#�jm7��[Sh�i��U�H@K*���������U��,}������$��(����3H�9C����1�_X�C�m�����I��"��� ����������c���o+BD���Z������B�&"6[8Z/4#�"4�/�B\y���S3��s:�WX��?���I	Z�����`|�������Z��
�`�	��U���!��	X?J[-�
���g�W�n\~�6>�ml�r
��|���tFTT��r����k]��TJ��G��"(��f
$�3�HX74(�0��m��������=z'��g]����R�����:�*�G��q�v�s4���W�0N�C6O����=�)`
�R�0\u{�L���ka1��_^���!���b�4�K�9S����gI7�����}=�s�+��V�A2�o���W��V�9K��W�g���M��
�n�c�M�?�����C�1����[���>�7z�1f�i*2�i\�^�x-�o�12��Lm������?~�7}�!@���~l{�0�����2��dD�w���W����5������U��Av�)�����"��_��l�h���7��u)
m�8�:_C�o��_lm��0Ol�|:��z�G�g^-��}�_�h���G����^�_DB>����x����z�2��{�����?�G3�P�����ft(����mtq���'3��/�^�ct��j��6~�[�%�cT�V	=��D��<�I�$�%�R�xH�`4�.���� ��8D�!gU5��K�&�M�gk[A����K�������y��bCD�(Y��s_k����{[�����{����1"�]
v10-0005-squash-Add-pytest-suite-for-OAuth.patch.gzapplication/gzip; name=v10-0005-squash-Add-pytest-suite-for-OAuth.patch.gzDownload
v10-0006-XXX-work-around-psycopg2-build-failures.patch.gzapplication/gzip; name=v10-0006-XXX-work-around-psycopg2-build-failures.patch.gzDownload
#76Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#75)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

v11 is a quick rebase over the recent Cirrus changes, and I've dropped
0006 now that psycopg2 can build against BSD/Meson setups (thanks Daniele!).

--Jacob

Attachments:

since-v10.diff.txttext/plain; charset=UTF-8; name=since-v10.diff.txtDownload
1:  0278c7ba90 = 1:  36409a76ce common/jsonapi: support FRONTEND clients
2:  bb3ce4b6a9 = 2:  1356b729db libpq: add OAUTHBEARER SASL mechanism
3:  20b7522228 ! 3:  863a49f863 backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/utils/misc/guc_tables.c
      #include "nodes/queryjumble.h"
      #include "optimizer/cost.h"
     @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[] =
    - 		check_io_direct, assign_io_direct, NULL
    + 		check_debug_io_direct, assign_debug_io_direct, NULL
      	},
      
     +	{
4:  f3cec068f9 = 4:  348554e5f4 Add pytest suite for OAuth
5:  da1933ac1d ! 5:  16d3984a45 squash! Add pytest suite for OAuth
    @@ Commit message
         - The with_oauth test skip logic should probably be integrated into the
           Makefile side as well...
     
    - ## .cirrus.yml ##
    -@@ .cirrus.yml: env:
    + ## .cirrus.tasks.yml ##
    +@@ .cirrus.tasks.yml: env:
        MTEST_ARGS: --print-errorlogs --no-rebuild -C build
        PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
        TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
    @@ .cirrus.yml: env:
      
      
      # What files to preserve in case tests fail
    -@@ .cirrus.yml: task:
    +@@ .cirrus.tasks.yml: task:
          chown root:postgres /tmp/cores
          sysctl kern.corefile='/tmp/cores/%N.%P.core'
        setup_additional_packages_script: |
    @@ .cirrus.yml: task:
      
        # NB: Intentionally build without -Dllvm. The freebsd image size is already
        # large enough to make VM startup slow, and even without llvm freebsd
    -@@ .cirrus.yml: task:
    +@@ .cirrus.tasks.yml: task:
              --buildtype=debug \
              -Dcassert=true -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
              -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    @@ .cirrus.yml: task:
              -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
              build
          EOF
    -@@ .cirrus.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    +@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        --with-libxslt
        --with-llvm
        --with-lz4
    @@ .cirrus.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        --with-pam
        --with-perl
        --with-python
    -@@ .cirrus.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    +@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
      
      LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
        -Dllvm=enabled
    @@ .cirrus.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        -Duuid=e2fs
      
      
    -@@ .cirrus.yml: task:
    +@@ .cirrus.tasks.yml: task:
          EOF
      
        setup_additional_packages_script: |
    @@ .cirrus.yml: task:
      
        matrix:
          - name: Linux - Debian Bullseye - Autoconf
    -@@ .cirrus.yml: task:
    +@@ .cirrus.tasks.yml: task:
          folder: $CCACHE_DIR
      
        setup_additional_packages_script: |
    @@ src/test/python/requirements.txt
      construct~=2.10.61
      isort~=5.6
      # TODO: update to psycopg[c] 3.1
    +-psycopg2~=2.9.6
    ++psycopg2~=2.9.7
    + pytest~=7.3
    + pytest-asyncio~=0.21.0
     
      ## src/test/python/server/conftest.py ##
     @@
6:  8f36b5c124 < -:  ---------- XXX work around psycopg2 build failures
v11-0001-common-jsonapi-support-FRONTEND-clients.patch.gzapplication/gzip; name=v11-0001-common-jsonapi-support-FRONTEND-clients.patch.gzDownload
v11-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gzapplication/gzip; name=v11-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gzDownload
v11-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gzapplication/gzip; name=v11-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gzDownload
v11-0004-Add-pytest-suite-for-OAuth.patch.gzapplication/gzip; name=v11-0004-Add-pytest-suite-for-OAuth.patch.gzDownload
v11-0005-squash-Add-pytest-suite-for-OAuth.patch.gzapplication/gzip; name=v11-0005-squash-Add-pytest-suite-for-OAuth.patch.gzDownload
#77Jacob Champion
jchampion@timescale.com
In reply to: Jacob Champion (#76)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

v12 implements a first draft of a client hook, so applications can
replace either the device prompt or the entire OAuth flow. (Andrey and
Mahendrakar: hopefully this is close to what you need.) It also cleans
up some of the JSON tech debt.

Since (IMO) we don't want to introduce new hooks every time we make
improvements to the internal flows, the new hook is designed to
retrieve multiple pieces of data from the application. Clients either
declare their ability to get that data, or delegate the job to the
next link in the chain, which by default is a no-op. That lets us add
new data types to the end, and older clients will ignore them until
they're taught otherwise. (I'm trying hard not to over-engineer this,
but it seems like the concept of "give me some piece of data to
continue authenticating" could pretty easily subsume things like the
PQsslKeyPassHook if we wanted.)

The PQAUTHDATA_OAUTH_BEARER_TOKEN case is the one that replaces the
flow entirely, as discussed upthread. Your application gets the
discovery URI and the requested scope for the connection. It can then
either delegate back to libpq (e.g. if the issuer isn't one it can
help with), immediately return a token (e.g. if one is already cached
for the current user), or install a nonblocking callback to implement
a custom async flow. When the connection is closed (or fails), the
hook provides a cleanup function to free any resources it may have
allocated.

Thanks,
--Jacob

Attachments:

since-v11.diff.txttext/plain; charset=US-ASCII; name=since-v11.diff.txtDownload
1:  36409a76ce = 1:  0ff053cf31 common/jsonapi: support FRONTEND clients
2:  1356b729db ! 2:  26a341e1de libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
             Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
     
         The OAuth issuer must support device authorization. No other OAuth flows
    -    are currently implemented.
    +    are currently implemented (but clients may provide their own flows; see
    +    below).
     
         The client implementation requires either libcurl or libiddawc and their
         development headers. Pass `curl` or `iddawc` to --with-oauth/-Doauth
    @@ Commit message
     
         Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!
     
    +    = PQauthDataHook =
    +
    +    Clients may override two pieces of OAuth handling using the new
    +    PQsetAuthDataHook():
    +
    +    - PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
    +      standard error when using the builtin device authorization flow
    +
    +    - PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
    +      custom asynchronous implementation
    +
    +    In general, a hook implementation should examine the incoming `type` to
    +    decide whether or not to handle a specific piece of authdata; if not, it
    +    should delegate to the previous hook in the chain (retrievable via
    +    PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
    +    follow the authdata-specific instructions. Returning an integer < 0
    +    signals an error condition and abandons the connection attempt.
    +
    +    == PQAUTHDATA_PROMPT_OAUTH_DEVICE ==
    +
    +    The hook should display the device prompt (URL + code) using whatever
    +    method it prefers.
    +
    +    == PQAUTHDATA_OAUTH_BEARER_TOKEN ==
    +
    +    The hook should either directly return a Bearer token for the current
    +    user/issuer/scope combination, if one is available without blocking, or
    +    else set up an asynchronous callback to retrieve one. See the
    +    documentation for PQoauthBearerRequest.
    +
         Several TODOs:
         - don't retry forever if the server won't accept our token
         - perform several sanity checks on the OAuth issuer's responses
         - handle cases where the client has been set up with an issuer and
           scope, but the Postgres server wants to use something different
         - improve error debuggability during the OAuth handshake
    -    - migrate JSON parsing to the new JSON_SEM_ACTION_FAILED API convention
         - fix libcurl initialization thread-safety
         - harden the libcurl flow implementation
    -    - figure out how to report the user code and URL without the notice
    -      processor
    +    - figure out pgsocket/int difference on Windows
         - ...and more.
     
      ## configure ##
    @@ src/interfaces/libpq/Makefile: endif
      SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
      endif
     
    + ## src/interfaces/libpq/exports.txt ##
    +@@ src/interfaces/libpq/exports.txt: PQclosePrepared           188
    + PQclosePortal             189
    + PQsendClosePrepared       190
    + PQsendClosePortal         191
    ++PQsetAuthDataHook         192
    ++PQgetAuthDataHook         193
    ++PQdefaultAuthDataHook     194
    +
      ## src/interfaces/libpq/fe-auth-oauth-curl.c (new) ##
     @@
     +/*-------------------------------------------------------------------------
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +#include <unistd.h>
     +
     +#include "common/jsonapi.h"
    ++#include "fe-auth.h"
     +#include "fe-auth-oauth.h"
     +#include "libpq-int.h"
     +#include "mb/pg_wchar.h"
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	struct provider		provider;
     +	struct device_authz	authz;
    ++
    ++	bool		user_prompted;	/* have we already sent the authz prompt? */
     +};
     +
     +/*
    -+ * Exported function to free the async_ctx, which is stored directly on the
    -+ * PGconn. This is called during pqDropConnection() so that we don't leak
    -+ * resources even if PQconnectPoll() never calls us back.
    ++ * Frees the async_ctx, which is stored directly on the PGconn. This is called
    ++ * during pqDropConnection() so that we don't leak resources even if
    ++ * PQconnectPoll() never calls us back.
     + *
     + * TODO: we should probably call this at the end of a successful authentication,
     + * too, to proactively free up resources.
     + */
    -+void
    -+pg_fe_free_oauth_async_ctx(PGconn *conn, void *ctx)
    ++static void
    ++free_curl_async_ctx(PGconn *conn, void *ctx)
     +{
     +	struct async_ctx *actx = ctx;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +#endif
     +
     +		state->async_ctx = actx;
    ++		state->free_async_ctx = free_curl_async_ctx;
     +
     +		initPQExpBuffer(&actx->work_data);
     +		initPQExpBuffer(&actx->errbuf);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			if (!finish_token_request(actx, &tok))
     +				goto error_return;
     +
    -+			/*
    -+			 * Now that we know the token endpoint isn't broken, give the user
    -+			 * the login instructions.
    -+			 *
    -+			 * TODO: allow client applications to override this handling.
    -+			 */
    -+			fprintf(stderr, "Visit %s and enter the code: %s",
    -+					actx->authz.verification_uri, actx->authz.user_code);
    ++			if (!actx->user_prompted)
    ++			{
    ++				int			res;
    ++				PQpromptOAuthDevice prompt = {
    ++					.verification_uri = actx->authz.verification_uri,
    ++					.user_code = actx->authz.user_code,
    ++					/* TODO: optional fields */
    ++				};
    ++
    ++				/*
    ++				 * Now that we know the token endpoint isn't broken, give the
    ++				 * user the login instructions.
    ++				 */
    ++				res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
    ++									 &prompt);
    ++
    ++				if (!res)
    ++				{
    ++					fprintf(stderr, "Visit %s and enter the code: %s",
    ++							prompt.verification_uri, prompt.user_code);
    ++				}
    ++				else if (res < 0)
    ++				{
    ++					actx_error(actx, "device prompt failed");
    ++					goto error_return;
    ++				}
    ++
    ++				actx->user_prompted = true;
    ++			}
     +
     +			if (tok.access_token)
     +			{
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +
     +		if (!user_prompted)
     +		{
    ++			int			res;
    ++			PQpromptOAuthDevice prompt = {
    ++				.verification_uri = verification_uri,
    ++				.user_code = user_code,
    ++				/* TODO: optional fields */
    ++			};
    ++
     +			/*
     +			 * Now that we know the token endpoint isn't broken, give the user
     +			 * the login instructions.
     +			 */
    -+			pqInternalNotice(&conn->noticeHooks,
    -+							 "Visit %s and enter the code: %s",
    -+							 verification_uri, user_code);
    ++			res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
    ++								 &prompt);
    ++
    ++			if (!res)
    ++			{
    ++				fprintf(stderr, "Visit %s and enter the code: %s",
    ++						prompt.verification_uri, prompt.user_code);
    ++			}
    ++			else if (res < 0)
    ++			{
    ++				appendPQExpBufferStr(&conn->errorMessage,
    ++									 libpq_gettext("device prompt failed\n"));
    ++				goto cleanup;
    ++			}
     +
     +			user_prompted = true;
     +		}
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +	/* TODO: actually make this asynchronous */
     +	state->token = run_iddawc_auth_flow(conn, conn->oauth_discovery_uri);
     +	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_FAILED;
    -+}
    -+
    -+void
    -+pg_fe_free_oauth_async_ctx(PGconn *conn, void *ctx)
    -+{
    -+	/* We currently have no async_ctx, so this should not be called. */
    -+	Assert(false);
     +}
     
      ## src/interfaces/libpq/fe-auth-oauth.c (new) ##
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	Assert(sasl_mechanism != NULL);
     +	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
     +
    -+	state = malloc(sizeof(*state));
    ++	state = calloc(1, sizeof(*state));
     +	if (!state)
     +		return NULL;
     +
     +	state->state = FE_OAUTH_INIT;
     +	state->conn = conn;
    -+	state->token = NULL;
    -+	state->async_ctx = NULL;
     +
     +	return state;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	struct json_ctx	   *ctx = state;
     +
    -+	if (oauth_json_has_error(ctx))
    -+		return JSON_SUCCESS; /* short-circuit */
    -+
     +	if (ctx->target_field)
     +	{
     +		Assert(ctx->nested == 1);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +
     +	++ctx->nested;
    -+	return JSON_SUCCESS; /* TODO: switch all of these to JSON_SEM_ACTION_FAILED */
    ++	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
     +}
     +
     +static JsonParseErrorType
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	struct json_ctx	   *ctx = state;
     +
    -+	if (oauth_json_has_error(ctx))
    -+		return JSON_SUCCESS; /* short-circuit */
    -+
     +	--ctx->nested;
     +	return JSON_SUCCESS;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	struct json_ctx	   *ctx = state;
     +
    -+	if (oauth_json_has_error(ctx))
    -+	{
    -+		/* short-circuit */
    -+		free(name);
    -+		return JSON_SUCCESS;
    -+	}
    -+
     +	if (ctx->nested == 1)
     +	{
     +		if (!strcmp(name, ERROR_STATUS_FIELD))
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	struct json_ctx	   *ctx = state;
     +
    -+	if (oauth_json_has_error(ctx))
    -+		return JSON_SUCCESS; /* short-circuit */
    -+
     +	if (!ctx->nested)
     +	{
     +		ctx->errmsg = libpq_gettext("top-level element must be an object");
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +							 ctx->target_field_name);
     +	}
     +
    -+	return JSON_SUCCESS;
    ++	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
     +}
     +
     +static JsonParseErrorType
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	struct json_ctx	   *ctx = state;
     +
    -+	if (oauth_json_has_error(ctx))
    -+	{
    -+		/* short-circuit */
    -+		free(token);
    -+		return JSON_SUCCESS;
    -+	}
    -+
     +	if (!ctx->nested)
     +	{
     +		ctx->errmsg = libpq_gettext("top-level element must be an object");
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +
     +	free(token);
    -+	return JSON_SUCCESS;
    ++	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
     +}
     +
     +static bool
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	err = pg_parse_json(&lex, &sem);
     +
    -+	if (err != JSON_SUCCESS)
    -+	{
    -+		errmsg = json_errdetail(err, &lex);
    -+	}
    -+	else if (PQExpBufferDataBroken(ctx.errbuf))
    -+	{
    -+		errmsg = libpq_gettext("out of memory");
    -+	}
    -+	else if (ctx.errmsg)
    ++	if (err == JSON_SEM_ACTION_FAILED)
     +	{
    -+		errmsg = ctx.errmsg;
    ++		if (PQExpBufferDataBroken(ctx.errbuf))
    ++			errmsg = libpq_gettext("out of memory");
    ++		else if (ctx.errmsg)
    ++			errmsg = ctx.errmsg;
    ++		else
    ++		{
    ++			/*
    ++			 * Developer error: one of the action callbacks didn't call
    ++			 * oauth_json_set_error() before erroring out.
    ++			 */
    ++			Assert(oauth_json_has_error(&ctx));
    ++			errmsg = "<unexpected empty error>";
    ++		}
     +	}
    ++	else if (err != JSON_SUCCESS)
    ++		errmsg = json_errdetail(err, &lex);
     +
     +	if (errmsg)
     +		appendPQExpBuffer(&conn->errorMessage,
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return true;
     +}
     +
    ++static void
    ++free_request(PGconn *conn, void *vreq)
    ++{
    ++	PQoauthBearerRequest *request = vreq;
    ++
    ++	if (request->cleanup)
    ++		request->cleanup(conn, request);
    ++
    ++	free(request);
    ++}
    ++
    ++static PostgresPollingStatusType
    ++run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
    ++{
    ++	fe_oauth_state *state = conn->sasl_state;
    ++	PQoauthBearerRequest *request = state->async_ctx;
    ++	PostgresPollingStatusType status;
    ++
    ++	if (!request->async)
    ++	{
    ++		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
    ++		return PGRES_POLLING_FAILED;
    ++	}
    ++
    ++	status = request->async(conn, request, altsock);
    ++	if (status == PGRES_POLLING_FAILED)
    ++	{
    ++		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
    ++		return status;
    ++	}
    ++	else if (status == PGRES_POLLING_OK)
    ++	{
    ++		/*
    ++		 * We already have a token, so copy it into the state. (We can't
    ++		 * hold onto the original string, since it may not be safe for us to
    ++		 * free() it.)
    ++		 */
    ++		PQExpBufferData	token;
    ++
    ++		if (!request->token)
    ++		{
    ++			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
    ++			return PGRES_POLLING_FAILED;
    ++		}
    ++
    ++		initPQExpBuffer(&token);
    ++		appendPQExpBuffer(&token, "Bearer %s", request->token);
    ++
    ++		if (PQExpBufferDataBroken(token))
    ++		{
    ++			libpq_append_conn_error(conn, "out of memory");
    ++			return PGRES_POLLING_FAILED;
    ++		}
    ++
    ++		state->token = token.data;
    ++		return PGRES_POLLING_OK;
    ++	}
    ++
    ++	/* TODO: what if no altsock was set? */
    ++	return status;
    ++}
    ++
    ++static bool
    ++setup_token_request(PGconn *conn, fe_oauth_state *state)
    ++{
    ++	int			res;
    ++	PQoauthBearerRequest request = {
    ++		.openid_configuration = conn->oauth_discovery_uri,
    ++		.scope = conn->oauth_scope,
    ++	};
    ++
    ++	Assert(request.openid_configuration);
    ++
    ++	/* The client may have overridden the OAuth flow. */
    ++	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
    ++	if (res > 0)
    ++	{
    ++		PQoauthBearerRequest *request_copy;
    ++
    ++		if (request.token)
    ++		{
    ++			/*
    ++			 * We already have a token, so copy it into the state. (We can't
    ++			 * hold onto the original string, since it may not be safe for us to
    ++			 * free() it.)
    ++			 */
    ++			PQExpBufferData	token;
    ++
    ++			initPQExpBuffer(&token);
    ++			appendPQExpBuffer(&token, "Bearer %s", request.token);
    ++
    ++			if (PQExpBufferDataBroken(token))
    ++			{
    ++				libpq_append_conn_error(conn, "out of memory");
    ++				goto fail;
    ++			}
    ++
    ++			state->token = token.data;
    ++
    ++			/* short-circuit */
    ++			if (request.cleanup)
    ++				request.cleanup(conn, &request);
    ++			return true;
    ++		}
    ++
    ++		request_copy = malloc(sizeof(*request_copy));
    ++		if (!request_copy)
    ++		{
    ++			libpq_append_conn_error(conn, "out of memory");
    ++			goto fail;
    ++		}
    ++
    ++		memcpy(request_copy, &request, sizeof(request));
    ++
    ++		conn->async_auth = run_user_oauth_flow;
    ++		state->async_ctx = request_copy;
    ++		state->free_async_ctx = free_request;
    ++	}
    ++	else if (res < 0)
    ++	{
    ++		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
    ++		goto fail;
    ++	}
    ++	else
    ++	{
    ++		/* Use our built-in OAuth flow. */
    ++		conn->async_auth = pg_fe_run_oauth_flow;
    ++	}
    ++
    ++	return true;
    ++
    ++fail:
    ++	if (request.cleanup)
    ++		request.cleanup(conn, &request);
    ++	return false;
    ++}
    ++
     +static bool
     +derive_discovery_uri(PGconn *conn)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +				}
     +
     +				/*
    -+				 * At this point we have to hand the connection over to our
    -+				 * OAuth implementation. This involves a number of HTTP
    -+				 * connections and timed waits, so we escape the synchronous
    -+				 * auth processing and tell PQconnectPoll to transfer control to
    -+				 * our async implementation.
    ++				 * Decide whether we're using a user-provided OAuth flow, or the
    ++				 * one we have built in.
     +				 */
    -+				conn->async_auth = pg_fe_run_oauth_flow;
    -+				state->state = FE_OAUTH_REQUESTING_TOKEN;
    -+				return SASL_ASYNC;
    -+			}
    ++				if (!setup_token_request(conn, state))
    ++					return SASL_FAILED;
     +
    -+			/*
    -+			 * If we don't have a discovery URI to be able to request a token,
    -+			 * we ask the server for one explicitly with an empty token. This
    -+			 * doesn't require any asynchronous work.
    -+			 */
    -+			state->token = strdup("");
    -+			if (!state->token)
    ++				if (state->token)
    ++				{
    ++					/*
    ++					 * A really smart user implementation may have already given
    ++					 * us the token (e.g. if there was an unexpired copy already
    ++					 * cached). In that case, we can just fall through.
    ++					 */
    ++				}
    ++				else
    ++				{
    ++					/*
    ++					 * Otherwise, we have to hand the connection over to our
    ++					 * OAuth implementation. This involves a number of HTTP
    ++					 * connections and timed waits, so we escape the synchronous
    ++					 * auth processing and tell PQconnectPoll to transfer
    ++					 * control to our async implementation.
    ++					 */
    ++					Assert(conn->async_auth); /* should have been set already */
    ++					state->state = FE_OAUTH_REQUESTING_TOKEN;
    ++					return SASL_ASYNC;
    ++				}
    ++			}
    ++			else
     +			{
    -+				libpq_append_conn_error(conn, "out of memory");
    -+				return SASL_FAILED;
    ++				/*
    ++				 * If we don't have a discovery URI to be able to request a
    ++				 * token, we ask the server for one explicitly with an empty
    ++				 * token. This doesn't require any asynchronous work.
    ++				 */
    ++				state->token = strdup("");
    ++				if (!state->token)
    ++				{
    ++					libpq_append_conn_error(conn, "out of memory");
    ++					return SASL_FAILED;
    ++				}
     +			}
     +
     +			/* fall through */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	free(state->token);
     +	if (state->async_ctx)
    -+		pg_fe_free_oauth_async_ctx(state->conn, state->async_ctx);
    ++		state->free_async_ctx(state->conn, state->async_ctx);
     +
     +	free(state);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +
     +	PGconn	   *conn;
     +	char	   *token;
    ++
     +	void	   *async_ctx;
    ++	void	  (*free_async_ctx) (PGconn *conn, void *ctx);
     +} fe_oauth_state;
     +
     +extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
    -+extern void pg_fe_free_oauth_async_ctx(PGconn *conn, void *ctx);
     +
     +#endif							/* FE_AUTH_OAUTH_H */
     
    @@ src/interfaces/libpq/fe-auth.c: pg_fe_sendauth(AuthRequest areq, int payloadlen,
      			{
      				/* Use this message if pg_SASL_continue didn't supply one */
      				if (conn->errorMessage.len == oldmsglen)
    +@@ src/interfaces/libpq/fe-auth.c: PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
    + 
    + 	return crypt_pwd;
    + }
    ++
    ++PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
    ++
    ++PQauthDataHook_type
    ++PQgetAuthDataHook(void)
    ++{
    ++	return PQauthDataHook;
    ++}
    ++
    ++void
    ++PQsetAuthDataHook(PQauthDataHook_type hook)
    ++{
    ++	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
    ++}
    ++
    ++int
    ++PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
    ++{
    ++	return 0; /* handle nothing */
    ++}
     
      ## src/interfaces/libpq/fe-auth.h ##
     @@
    + #include "libpq-int.h"
      
      
    ++extern PQauthDataHook_type PQauthDataHook;
    ++
    ++
      /* Prototypes for functions in fe-auth.c */
     -extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
     +extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +		case CONNECTION_AUTHENTICATING:
     +			{
     +				PostgresPollingStatusType status;
    -+				pgsocket	altsock;
    ++				pgsocket	altsock = PGINVALID_SOCKET;
     +
     +				if (!conn->async_auth)
     +				{
    @@ src/interfaces/libpq/fe-misc.c: pqSocketCheck(PGconn *conn, int forRead, int for
      	if (result < 0)
     
      ## src/interfaces/libpq/libpq-fe.h ##
    +@@ src/interfaces/libpq/libpq-fe.h: extern "C"
    + #define LIBPQ_HAS_TRACE_FLAGS 1
    + /* Indicates that PQsslAttribute(NULL, "library") is useful */
    + #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
    ++/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
    ++#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
    + 
    + /*
    +  * Option flags for PQcopyResult
     @@ src/interfaces/libpq/libpq-fe.h: typedef enum
      	CONNECTION_CONSUME,			/* Consuming any extra messages. */
      	CONNECTION_GSS_STARTUP,		/* Negotiating GSSAPI. */
    @@ src/interfaces/libpq/libpq-fe.h: typedef enum
      } ConnStatusType;
      
      typedef enum
    +@@ src/interfaces/libpq/libpq-fe.h: typedef enum
    + 	PQ_PIPELINE_ABORTED
    + } PGpipelineStatus;
    + 
    ++typedef enum
    ++{
    ++	PQAUTHDATA_PROMPT_OAUTH_DEVICE,	/* user must visit a device-authorization URL */
    ++	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
    ++} PGAuthData;
    ++
    + /* PGconn encapsulates a connection to the backend.
    +  * The contents of this struct are not supposed to be known to applications.
    +  */
    +@@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
    + 
    + /* === in fe-auth.c === */
    + 
    ++typedef struct _PQpromptOAuthDevice
    ++{
    ++	const char *verification_uri;			/* verification URI to visit */
    ++	const char *user_code;					/* user code to enter */
    ++} PQpromptOAuthDevice;
    ++
    ++typedef struct _PQoauthBearerRequest
    ++{
    ++	/* Hook inputs (constant across all calls) */
    ++	const char * const openid_configuration;	/* OIDC discovery URI */
    ++	const char * const scope;					/* required scope(s), or NULL */
    ++
    ++	/* Hook outputs */
    ++
    ++	/*
    ++	 * Callback implementing a custom asynchronous OAuth flow.
    ++	 *
    ++	 * The callback may return
    ++	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor has
    ++	 *   been stored in *altsock and libpq should wait until it is readable or
    ++	 *   writable before calling back;
    ++	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
    ++	 *   request->token has been set; or
    ++	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
    ++	 *
    ++	 * This callback is optional. If the token can be obtained without blocking
    ++	 * during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN hook, it
    ++	 * may be returned directly, but one of request->async or request->token
    ++	 * must be set by the hook.
    ++	 */
    ++	PostgresPollingStatusType (*async) (PGconn *conn,
    ++										struct _PQoauthBearerRequest *request,
    ++										int *altsock);
    ++
    ++	/*
    ++	 * Callback to clean up custom allocations. A hook implementation may use
    ++	 * this to free request->token and any resources in request->user.
    ++	 *
    ++	 * This is technically optional, but highly recommended, because there is no
    ++	 * other indication as to when it is safe to free the token.
    ++	 */
    ++	void	  (*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
    ++
    ++	/*
    ++	 * The hook should set this to the Bearer token contents for the connection,
    ++	 * once the flow is completed.  The token contents must remain available to
    ++	 * libpq until the hook's cleanup callback is called.
    ++	 */
    ++	char	   *token;
    ++
    ++	/*
    ++	 * Hook-defined data. libpq will not modify this pointer across calls to the
    ++	 * async callback, so it can be used to keep track of application-specific
    ++	 * state. Resources allocated here should be freed by the cleanup callback.
    ++	 */
    ++	void	   *user;
    ++} PQoauthBearerRequest;
    ++
    + extern char *PQencryptPassword(const char *passwd, const char *user);
    + extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
    + 
    ++typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
    ++extern void	PQsetAuthDataHook(PQauthDataHook_type hook);
    ++extern PQauthDataHook_type PQgetAuthDataHook(void);
    ++extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
    ++
    + /* === in encnames.c === */
    + 
    + extern int	pg_char_to_encoding(const char *name);
     
      ## src/interfaces/libpq/libpq-int.h ##
     @@ src/interfaces/libpq/libpq-int.h: typedef struct pg_conn_host
3:  863a49f863 = 3:  d02bc9a466 backend: add OAUTHBEARER SASL mechanism
4:  348554e5f4 ! 4:  fded01d22b Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +#
     +
     +import base64
    ++import collections
    ++import ctypes
     +import http.server
     +import json
    ++import logging
    ++import os
    ++import platform
     +import secrets
     +import sys
     +import threading
     +import time
    ++import traceback
    ++import types
     +import urllib.parse
     +from numbers import Number
     +
    @@ src/test/python/client/test_oauth.py (new)
     +
     +from .conftest import BLOCKING_TIMEOUT
     +
    ++if platform.system() == "Darwin":
    ++    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
    ++else:
    ++    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
    ++
     +
     +def finish_handshake(conn):
     +    """
    @@ src/test/python/client/test_oauth.py (new)
     +        thread.stop()
     +
     +
    ++#
    ++# PQAuthDataHook implementation, matching libpq.h
    ++#
    ++
    ++
    ++PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
    ++PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
    ++
    ++PGRES_POLLING_FAILED = 0
    ++PGRES_POLLING_READING = 1
    ++PGRES_POLLING_WRITING = 2
    ++PGRES_POLLING_OK = 3
    ++
    ++
    ++class PQPromptOAuthDevice(ctypes.Structure):
    ++    _fields_ = [
    ++        ("verification_uri", ctypes.c_char_p),
    ++        ("user_code", ctypes.c_char_p),
    ++    ]
    ++
    ++
    ++class PQOAuthBearerRequest(ctypes.Structure):
    ++    pass
    ++
    ++
    ++PQOAuthBearerRequest._fields_ = [
    ++    ("openid_configuration", ctypes.c_char_p),
    ++    ("scope", ctypes.c_char_p),
    ++    (
    ++        "async_",
    ++        ctypes.CFUNCTYPE(
    ++            ctypes.c_int,
    ++            ctypes.c_void_p,
    ++            ctypes.POINTER(PQOAuthBearerRequest),
    ++            ctypes.POINTER(ctypes.c_int),
    ++        ),
    ++    ),
    ++    (
    ++        "cleanup",
    ++        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
    ++    ),
    ++    ("token", ctypes.c_char_p),
    ++    ("user", ctypes.c_void_p),
    ++]
    ++
    ++
    ++@pytest.fixture
    ++def auth_data_cb():
    ++    """
    ++    Tracks calls to the libpq authdata hook. The yielded object contains a calls
    ++    member that records the data sent to the hook. If a test needs to perform
    ++    custom actions during a call, it can set the yielded object's impl callback;
    ++    beware that the callback takes place on a different thread.
    ++
    ++    This is done differently from the other callback implementations on purpose.
    ++    For the others, we can declare test-specific callbacks and have them perform
    ++    direct assertions on the data they receive. But that won't work for a C
    ++    callback, because there's no way for us to bubble up the assertion through
    ++    libpq. Instead, this mock-style approach is taken, where we just record the
    ++    calls and let the test examine them later.
    ++    """
    ++
    ++    class _Call:
    ++        pass
    ++
    ++    class _cb(object):
    ++        def __init__(self):
    ++            self.calls = []
    ++
    ++    cb = _cb()
    ++    cb.impl = None
    ++
    ++    # The callback will occur on a different thread, so protect the cb object.
    ++    cb_lock = threading.Lock()
    ++
    ++    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
    ++    def auth_data_cb(typ, pgconn, data):
    ++        handle_by_default = 0  # does an implementation have to be provided?
    ++
    ++        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
    ++            cls = PQPromptOAuthDevice
    ++            handle_by_default = 1
    ++        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
    ++            cls = PQOAuthBearerRequest
    ++        else:
    ++            return 0
    ++
    ++        call = _Call()
    ++        call.type = typ
    ++
    ++        # The lifetime of the underlying data being pointed to doesn't
    ++        # necessarily match the lifetime of the Python object, so we can't
    ++        # reference a Structure's fields after returning. Explicitly copy the
    ++        # contents over, field by field.
    ++        data = ctypes.cast(data, ctypes.POINTER(cls))
    ++        for name, _ in cls._fields_:
    ++            setattr(call, name, getattr(data.contents, name))
    ++
    ++        with cb_lock:
    ++            cb.calls.append(call)
    ++
    ++        if cb.impl:
    ++            # Pass control back to the test.
    ++            try:
    ++                return cb.impl(typ, pgconn, data.contents)
    ++            except Exception:
    ++                # This can't escape into the C stack, but we can fail the flow
    ++                # and hope the traceback gives us enough detail.
    ++                logging.error(
    ++                    "Exception during authdata hook callback:\n"
    ++                    + traceback.format_exc()
    ++                )
    ++                return -1
    ++
    ++        return handle_by_default
    ++
    ++    libpq.PQsetAuthDataHook(auth_data_cb)
    ++    try:
    ++        yield cb
    ++    finally:
    ++        # The callback is about to go out of scope, so make sure libpq is
    ++        # disconnected from it. (We wouldn't want to accidentally influence
    ++        # later tests anyway.)
    ++        libpq.PQsetAuthDataHook(None)
    ++
    ++
     +@pytest.mark.parametrize("secret", [None, "", "hunter2"])
     +@pytest.mark.parametrize("scope", [None, "", "openid email"])
     +@pytest.mark.parametrize("retries", [0, 1])
    @@ src/test/python/client/test_oauth.py (new)
     +    ],
     +)
     +def test_oauth_with_explicit_issuer(
    -+    capfd, accept, openid_provider, asynchronous, retries, scope, secret
    ++    accept, openid_provider, asynchronous, retries, scope, secret, auth_data_cb
     +):
     +    client_id = secrets.token_hex()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +            finish_handshake(conn)
     +
     +    if retries:
    -+        # Finally, make sure that the client prompted the user with the expected
    -+        # authorization URL and user code.
    -+        expected = f"Visit {verification_url} and enter the code: {user_code}"
    -+        _, stderr = capfd.readouterr()
    -+        assert expected in stderr
    ++        # Finally, make sure that the client prompted the user once with the
    ++        # expected authorization URL and user code.
    ++        assert len(auth_data_cb.calls) == 2
    ++
    ++        # First call should have been for a custom flow, which we ignored.
    ++        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
    ++
    ++        # Second call is for our user prompt.
    ++        call = auth_data_cb.calls[1]
    ++        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
    ++        assert call.verification_uri.decode() == verification_url
    ++        assert call.user_code.decode() == user_code
     +
     +
     +def expect_disconnected_handshake(sock):
    @@ src/test/python/client/test_oauth.py (new)
     +            finish_handshake(conn)
     +
     +
    ++@pytest.fixture
    ++def self_pipe():
    ++    """
    ++    Yields a pipe fd pair.
    ++    """
    ++
    ++    class _Pipe:
    ++        pass
    ++
    ++    p = _Pipe()
    ++    p.readfd, p.writefd = os.pipe()
    ++
    ++    try:
    ++        yield p
    ++    finally:
    ++        os.close(p.readfd)
    ++        os.close(p.writefd)
    ++
    ++
    ++@pytest.mark.parametrize("scope", [None, "", "openid email"])
    ++@pytest.mark.parametrize(
    ++    "retries",
    ++    [
    ++        -1,  # no async callback
    ++        0,  # async callback immediately returns token
    ++        1,  # async callback waits on altsock once
    ++        2,  # async callback waits on altsock twice
    ++    ],
    ++)
    ++@pytest.mark.parametrize(
    ++    "asynchronous",
    ++    [
    ++        pytest.param(False, id="synchronous"),
    ++        pytest.param(True, id="asynchronous"),
    ++    ],
    ++)
    ++def test_user_defined_flow(
    ++    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
    ++):
    ++    issuer = "http://localhost"
    ++    discovery_uri = issuer + "/.well-known/openid-configuration"
    ++    access_token = secrets.token_urlsafe()
    ++
    ++    sock, _ = accept(
    ++        oauth_issuer=issuer,
    ++        oauth_client_id="some-id",
    ++        oauth_scope=scope,
    ++        async_=asynchronous,
    ++    )
    ++
    ++    # Track callbacks.
    ++    attempts = 0
    ++    wakeup_called = False
    ++    cleanup_calls = 0
    ++    lock = threading.Lock()
    ++
    ++    def wakeup():
    ++        """Writes a byte to the wakeup pipe."""
    ++        nonlocal wakeup_called
    ++        with lock:
    ++            wakeup_called = True
    ++            os.write(self_pipe.writefd, b"\0")
    ++
    ++    def get_token(pgconn, request, p_altsock):
    ++        """
    ++        Async token callback. While attempts < retries, libpq will be instructed
    ++        to wait on the self_pipe. When attempts == retries, the token will be
    ++        set.
    ++
    ++        Note that assertions and exceptions raised here are allowed but not very
    ++        helpful, since they can't bubble through the libpq stack to be collected
    ++        by the test suite. Try not to rely too heavily on them.
    ++        """
    ++        # Make sure libpq passed our user data through.
    ++        assert request.user == 42
    ++
    ++        with lock:
    ++            nonlocal attempts, wakeup_called
    ++
    ++            if attempts:
    ++                # If we've already started the timer, we shouldn't get a
    ++                # call back before it trips.
    ++                assert wakeup_called, "authdata hook was called before the timer"
    ++
    ++                # Drain the wakeup byte.
    ++                os.read(self_pipe.readfd, 1)
    ++
    ++            if attempts < retries:
    ++                attempts += 1
    ++
    ++                # Wake up the client in a little bit of time.
    ++                wakeup_called = False
    ++                threading.Timer(0.1, wakeup).start()
    ++
    ++                # Tell libpq to wait on the other end of the wakeup pipe.
    ++                p_altsock[0] = self_pipe.readfd
    ++                return PGRES_POLLING_READING
    ++
    ++        # Done!
    ++        request.token = access_token.encode()
    ++        return PGRES_POLLING_OK
    ++
    ++    @ctypes.CFUNCTYPE(
    ++        ctypes.c_int,
    ++        ctypes.c_void_p,
    ++        ctypes.POINTER(PQOAuthBearerRequest),
    ++        ctypes.POINTER(ctypes.c_int),
    ++    )
    ++    def get_token_wrapper(pgconn, p_request, p_altsock):
    ++        """
    ++        Translation layer between C and Python for the async callback.
    ++        Assertions and exceptions will be swallowed at the boundary, so make
    ++        sure they don't escape here.
    ++        """
    ++        try:
    ++            return get_token(pgconn, p_request.contents, p_altsock)
    ++        except Exception:
    ++            logging.error("Exception during async callback:\n" + traceback.format_exc())
    ++            return PGRES_POLLING_FAILED
    ++
    ++    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
    ++    def cleanup(pgconn, p_request):
    ++        """
    ++        Should be called exactly once per connection.
    ++        """
    ++        nonlocal cleanup_calls
    ++        with lock:
    ++            cleanup_calls += 1
    ++
    ++    def bearer_hook(typ, pgconn, request):
    ++        """
    ++        Implementation of the PQAuthDataHook, which either sets up an async
    ++        callback or returns the token directly, depending on the value of
    ++        retries.
    ++
    ++        As above, try not to rely too much on assertions/exceptions here.
    ++        """
    ++        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
    ++        request.cleanup = cleanup
    ++
    ++        if retries < 0:
    ++            # Special case: return a token immediately without a callback.
    ++            request.token = access_token.encode()
    ++            return 1
    ++
    ++        # Tell libpq to call us back.
    ++        request.async_ = get_token_wrapper
    ++        request.user = ctypes.c_void_p(42)  # will be checked in the callback
    ++        return 1
    ++
    ++    auth_data_cb.impl = bearer_hook
    ++
    ++    # Now drive the server side.
    ++    with sock:
    ++        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++            # Initiate a handshake, which should result in our custom callback
    ++            # being invoked to fetch the token.
    ++            initial = start_oauth_handshake(conn)
    ++
    ++            # Validate and accept the token.
    ++            auth = get_auth_value(initial)
    ++            assert auth == f"Bearer {access_token}".encode("ascii")
    ++
    ++            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
    ++            finish_handshake(conn)
    ++
    ++    # Check the data provided to the hook.
    ++    assert len(auth_data_cb.calls) == 1
    ++
    ++    call = auth_data_cb.calls[0]
    ++    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
    ++    assert call.openid_configuration.decode() == discovery_uri
    ++    assert call.scope == (None if scope is None else scope.encode())
    ++
    ++    # Make sure we cleaned up after ourselves.
    ++    assert cleanup_calls == 1
    ++
    ++
     +def alt_patterns(*patterns):
     +    """
     +    Just combines multiple alternative regexes into one. It's not very efficient
5:  16d3984a45 ! 5:  38a9691801 squash! Add pytest suite for OAuth
    @@ src/test/python/README: To make quick smoke tests possible, slow tests have been
     +    $ py.test --temp-instance=./tmp_check
     
      ## src/test/python/client/test_oauth.py ##
    -@@
    - import base64
    - import http.server
    - import json
    -+import os
    - import secrets
    - import sys
    - import threading
     @@ src/test/python/client/test_oauth.py: import pq3
      
      from .conftest import BLOCKING_TIMEOUT
    @@ src/test/python/client/test_oauth.py: import pq3
     +    reason="OAuth client tests require --with-oauth support",
     +)
     +
    - 
    - def finish_handshake(conn):
    -     """
    + if platform.system() == "Darwin":
    +     libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
    + else:
     
      ## src/test/python/conftest.py ##
     @@ src/test/python/conftest.py: import os
v12-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/gzip)
v12-0001-common-jsonapi-support-FRONTEND-clients.patch.gz (application/gzip)
v12-0004-Add-pytest-suite-for-OAuth.patch.gz (application/gzip)
v12-0005-squash-Add-pytest-suite-for-OAuth.patch.gz (application/gzip)
v12-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/gzip)
#78 Shlok Kyal
shlok.kyal.oss@gmail.com
In reply to: Jacob Champion (#77)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On Fri, 3 Nov 2023 at 17:14, Jacob Champion <jchampion@timescale.com> wrote:

v12 implements a first draft of a client hook, so applications can
replace either the device prompt or the entire OAuth flow. (Andrey and
Mahendrakar: hopefully this is close to what you need.) It also cleans
up some of the JSON tech debt.

I went through CFbot and found that the build is failing; links:

https://cirrus-ci.com/task/6061898244816896
https://cirrus-ci.com/task/6624848198238208
https://cirrus-ci.com/task/5217473314684928
https://cirrus-ci.com/task/6343373221527552

Just want to make sure you are aware of these failures.

Thanks,
Shlok Kumar Kyal

#79 Jacob Champion
champion.p@gmail.com
In reply to: Shlok Kyal (#78)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Nov 3, 2023 at 5:28 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:

Just want to make sure you are aware of these failures.

Thanks for the nudge! Looks like I need to reconcile with the changes
to JsonLexContext in 1c99cde2. I should be able to get to that next
week; in the meantime I'll mark it Waiting on Author.

--Jacob

#80 Jacob Champion
champion.p@gmail.com
In reply to: Jacob Champion (#79)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Nov 3, 2023 at 4:48 PM Jacob Champion <champion.p@gmail.com> wrote:

Thanks for the nudge! Looks like I need to reconcile with the changes
to JsonLexContext in 1c99cde2. I should be able to get to that next
week; in the meantime I'll mark it Waiting on Author.

v13 rebases over latest. The JsonLexContext changes have simplified
0001 quite a bit, and there's probably a bit more minimization that
could be done.

Unfortunately the configure/Makefile build of libpq now seems to be
pulling in an `exit()` dependency in a way that Meson does not. (Or
maybe Meson isn't checking?) I still need to investigate that
difference and fix it, so I recommend Meson if you're looking to
test-drive a build.

Thanks,
--Jacob

Attachments:

since-v12.diff.txt (text/plain)
1:  e80c124c3d ! 1:  1e17a1059f common/jsonapi: support FRONTEND clients
    @@ Commit message
         memory owned by the JsonLexContext, so clients don't need to worry about
         freeing it.
     
    -    For convenience, the backend now has destroyJsonLexContext() to mirror
    -    other create/destroy APIs. The frontend has init/term versions of the
    -    API to handle stack-allocated JsonLexContexts.
    -
         We can now partially revert b44669b2ca, now that json_errdetail() works
         correctly.
     
    - ## src/backend/utils/adt/jsonfuncs.c ##
    -@@ src/backend/utils/adt/jsonfuncs.c: json_object_keys(PG_FUNCTION_ARGS)
    - 		pg_parse_json_or_ereport(lex, sem);
    - 		/* keys are now in state->result */
    - 
    --		pfree(lex->strval->data);
    --		pfree(lex->strval);
    --		pfree(lex);
    -+		destroyJsonLexContext(lex);
    - 		pfree(sem);
    - 
    - 		MemoryContextSwitchTo(oldcontext);
    -
      ## src/bin/pg_verifybackup/parse_manifest.c ##
    -@@ src/bin/pg_verifybackup/parse_manifest.c: void
    - json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    - 					size_t size)
    - {
    --	JsonLexContext *lex;
    -+	JsonLexContext lex = {0};
    - 	JsonParseErrorType json_error;
    - 	JsonSemAction sem;
    - 	JsonManifestParseState parse;
    -@@ src/bin/pg_verifybackup/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    - 	parse.state = JM_EXPECT_TOPLEVEL_START;
    - 	parse.saw_version_field = false;
    - 
    --	/* Create a JSON lexing context. */
    --	lex = makeJsonLexContextCstringLen(buffer, size, PG_UTF8, true);
    -+	/* Initialize a JSON lexing context. */
    -+	initJsonLexContextCstringLen(&lex, buffer, size, PG_UTF8, true);
    - 
    - 	/* Set up semantic actions. */
    - 	sem.semstate = &parse;
     @@ src/bin/pg_verifybackup/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    - 	sem.scalar = json_manifest_scalar;
    - 
      	/* Run the actual JSON parser. */
    --	json_error = pg_parse_json(lex, &sem);
    -+	json_error = pg_parse_json(&lex, &sem);
    + 	json_error = pg_parse_json(lex, &sem);
      	if (json_error != JSON_SUCCESS)
     -		json_manifest_parse_failure(context, "parsing failed");
    -+		json_manifest_parse_failure(context, json_errdetail(json_error, &lex));
    ++		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
      	if (parse.state != JM_EXPECT_EOF)
      		json_manifest_parse_failure(context, "manifest ended unexpectedly");
      
    - 	/* Verify the manifest checksum. */
    - 	verify_manifest_checksum(&parse, buffer, size);
    -+
    -+	/* Clean up. */
    -+	termJsonLexContext(&lex);
    - }
    - 
    - /*
     
      ## src/bin/pg_verifybackup/t/005_bad_manifest.pl ##
     @@ src/bin/pg_verifybackup/t/005_bad_manifest.pl: use Test::More;
    @@ src/common/jsonapi.c
      /*
       * The context of the parser is maintained by the recursive descent
       * mechanism, but is passed explicitly to the error reporting routine
    -@@ src/common/jsonapi.c: IsValidJsonNumber(const char *str, int len)
    - 	return (!numeric_error) && (total_len == dummy_lex.input_length);
    - }
    - 
    -+#ifndef FRONTEND
    -+
    - /*
    -  * makeJsonLexContextCstringLen
    -  *
    -- * lex constructor, with or without StringInfo object for de-escaped lexemes.
    -+ * lex constructor, with or without a string object for de-escaped lexemes.
    -  *
    -  * Without is better as it makes the processing faster, so only make one
    -  * if really required.
    -@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(char *json, int len, int encoding, bool need_escape
    - {
    - 	JsonLexContext *lex = palloc0(sizeof(JsonLexContext));
    - 
    -+	initJsonLexContextCstringLen(lex, json, len, encoding, need_escapes);
    -+
    -+	return lex;
    -+}
    -+
    -+void
    -+destroyJsonLexContext(JsonLexContext *lex)
    -+{
    -+	termJsonLexContext(lex);
    -+	pfree(lex);
    -+}
    -+
    -+#endif /* !FRONTEND */
    -+
    -+void
    -+initJsonLexContextCstringLen(JsonLexContext *lex, char *json, int len, int encoding, bool need_escapes)
    -+{
    - 	lex->input = lex->token_terminator = lex->line_start = json;
    - 	lex->line_number = 1;
    - 	lex->input_length = len;
    +@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
      	lex->input_encoding = encoding;
    --	if (need_escapes)
    + 	if (need_escapes)
    + 	{
     -		lex->strval = makeStringInfo();
    --	return lex;
    -+	lex->parse_strval = need_escapes;
    -+	if (lex->parse_strval)
    -+	{
     +		/*
     +		 * This call can fail in FRONTEND code. We defer error handling to time
    -+		 * of use (json_lex_string()) since there's no way to signal failure
    -+		 * here, and we might not need to parse any strings anyway.
    ++		 * of use (json_lex_string()) since we might not need to parse any
    ++		 * strings anyway.
     +		 */
     +		lex->strval = createStrVal();
    -+	}
    + 		lex->flags |= JSONLEX_FREE_STRVAL;
    ++		lex->parse_strval = true;
    + 	}
     +	lex->errormsg = NULL;
    -+}
    -+
    -+void
    -+termJsonLexContext(JsonLexContext *lex)
    -+{
    + 
    + 	return lex;
    + }
    +@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
    + void
    + freeJsonLexContext(JsonLexContext *lex)
    + {
     +	static const JsonLexContext empty = {0};
     +
    -+	if (lex->strval)
    -+	{
    + 	if (lex->flags & JSONLEX_FREE_STRVAL)
    + 	{
     +#ifdef FRONTEND
     +		destroyPQExpBuffer(lex->strval);
     +#else
    -+		pfree(lex->strval->data);
    -+		pfree(lex->strval);
    + 		pfree(lex->strval->data);
    + 		pfree(lex->strval);
     +#endif
     +	}
    -+
     +	if (lex->errormsg)
     +	{
     +#ifdef FRONTEND
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(char *json, int len, int enco
     +		pfree(lex->errormsg->data);
     +		pfree(lex->errormsg);
     +#endif
    -+	}
    -+
    -+	*lex = empty;
    + 	}
    + 	if (lex->flags & JSONLEX_FREE_STRUCT)
    + 		pfree(lex);
    ++	else
    ++		*lex = empty;
      }
      
      /*
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     +	return lex->errormsg->data;
     +}
     
    + ## src/common/meson.build ##
    +@@ src/common/meson.build: common_sources_frontend_static += files(
    + # least cryptohash_openssl.c, hmac_openssl.c depend on it.
    + # controldata_utils.c depends on wait_event_types_h. That's arguably a
    + # layering violation, but ...
    ++#
    ++# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
    ++# appropriately. This seems completely broken.
    + pgcommon = {}
    + pgcommon_variants = {
    +   '_srv': internal_lib_args + {
    ++    'include_directories': include_directories('.'),
    +     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
    +     'dependencies': [backend_common_code],
    +   },
    +   '': default_lib_args + {
    ++    'include_directories': include_directories('../interfaces/libpq', '.'),
    +     'sources': common_sources_frontend_static,
    +     'dependencies': [frontend_common_code],
    +     # Files in libpgcommon.a should use/export the "xxx_private" versions
    +@@ src/common/meson.build: pgcommon_variants = {
    +   },
    +   '_shlib': default_lib_args + {
    +     'pic': true,
    ++    'include_directories': include_directories('../interfaces/libpq', '.'),
    +     'sources': common_sources_frontend_shlib,
    +     'dependencies': [frontend_common_code],
    +   },
    +@@ src/common/meson.build: foreach name, opts : pgcommon_variants
    +     c_args = opts.get('c_args', []) + common_cflags[cflagname]
    +     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
    +       c_pch: pch_c_h,
    +-      include_directories: include_directories('.'),
    +       kwargs: opts + {
    +         'sources': sources,
    +         'c_args': c_args,
    +@@ src/common/meson.build: foreach name, opts : pgcommon_variants
    +   lib = static_library('libpgcommon@0@'.format(name),
    +       link_with: cflag_libs,
    +       c_pch: pch_c_h,
    +-      include_directories: include_directories('.'),
    +       kwargs: opts + {
    +         'dependencies': opts['dependencies'] + [ssl],
    +       }
    +
      ## src/include/common/jsonapi.h ##
     @@
      #ifndef JSONAPI_H
    @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
      	JSON_UNICODE_ESCAPE_FORMAT,
      	JSON_UNICODE_HIGH_ESCAPE,
     @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
    - 	JSON_SEM_ACTION_FAILED		/* error should already be reported */
    + 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
      } JsonParseErrorType;
      
     +/*
    @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
      /*
       * All the fields in this structure should be treated as read-only.
     @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
    - 	int			lex_level;
    + 	bits32		flags;
      	int			line_number;	/* line number, starting from 1 */
      	char	   *line_start;		/* where that line starts within input */
     -	StringInfo	strval;
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
      } JsonLexContext;
      
      typedef JsonParseErrorType (*json_struct_action) (void *state);
    -@@ src/include/common/jsonapi.h: extern PGDLLIMPORT JsonSemAction nullSemAction;
    -  */
    - extern JsonParseErrorType json_count_array_elements(JsonLexContext *lex,
    - 													int *elements);
    -+#ifndef FRONTEND
    - 
    - /*
    -- * constructor for JsonLexContext, with or without strval element.
    -+ * allocating constructor for JsonLexContext, with or without strval element.
    -  * If supplied, the strval element will contain a de-escaped version of
    -  * the lexeme. However, doing this imposes a performance penalty, so
    -  * it should be avoided if the de-escaped lexeme is not required.
    -@@ src/include/common/jsonapi.h: extern JsonLexContext *makeJsonLexContextCstringLen(char *json,
    - 													int encoding,
    - 													bool need_escapes);
    - 
    -+/*
    -+ * Counterpart to makeJsonLexContextCstringLen(): clears and deallocates lex.
    -+ * The context pointer should not be used after this call.
    -+ */
    -+extern void destroyJsonLexContext(JsonLexContext *lex);
    -+
    -+#endif /* !FRONTEND */
    -+
    -+/*
    -+ * stack constructor for JsonLexContext, with or without strval element.
    -+ * If supplied, the strval element will contain a de-escaped version of
    -+ * the lexeme. However, doing this imposes a performance penalty, so
    -+ * it should be avoided if the de-escaped lexeme is not required.
    -+ */
    -+extern void initJsonLexContextCstringLen(JsonLexContext *lex,
    -+										 char *json,
    -+										 int len,
    -+										 int encoding,
    -+										 bool need_escapes);
    -+
    -+/*
    -+ * Counterpart to initJsonLexContextCstringLen(): clears the contents of lex,
    -+ * but does not deallocate lex itself.
    -+ */
    -+extern void termJsonLexContext(JsonLexContext *lex);
    -+
    - /* lex one token */
    - extern JsonParseErrorType json_lex(JsonLexContext *lex);
    - 
2:  02eea9ffe0 ! 2:  e941ba5807 libpq: add OAUTHBEARER SASL mechanism
    @@ src/Makefile.global.in: with_ldap	= @with_ldap@
      with_uuid	= @with_uuid@
      with_zlib	= @with_zlib@
     
    - ## src/common/meson.build ##
    -@@ src/common/meson.build: common_sources_frontend_static += files(
    - # least cryptohash_openssl.c, hmac_openssl.c depend on it.
    - # controldata_utils.c depends on wait_event_types_h. That's arguably a
    - # layering violation, but ...
    -+#
    -+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
    -+# appropriately. This seems completely broken.
    - pgcommon = {}
    - pgcommon_variants = {
    -   '_srv': internal_lib_args + {
    -+    'include_directories': include_directories('.'),
    -     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
    -     'dependencies': [backend_common_code],
    -   },
    -   '': default_lib_args + {
    -+    'include_directories': include_directories('../interfaces/libpq', '.'),
    -     'sources': common_sources_frontend_static,
    -     'dependencies': [frontend_common_code],
    -   },
    -   '_shlib': default_lib_args + {
    -     'pic': true,
    -+    'include_directories': include_directories('../interfaces/libpq', '.'),
    -     'sources': common_sources_frontend_shlib,
    -     'dependencies': [frontend_common_code],
    -   },
    -@@ src/common/meson.build: foreach name, opts : pgcommon_variants
    -     c_args = opts.get('c_args', []) + common_cflags[cflagname]
    -     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
    -       c_pch: pch_c_h,
    --      include_directories: include_directories('.'),
    -       kwargs: opts + {
    -         'sources': sources,
    -         'c_args': c_args,
    -@@ src/common/meson.build: foreach name, opts : pgcommon_variants
    -   lib = static_library('libpgcommon@0@'.format(name),
    -       link_with: cflag_libs,
    -       c_pch: pch_c_h,
    --      include_directories: include_directories('.'),
    -       kwargs: opts + {
    -         'dependencies': opts['dependencies'] + [ssl],
    -       }
    -
      ## src/include/common/oauth-common.h (new) ##
     @@
     +/*-------------------------------------------------------------------------
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		goto cleanup;
     +	}
     +
    -+	initJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
    ++	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
     +
     +	ctx.errbuf = &actx->errbuf;
     +	ctx.fields = fields;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	success = true;
     +
     +cleanup:
    -+	termJsonLexContext(&lex);
    ++	freeJsonLexContext(&lex);
     +	return success;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		return false;
     +	}
     +
    -+	initJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
    ++	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
     +
     +	initPQExpBuffer(&ctx.errbuf);
     +	sem.semstate = &ctx;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	/* Don't need the error buffer or the JSON lexer anymore. */
     +	termPQExpBuffer(&ctx.errbuf);
    -+	termJsonLexContext(&lex);
    ++	freeJsonLexContext(&lex);
     +
     +	if (errmsg)
     +		return false;
3:  b3a731b695 ! 3:  37eaa5ceb6 backend: add OAUTHBEARER SASL mechanism
    @@ src/include/libpq/auth.h: extern PGDLLIMPORT bool pg_gss_accept_delegation;
     
      ## src/include/libpq/hba.h ##
     @@ src/include/libpq/hba.h: typedef enum UserAuth
    - 	uaLDAP,
      	uaCert,
      	uaRADIUS,
    --	uaPeer
    + 	uaPeer,
     -#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
    -+	uaPeer,
    -+	uaOAuth
    ++	uaOAuth,
     +#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
      } UserAuth;
      
4:  b2a570114e = 4:  7177943f0c Add pytest suite for OAuth
5:  48cd916bfe = 5:  bea695c317 squash! Add pytest suite for OAuth
v13-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/gzip)
v13-0005-squash-Add-pytest-suite-for-OAuth.patch.gz (application/gzip)
v13-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/gzip)
v13-0004-Add-pytest-suite-for-OAuth.patch.gz (application/gzip)
v13-0001-common-jsonapi-support-FRONTEND-clients.patch.gz (application/gzip)
#81Andrey Chudnovsky
achudnovskij@gmail.com
In reply to: Jacob Champion (#80)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Jacob,

Wanted to follow up on one of the topics discussed here in the past:
Do you plan to support adding an extension hook to validate the token?

It would allow a more efficient integration than spinning up a separate
process.

Thanks!
Andrey.

On Wed, Nov 8, 2023 at 11:00 AM Jacob Champion <champion.p@gmail.com> wrote:


On Fri, Nov 3, 2023 at 4:48 PM Jacob Champion <champion.p@gmail.com>
wrote:

Thanks for the nudge! Looks like I need to reconcile with the changes
to JsonLexContext in 1c99cde2. I should be able to get to that next
week; in the meantime I'll mark it Waiting on Author.

v13 rebases over latest. The JsonLexContext changes have simplified
0001 quite a bit, and there's probably a bit more minimization that
could be done.

Unfortunately the configure/Makefile build of libpq now seems to be
pulling in an `exit()` dependency in a way that Meson does not. (Or
maybe Meson isn't checking?) I still need to investigate that
difference and fix it, so I recommend Meson if you're looking to
test-drive a build.

Thanks,
--Jacob

#82Jacob Champion
champion.p@gmail.com
In reply to: Andrey Chudnovsky (#81)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Nov 9, 2023 at 5:43 PM Andrey Chudnovsky <achudnovskij@gmail.com> wrote:

Do you plan to support adding an extension hook to validate the token?

It would allow a more efficient integration than spinning up a separate process.

I think an API in the style of archive modules would probably be a
good way to go, yeah.

It's probably not very high on the list of priorities, though, since
the inputs and outputs are going to "look" the same whether you're
inside or outside of the server process. The client side is going to
need the bulk of the work/testing/validation. Speaking of which -- how
is the current PQauthDataHook design doing when paired with MS AAD
(er, Entra now I guess)? I haven't had an Azure test bed available for
a while.

Thanks,
--Jacob

#83Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#80)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 8 Nov 2023, at 20:00, Jacob Champion <champion.p@gmail.com> wrote:

Unfortunately the configure/Makefile build of libpq now seems to be
pulling in an `exit()` dependency in a way that Meson does not.

I believe this comes from libcurl, specifically its ntlm_wb support, which
is often enabled in system and package-manager-provided installations.
There isn't really a fix here apart from requiring a libcurl built without
ntlm_wb support, or adding an exception to the exit() check in the Makefile.

After bringing this up with other curl developers to see if it could be
fixed, we instead decided to deprecate the whole module, as it's quirky and
not used much. That won't help with existing installations, but the module
will at least be deprecated and removed by the time v17 ships, so gating on
a libcurl version shipped after its removal will avoid the problem.

https://github.com/curl/curl/commit/04540f69cfd4bf16e80e7c190b645f1baf505a84

(Or maybe Meson isn't checking?) I still need to investigate that
difference and fix it, so I recommend Meson if you're looking to
test-drive a build.

There is no corresponding check in the Meson build, which seems like a TODO.

--
Daniel Gustafsson

#84Jacob Champion
champion.p@gmail.com
In reply to: Daniel Gustafsson (#83)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Dec 5, 2023 at 1:44 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 8 Nov 2023, at 20:00, Jacob Champion <champion.p@gmail.com> wrote:

Unfortunately the configure/Makefile build of libpq now seems to be
pulling in an `exit()` dependency in a way that Meson does not.

I believe this comes from libcurl, specifically its ntlm_wb support, which
is often enabled in system and package-manager-provided installations.
There isn't really a fix here apart from requiring a libcurl built without
ntlm_wb support, or adding an exception to the exit() check in the Makefile.

After bringing this up with other curl developers to see if it could be
fixed, we instead decided to deprecate the whole module, as it's quirky and
not used much. That won't help with existing installations, but the module
will at least be deprecated and removed by the time v17 ships, so gating on
a libcurl version shipped after its removal will avoid the problem.

https://github.com/curl/curl/commit/04540f69cfd4bf16e80e7c190b645f1baf505a84

Ooh, thank you for looking into that and fixing it!

(Or maybe Meson isn't checking?) I still need to investigate that
difference and fix it, so I recommend Meson if you're looking to
test-drive a build.

There is no corresponding check in the Meson build, which seems like a TODO.

Okay, I'll look into that too when I get time.

Thanks,
--Jacob

#85Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#84)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi all,

v14 rebases over latest and fixes a warning when assertions are
disabled. 0006 is temporary and hacks past a couple of issues I have
not yet root caused -- one of which makes me wonder if 0001 needs to
be considered alongside the recent pg_combinebackup and incremental
JSON work...?

--Jacob

Attachments:

since-v13.diff.txt (text/plain; charset=US-ASCII)
1:  e7f87668ab ! 1:  b6e8358f44 common/jsonapi: support FRONTEND clients
    @@ Commit message
         We can now partially revert b44669b2ca, now that json_errdetail() works
         correctly.
     
    - ## src/bin/pg_verifybackup/parse_manifest.c ##
    -@@ src/bin/pg_verifybackup/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    - 	/* Run the actual JSON parser. */
    - 	json_error = pg_parse_json(lex, &sem);
    - 	if (json_error != JSON_SUCCESS)
    --		json_manifest_parse_failure(context, "parsing failed");
    -+		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
    - 	if (parse.state != JM_EXPECT_EOF)
    - 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
    - 
    -
      ## src/bin/pg_verifybackup/t/005_bad_manifest.pl ##
     @@ src/bin/pg_verifybackup/t/005_bad_manifest.pl: use Test::More;
      my $tempdir = PostgreSQL::Test::Utils::tempdir;
    @@ src/common/Makefile: override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
     +override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
      LIBS += $(PTHREAD_LIBS)
      
    - # If you add objects here, see also src/tools/msvc/Mkvcbuild.pm
    + OBJS_COMMON = \
     
      ## src/common/jsonapi.c ##
     @@
    @@ src/common/meson.build: foreach name, opts : pgcommon_variants
              'dependencies': opts['dependencies'] + [ssl],
            }
     
    + ## src/common/parse_manifest.c ##
    +@@ src/common/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    + 	/* Run the actual JSON parser. */
    + 	json_error = pg_parse_json(lex, &sem);
    + 	if (json_error != JSON_SUCCESS)
    +-		json_manifest_parse_failure(context, "parsing failed");
    ++		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
    + 	if (parse.state != JM_EXPECT_EOF)
    + 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
    + 
    +
      ## src/include/common/jsonapi.h ##
     @@
      #ifndef JSONAPI_H
2:  0ab79a168f ! 2:  5fa08a8033 libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
         - fix libcurl initialization thread-safety
         - harden the libcurl flow implementation
         - figure out pgsocket/int difference on Windows
    +    - fix intermittent failure in the cleanup callback tests (race
    +      condition?)
         - ...and more.
     
      ## configure ##
    @@ src/interfaces/libpq/Makefile: endif
      endif
     
      ## src/interfaces/libpq/exports.txt ##
    -@@ src/interfaces/libpq/exports.txt: PQclosePrepared           188
    - PQclosePortal             189
    - PQsendClosePrepared       190
    +@@ src/interfaces/libpq/exports.txt: PQsendClosePrepared       190
      PQsendClosePortal         191
    -+PQsetAuthDataHook         192
    -+PQgetAuthDataHook         193
    -+PQdefaultAuthDataHook     194
    + PQchangePassword          192
    + PQsendPipelineSync        193
    ++PQsetAuthDataHook         194
    ++PQgetAuthDataHook         195
    ++PQdefaultAuthDataHook     196
     
      ## src/interfaces/libpq/fe-auth-oauth-curl.c (new) ##
     @@
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	cnt = sscanf(interval_str, "%f", &parsed);
     +
    -+	Assert(cnt == 1); /* otherwise the lexer screwed up */
    ++	if (cnt != 1)
    ++	{
    ++		/*
    ++		 * Either the lexer screwed up or our assumption above isn't true, and
    ++		 * either way a developer needs to take a look.
    ++		 */
    ++		Assert(cnt == 1);
    ++		return 1; /* don't fall through in release builds */
    ++	}
    ++
     +	parsed = ceilf(parsed);
     +
     +	if (parsed < 1)
    @@ src/interfaces/libpq/fe-auth.c: pg_fe_sendauth(AuthRequest areq, int payloadlen,
      			{
      				/* Use this message if pg_SASL_continue didn't supply one */
      				if (conn->errorMessage.len == oldmsglen)
    -@@ src/interfaces/libpq/fe-auth.c: PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user,
    - 
    - 	return crypt_pwd;
    +@@ src/interfaces/libpq/fe-auth.c: PQchangePassword(PGconn *conn, const char *user, const char *passwd)
    + 		}
    + 	}
      }
     +
     +PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
      		case CONNECTION_AUTH_OK:
      			{
      				/*
    -@@ src/interfaces/libpq/fe-connect.c: makeEmptyPGconn(void)
    +@@ src/interfaces/libpq/fe-connect.c: pqMakeEmptyPGconn(void)
      	conn->verbosity = PQERRORS_DEFAULT;
      	conn->show_context = PQSHOW_CONTEXT_ERRORS;
      	conn->sock = PGINVALID_SOCKET;
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +
      extern char *PQencryptPassword(const char *passwd, const char *user);
      extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
    + extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
      
     +typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
     +extern void	PQsetAuthDataHook(PQauthDataHook_type hook);
3:  fb0cc3f87e ! 3:  13cf3f80b8 backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/libpq/hba.c: parse_hba_auth_opt(char *name, char *val, HbaLine *hbal
     
      ## src/backend/libpq/meson.build ##
     @@
    - # Copyright (c) 2022-2023, PostgreSQL Global Development Group
    + # Copyright (c) 2022-2024, PostgreSQL Global Development Group
      
      backend_sources += files(
     +  'auth-oauth.c',
4:  153347752c = 4:  83a55ba4eb Add pytest suite for OAuth
5:  8b85e542a7 ! 5:  49a3b2dfd1 squash! Add pytest suite for OAuth
    @@ .cirrus.tasks.yml: task:
        # NB: Intentionally build without -Dllvm. The freebsd image size is already
        # large enough to make VM startup slow, and even without llvm freebsd
     @@ .cirrus.tasks.yml: task:
    -         --buildtype=debug \
    -         -Dcassert=true -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
    +         -Dcassert=true -Dinjection_points=true \
    +         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
              -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
     +        -Doauth=curl \
              -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
-:  ---------- > 6:  a68494323f XXX temporary patches to build and test
v14-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/x-gzip)
v14-0001-common-jsonapi-support-FRONTEND-clients.patch.gz (application/x-gzip)
Attachment: v14-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/x-gzip)
Attachment: v14-0004-Add-pytest-suite-for-OAuth.patch.gz (application/x-gzip)
js0���~�����`k�#�\r�@��:+H�:G��
>s��r>�In�q�*k��h��k��sT���|���sL
����^�$'.$�S6����n~"+}|�9�\����������
��A��"�r��2U����5FdW���"P���A}_���;<{Ek��{V�ON��O����$�Y��Fg7���s����8�z���_�/����s���X�c�5��uiP�x4��}�m�����&OJ<9��]���sB�H����'���	���:��)�������5}��Y����k��{�Z�����~=hK����]g#*&��e����K����r>o}���$�F3��gs[�����K�A�V6��~����7�^���o/�)�
����C������yQ�%��'rU���`����W�����e]�����7o�R����_��m���M�1�Q�����L-�hj�"�,�h��W�S���f�J�I��J�{;�����*��'�RV0��/����n�u��"+�:)�_Y��l�Az�8�����MEz��_[�&K�~�
So�����q�^5�������5l��z���������������^���[��n���,'���j=��
������Wk�����U����Y>�������k�y��"��+G=Zz�7�=�����*�B��.�������F^i���T����r����Lj�25�����?W�������Ps�M=��Y*�*���oO%����[���g�>������o|�pUH�n�����^���JCFk�Kr��zY.�Y������J�� �},���r	���P�����P��[����zyZ�^G�����d>��Wc���R���,]n������xzK-����M��z<�2]>���w���j���R2�p����-^�jk�� �}��H���?�����G%b��=,��'o���(B��*E�wt3gK.N��u�J6�2�v�?�;�sU�0��PEz����:�j>_���_��1�
`�.d�<��bK�GB�|Q�sa]�A�
C�K4)�\:^�p��mK�J3��o��*���A��F�P�z�������Q)�g�&�te?���&>���F�r�h:���A�z������B3���:�d��@���\0��@��P��7����*�$"��H!���E����Q%y�����H���h%O�3�t#��
���i��J<�����=��'\�ne�A]�����n�����%���~��E���U�{��8�?
*]��������T����
������<
���+{c���U]��6
c�b�
����v����t�V��.�h�iw=Q#����'��}{�5��NN^���Vu�:�Wm�"��N�2z�L�"c~��8�$���v�5&����=L��B�0���L�x�d*�`!�8�s|O��h�!�/.��������z����NN���S4���i/����`����Ek�-�����6��C���5UfL.�Q�>��O���?9��T�,�����^�p��f�i^7��79
�R_~MF��V�����$����h�iw��������x�|�u2�+k�R�� �d_��M�7^a�������#�$~����iF5�[�����vm
�t�t���a���+��l^u���}�.0
�4
�&����M��)N������9R*3~���s�����V������R�����'H��O]r����/N�V�2 ���*�:)
�n4���g��R��� ��?
��`
y��?���4Ma�I<P����h� ���,�a��3�P��17z��#��T+jP��cJ�����gq��~�����&9b��CB�����"F�9�;��8�k8_,
�V�,x��C����H�T9f��;�.���)���\H�r�5��a�p�Ui�E�������k�����;X������[j��O1b����s:�dX�C��lo��K���;��q���N�I��s�e��h������g���i�B��q��"}��;;�/��0�t{�������8��wv��d8���aG�Q�w�%Q�N/j������8Y'8����05�����%����a�X�(ef.jW�U�CG���#�M'5��`��N�fY2�n?j��9����XC�7�����o?1n,������hmz�$,���`
�|B�AQ�!��Z� (��X3�����P���;P����~�|=��ikw��)8�5ew��g]uYDes:\C?T6����V Sl�,Kg��xpyr6���N'��p���������=�NP��G��G�vK
MPJ����kX��l�"v�)�W���?�������Gv+�����GSj�q��UP@G*����~����u��,�~����������x���=@�;G��A���,���1'�@��4�cKLzP<MdY�R�T�]?P���Q�WD���������A�!"4[0�,5#�"4��H��!��4�'��n��~��������x��0>��h��}�_�����0���:�����x�O�N�j��%�lj�����������_?��'������J�J�W�7�v��Tz�&�5��9�
4mXI��e�

82o[�ip��85�W�����,�k;�!�R�Cm�C��#�����}e���\
����8������w�^C��0R���������oA1��_^���;9�<�����Pjl��Lir@C��~���s�y���OJ�� �hL�
���p���w��{���8���������~P8���)���1����d�*���>�i�Ya�
�c�t�1��C�
1F&_���M�����6�����c
a��;kb;H�I~����g*����*|5������pG�>�l��3���3-�
�.gkGs(5��>��������j|C��`�hk����=�t4��2�t�|ZN�;N���� ����'z���8�|Z0���l�����>g��
�3�Y������s/x.�-�P~o{�����?:�OQ<������'������]�o����NPZ'�d��"u��&	�8x�r���u�1�����p�
��\��e��U�|_!��l6a��m���a�/Qz��I�u��H�������d�j�sm|��|�n�������n�����
v14-0005-squash-Add-pytest-suite-for-OAuth.patch.gz (application/x-gzip)
v14-0006-XXX-temporary-patches-to-build-and-test.patch.gz (application/x-gzip)
#86Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#85)
8 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Feb 20, 2024 at 5:00 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

> v14 rebases over latest and fixes a warning when assertions are
> disabled.

v15 is a housekeeping update that adds typedefs.list entries and runs
pgindent. It also includes a temporary patch from Daniel to get the
cfbot a bit farther (see above discussion on libcurl/exit).
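For anyone reproducing the housekeeping step, the sequence looks roughly like the following. This is only a sketch against a Postgres source tree with the patch set applied; the exact pgindent invocation and whether typedefs.list is picked up automatically depend on the branch (see src/tools/pgindent/README):

```shell
# Sketch only: assumes a Postgres checkout with the patches applied and
# pg_bsd_indent available, per src/tools/pgindent/README.

# 1. Add any new typedef names introduced by the patch set (e.g.
#    async_ctx, oauth_parse) to the list pgindent consults:
$EDITOR src/tools/pgindent/typedefs.list

# 2. Re-indent the touched files; on recent branches pgindent reads
#    src/tools/pgindent/typedefs.list by default:
src/tools/pgindent/pgindent \
    src/common/jsonapi.c \
    src/interfaces/libpq/fe-auth-oauth-curl.c

# 3. Capture the formatting-only delta for review:
git diff > since-v14.diff.txt
```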

--Jacob

Attachments:

since-v14.diff.txt (text/plain; charset=US-ASCII)
1:  b6e8358f44 ! 1:  92cf9bdcb3 common/jsonapi: support FRONTEND clients
    @@ src/common/jsonapi.c
     +#define createStrVal		createPQExpBuffer
     +#define resetStrVal			resetPQExpBuffer
     +
    -+#else /* !FRONTEND */
    ++#else							/* !FRONTEND */
     +
     +#define STRDUP(s) pstrdup(s)
     +#define ALLOC(size) palloc(size)
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *js
      	{
     -		lex->strval = makeStringInfo();
     +		/*
    -+		 * This call can fail in FRONTEND code. We defer error handling to time
    -+		 * of use (json_lex_string()) since we might not need to parse any
    -+		 * strings anyway.
    ++		 * This call can fail in FRONTEND code. We defer error handling to
    ++		 * time of use (json_lex_string()) since we might not need to parse
    ++		 * any strings anyway.
     +		 */
     +		lex->strval = createStrVal();
      		lex->flags |= JSONLEX_FREE_STRVAL;
    @@ src/common/jsonapi.c: json_count_array_elements(JsonLexContext *lex, int *elemen
      	 */
      	memcpy(&copylex, lex, sizeof(JsonLexContext));
     -	copylex.strval = NULL;		/* not interested in values here */
    -+	copylex.parse_strval = false;		/* not interested in values here */
    ++	copylex.parse_strval = false;	/* not interested in values here */
      	copylex.lex_level++;
      
      	count = 0;
    @@ src/common/jsonapi.c: parse_object(JsonLexContext *lex, JsonSemAction *sem)
      	JsonParseErrorType result;
      
      #ifndef FRONTEND
    ++
     +	/*
     +	 * TODO: clients need some way to put a bound on stack growth. Parse level
     +	 * limits maybe?
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
      char *
      json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
      {
    -+	int		toklen = lex->token_terminator - lex->token_start;
    ++	int			toklen = lex->token_terminator - lex->token_start;
     +
     +	if (error == JSON_OUT_OF_MEMORY)
     +	{
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
      			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
      		case JSON_UNICODE_UNTRANSLATABLE:
     -			/* note: this case is only reachable in backend not frontend */
    ++
     +			/*
     +			 * note: this case is only reachable in backend not frontend.
     +			 * #ifdef it away so the frontend doesn't try to link against
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     +	{
     +		/*
     +		 * We don't use a default: case, so that the compiler will warn about
    -+		 * unhandled enum values.  But this needs to be here anyway to cover the
    -+		 * possibility of an incorrect input.
    ++		 * unhandled enum values.  But this needs to be here anyway to cover
    ++		 * the possibility of an incorrect input.
     +		 */
     +		appendStrVal(lex->errormsg,
     +					 "unexpected json parse error type: %d", (int) error);
2:  5fa08a8033 ! 2:  0a58e64ade libpq: add OAUTHBEARER SASL mechanism
    @@ src/include/common/oauth-common.h (new)
     +/* Name of SASL mechanism per IANA */
     +#define OAUTHBEARER_NAME "OAUTHBEARER"
     +
    -+#endif /* OAUTH_COMMON_H */
    ++#endif							/* OAUTH_COMMON_H */
     
      ## src/include/pg_config.h.in ##
     @@
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + */
     +struct async_ctx
     +{
    -+	OAuthStep	step;		/* where are we in the flow? */
    ++	OAuthStep	step;			/* where are we in the flow? */
     +
     +#ifdef HAVE_SYS_EPOLL_H
    -+	int			timerfd;	/* a timerfd for signaling async timeouts */
    ++	int			timerfd;		/* a timerfd for signaling async timeouts */
     +#endif
    -+	pgsocket	mux;		/* the multiplexer socket containing all descriptors
    -+							   tracked by cURL, plus the timerfd */
    -+	CURLM	   *curlm;		/* top-level multi handle for cURL operations */
    -+	CURL	   *curl;		/* the (single) easy handle for serial requests */
    -+
    -+	struct curl_slist  *headers;	/* common headers for all requests */
    -+	PQExpBufferData		work_data;	/* scratch buffer for general use (remember
    -+									   to clear out prior contents first!) */
    -+
    -+	/*
    -+	 * Since a single logical operation may stretch across multiple calls to our
    -+	 * entry point, errors have three parts:
    ++	pgsocket	mux;			/* the multiplexer socket containing all
    ++								 * descriptors tracked by cURL, plus the
    ++								 * timerfd */
    ++	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
    ++	CURL	   *curl;			/* the (single) easy handle for serial
    ++								 * requests */
    ++
    ++	struct curl_slist *headers; /* common headers for all requests */
    ++	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
    ++								 * clear out prior contents first!) */
    ++
    ++	/*------
    ++	 * Since a single logical operation may stretch across multiple calls to
    ++	 * our entry point, errors have three parts:
     +	 *
     +	 * - errctx:	an optional static string, describing the global operation
    -+	 * 				currently in progress. It'll be translated for you.
    ++	 *				currently in progress. It'll be translated for you.
     +	 *
     +	 * - errbuf:	contains the actual error message. Generally speaking, use
    -+	 * 				actx_error[_str] to manipulate this. This must be filled
    -+	 * 				with something useful on an error.
    ++	 *				actx_error[_str] to manipulate this. This must be filled
    ++	 *				with something useful on an error.
     +	 *
    -+	 * - curl_err:	an optional static error buffer used by cURL to put detailed
    -+	 * 				information about failures. Unfortunately untranslatable.
    ++	 * - curl_err:	an optional static error buffer used by cURL to put
    ++	 *				detailed information about failures. Unfortunately
    ++	 *				untranslatable.
     +	 *
     +	 * These pieces will be combined into a single error message looking
     +	 * something like the following, with errctx and/or curl_err omitted when
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 *
     +	 *     connection to server ... failed: errctx: errbuf (curl_err)
     +	 */
    -+	const char	   *errctx;	/* not freed; must point to static allocation */
    -+	PQExpBufferData	errbuf;
    -+	char			curl_err[CURL_ERROR_SIZE];
    ++	const char *errctx;			/* not freed; must point to static allocation */
    ++	PQExpBufferData errbuf;
    ++	char		curl_err[CURL_ERROR_SIZE];
     +
     +	/*
     +	 * These documents need to survive over multiple calls, and are therefore
     +	 * cached directly in the async_ctx.
     +	 */
    -+	struct provider		provider;
    -+	struct device_authz	authz;
    ++	struct provider provider;
    ++	struct device_authz authz;
     +
     +	bool		user_prompted;	/* have we already sent the authz prompt? */
     +};
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	struct async_ctx *actx = ctx;
     +
    -+	Assert(actx); /* oauth_free() shouldn't call us otherwise */
    ++	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
     +
     +	/*
    -+	 * TODO: in general, none of the error cases below should ever happen if we
    -+	 * have no bugs above. But if we do hit them, surfacing those errors somehow
    -+	 * might be the only way to have a chance to debug them. What's the best way
    -+	 * to do that? Assertions? Spraying messages on stderr? Bubbling an error
    -+	 * code to the top? Appending to the connection's error message only helps
    -+	 * if the bug caused a connection failure; otherwise it'll be buried...
    ++	 * TODO: in general, none of the error cases below should ever happen if
    ++	 * we have no bugs above. But if we do hit them, surfacing those errors
    ++	 * somehow might be the only way to have a chance to debug them. What's
    ++	 * the best way to do that? Assertions? Spraying messages on stderr?
    ++	 * Bubbling an error code to the top? Appending to the connection's error
    ++	 * message only helps if the bug caused a connection failure; otherwise
    ++	 * it'll be buried...
     +	 */
     +
     +	if (actx->curlm && actx->curl)
     +	{
     +		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
    ++
     +		if (err)
     +			libpq_append_conn_error(conn,
     +									"cURL easy handle removal failed: %s",
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (actx->curlm)
     +	{
     +		CURLMcode	err = curl_multi_cleanup(actx->curlm);
    ++
     +		if (err)
     +			libpq_append_conn_error(conn,
     +									"cURL multi handle cleanup failed: %s",
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + */
     +struct json_field
     +{
    -+	const char *name;		/* name (key) of the member */
    ++	const char *name;			/* name (key) of the member */
     +
    -+	JsonTokenType type;		/* currently supports JSON_TOKEN_STRING,
    -+							 * JSON_TOKEN_NUMBER, and JSON_TOKEN_ARRAY_START */
    ++	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
    ++								 * JSON_TOKEN_NUMBER, and
    ++								 * JSON_TOKEN_ARRAY_START */
     +	union
     +	{
    -+		char  **scalar;				/* for all scalar types */
    ++		char	  **scalar;		/* for all scalar types */
     +		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
     +	};
     +
    -+	bool		required;	/* REQUIRED field, or just OPTIONAL? */
    ++	bool		required;		/* REQUIRED field, or just OPTIONAL? */
     +};
     +
     +/* Documentation macros for json_field.required. */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +/* Parse state for parse_oauth_json(). */
     +struct oauth_parse
     +{
    -+	PQExpBuffer errbuf; /* detail message for JSON_SEM_ACTION_FAILED */
    -+	int			nested; /* nesting level (zero is the top) */
    ++	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
    ++	int			nested;			/* nesting level (zero is the top) */
     +
     +	const struct json_field *fields;	/* field definition array */
     +	const struct json_field *active;	/* points inside the fields array */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	Assert(ctx->active);
     +
     +	/*
    -+	 * At the moment, the only fields we're interested in are strings, numbers,
    -+	 * and arrays of strings.
    ++	 * At the moment, the only fields we're interested in are strings,
    ++	 * numbers, and arrays of strings.
     +	 */
     +	switch (ctx->active->type)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 * We should never start parsing a new field while a previous one is
     +		 * still active.
     +		 *
    -+		 * TODO: this code relies on assertions too much. We need to exit sanely
    -+		 * on internal logic errors, to avoid turning bugs into vulnerabilities.
    ++		 * TODO: this code relies on assertions too much. We need to exit
    ++		 * sanely on internal logic errors, to avoid turning bugs into
    ++		 * vulnerabilities.
     +		 */
     +		Assert(!ctx->active);
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (ctx->active)
     +	{
     +		if (ctx->active->type != JSON_TOKEN_ARRAY_START
    -+			/* The arrays we care about must not have arrays as values. */
    ++		/* The arrays we care about must not have arrays as values. */
     +			|| ctx->nested > 1)
     +		{
     +			report_type_mismatch(ctx);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (ctx->active)
     +	{
     +		/*
    -+		 * This assumes that no target arrays can contain other arrays, which we
    -+		 * check in the array_start callback.
    ++		 * This assumes that no target arrays can contain other arrays, which
    ++		 * we check in the array_start callback.
     +		 */
     +		Assert(ctx->nested == 2);
     +		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (ctx->active)
     +	{
    -+		JsonTokenType	expected;
    ++		JsonTokenType expected;
     +
     +		/*
     +		 * Make sure this matches what the active field expects. Arrays must
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			*ctx->active->scalar = token;
     +			ctx->active = NULL;
     +
    -+			return JSON_SUCCESS; /* don't free the token */
    ++			return JSON_SUCCESS;	/* don't free the token */
     +		}
    -+		else /* ctx->target_array */
    ++		else					/* ctx->target_array */
     +		{
     +			struct curl_slist *temp;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +static bool
     +parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
     +{
    -+	PQExpBuffer			resp = &actx->work_data;
    -+	char			   *content_type;
    -+	JsonLexContext		lex = {0};
    -+	JsonSemAction		sem = {0};
    -+	JsonParseErrorType	err;
    -+	struct oauth_parse	ctx = {0};
    -+	bool				success = false;
    ++	PQExpBuffer resp = &actx->work_data;
    ++	char	   *content_type;
    ++	JsonLexContext lex = {0};
    ++	JsonSemAction sem = {0};
    ++	JsonParseErrorType err;
    ++	struct oauth_parse ctx = {0};
    ++	bool		success = false;
     +
     +	/* Make sure the server thinks it's given us JSON. */
     +	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (err != JSON_SUCCESS)
     +	{
     +		/*
    -+		 * For JSON_SEM_ACTION_FAILED, we've already written the error message.
    -+		 * Other errors come directly from pg_parse_json(), already translated.
    ++		 * For JSON_SEM_ACTION_FAILED, we've already written the error
    ++		 * message. Other errors come directly from pg_parse_json(), already
    ++		 * translated.
     +		 */
     +		if (err != JSON_SEM_ACTION_FAILED)
     +			actx_error_str(actx, json_errdetail(err, &lex));
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +parse_provider(struct async_ctx *actx, struct provider *provider)
     +{
     +	struct json_field fields[] = {
    -+		{ "issuer",         JSON_TOKEN_STRING, { &provider->issuer },         REQUIRED },
    -+		{ "token_endpoint", JSON_TOKEN_STRING, { &provider->token_endpoint }, REQUIRED },
    ++		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
    ++		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
     +
    -+		/*
    -+		 * The following fields are technically REQUIRED, but we don't use them
    -+		 * anywhere yet:
    ++		/*----
    ++		 * The following fields are technically REQUIRED, but we don't use
    ++		 * them anywhere yet:
     +		 *
     +		 * - jwks_uri
     +		 * - response_types_supported
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 * - id_token_signing_alg_values_supported
     +		 */
     +
    -+		{ "device_authorization_endpoint", JSON_TOKEN_STRING,      { &provider->device_authorization_endpoint },  OPTIONAL },
    -+		{ "grant_types_supported",         JSON_TOKEN_ARRAY_START, { .array = &provider->grant_types_supported }, OPTIONAL },
    ++		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
    ++		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
     +
    -+		{ 0 },
    ++		{0},
     +	};
     +
     +	return parse_oauth_json(actx, fields);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 * either way a developer needs to take a look.
     +		 */
     +		Assert(cnt == 1);
    -+		return 1; /* don't fall through in release builds */
    ++		return 1;				/* don't fall through in release builds */
     +	}
     +
     +	parsed = ceilf(parsed);
     +
     +	if (parsed < 1)
    -+		return 1; /* TODO this slows down the tests considerably... */
    ++		return 1;				/* TODO this slows down the tests
    ++								 * considerably... */
     +	else if (INT_MAX <= parsed)
     +		return INT_MAX;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
     +{
     +	struct json_field fields[] = {
    -+		{ "device_code",      JSON_TOKEN_STRING, { &authz->device_code },      REQUIRED },
    -+		{ "user_code",        JSON_TOKEN_STRING, { &authz->user_code },        REQUIRED },
    -+		{ "verification_uri", JSON_TOKEN_STRING, { &authz->verification_uri }, REQUIRED },
    ++		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
    ++		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
    ++		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
     +
     +		/*
    -+		 * The following fields are technically REQUIRED, but we don't use them
    -+		 * anywhere yet:
    ++		 * The following fields are technically REQUIRED, but we don't use
    ++		 * them anywhere yet:
     +		 *
     +		 * - expires_in
     +		 */
     +
    -+		{ "interval", JSON_TOKEN_NUMBER, { &authz->interval_str }, OPTIONAL },
    ++		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
     +
    -+		{ 0 },
    ++		{0},
     +	};
     +
     +	if (!parse_oauth_json(actx, fields))
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	bool		result;
     +	struct json_field fields[] = {
    -+		{ "error", JSON_TOKEN_STRING, { &err->error }, REQUIRED },
    ++		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
     +
    -+		{ "error_description", JSON_TOKEN_STRING, { &err->error_description }, OPTIONAL },
    ++		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
     +
    -+		{ 0 },
    ++		{0},
     +	};
     +
     +	result = parse_oauth_json(actx, fields);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +parse_access_token(struct async_ctx *actx, struct token *tok)
     +{
     +	struct json_field fields[] = {
    -+		{ "access_token", JSON_TOKEN_STRING, { &tok->access_token }, REQUIRED },
    -+		{ "token_type",   JSON_TOKEN_STRING, { &tok->token_type },   REQUIRED },
    ++		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
    ++		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
     +
     +		/*
    -+		 * The following fields are technically REQUIRED, but we don't use them
    -+		 * anywhere yet:
    ++		 * The following fields are technically REQUIRED, but we don't use
    ++		 * them anywhere yet:
     +		 *
    -+		 * - scope (only required if different than requested -- TODO check it)
    ++		 * - scope (only required if different than requested -- TODO check)
     +		 */
     +
    -+		{ 0 },
    ++		{0},
     +	};
     +
     +	return parse_oauth_json(actx, fields);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	{
     +		switch (op)
     +		{
    -+		case EPOLL_CTL_ADD:
    -+			actx_error(actx, "could not add to epoll set: %m");
    -+			break;
    ++			case EPOLL_CTL_ADD:
    ++				actx_error(actx, "could not add to epoll set: %m");
    ++				break;
     +
    -+		case EPOLL_CTL_DEL:
    -+			actx_error(actx, "could not delete from epoll set: %m");
    -+			break;
    ++			case EPOLL_CTL_DEL:
    ++				actx_error(actx, "could not delete from epoll set: %m");
    ++				break;
     +
    -+		default:
    -+			actx_error(actx, "could not update epoll set: %m");
    ++			default:
    ++				actx_error(actx, "could not update epoll set: %m");
     +		}
     +
     +		return -1;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			break;
     +
     +		case CURL_POLL_REMOVE:
    ++
     +			/*
     +			 * We don't know which of these is currently registered, perhaps
     +			 * both, so we try to remove both.  This means we need to tolerate
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	}
     +
     +	/*
    -+	 * We can't use the simple errno version of kevent, because we need to skip
    -+	 * over ENOENT while still allowing a second change to be processed.  So we
    -+	 * need a longer-form error checking loop.
    ++	 * We can't use the simple errno version of kevent, because we need to
    ++	 * skip over ENOENT while still allowing a second change to be processed.
    ++	 * So we need a longer-form error checking loop.
     +	 */
     +	for (int i = 0; i < res; ++i)
     +	{
     +		/*
     +		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
    -+		 * whether successful or not. Failed entries contain a non-zero errno in
    -+		 * the `data` field.
    ++		 * whether successful or not. Failed entries contain a non-zero errno
    ++		 * in the `data` field.
     +		 */
     +		Assert(ev_out[i].flags & EV_ERROR);
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		{
     +			switch (what)
     +			{
    -+			case CURL_POLL_REMOVE:
    -+				actx_error(actx, "could not delete from kqueue: %m");
    -+				break;
    -+			default:
    -+				actx_error(actx, "could not add to kqueue: %m");
    ++				case CURL_POLL_REMOVE:
    ++					actx_error(actx, "could not delete from kqueue: %m");
    ++					break;
    ++				default:
    ++					actx_error(actx, "could not add to kqueue: %m");
     +			}
     +			return -1;
     +		}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	else if (timeout == 0)
     +	{
     +		/*
    -+		 * A zero timeout means cURL wants us to call back immediately.  That's
    ++		 * A zero timeout means cURL wants us to call back immediately. That's
     +		 * not technically an option for timerfd, but we can make the timeout
     +		 * ridiculously short.
     +		 *
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
     +	}
     +
    -+	if (timerfd_settime(actx->timerfd, 0 /* no flags */, &spec, NULL) < 0)
    ++	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
     +	{
     +		actx_error(actx, "setting timerfd to %ld: %m", timeout);
     +		return -1;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	}
     +
     +	/*
    -+	 * The multi handle tells us what to wait on using two callbacks. These will
    -+	 * manipulate actx->mux as needed.
    ++	 * The multi handle tells us what to wait on using two callbacks. These
    ++	 * will manipulate actx->mux as needed.
     +	 */
     +	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket);
     +	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx);
     +
     +	/*
    -+	 * Set up an easy handle. All of our requests are made serially, so we only
    -+	 * ever need to keep track of one.
    ++	 * Set up an easy handle. All of our requests are made serially, so we
    ++	 * only ever need to keep track of one.
     +	 */
     +	actx->curl = curl_easy_init();
     +	if (!actx->curl)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);
     +
     +	/*
    -+	 * Only HTTP[S] is allowed.
    -+	 * TODO: disallow HTTP without user opt-in
    ++	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
     +	 */
     +	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * pretty strict when it comes to provider behavior, so we have to check
     +	 * what comes back anyway.)
     +	 */
    -+	actx->headers = curl_slist_append(actx->headers, "Accept:"); /* TODO: check result */
    ++	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
     +	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers);
     +
     +	return true;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * Sanity check.
     +	 *
     +	 * TODO: even though this is nominally an asynchronous process, there are
    -+	 * apparently operations that can synchronously fail by this point, such as
    -+	 * connections to closed local ports. Maybe we need to let this case fall
    -+	 * through to drive_request instead, or else perform a curl_multi_info_read
    -+	 * immediately.
    ++	 * apparently operations that can synchronously fail by this point, such
    ++	 * as connections to closed local ports. Maybe we need to let this case
    ++	 * fall through to drive_request instead, or else perform a
    ++	 * curl_multi_info_read immediately.
     +	 */
     +	if (running != 1)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	CURLMcode	err;
     +	int			running;
    -+	CURLMsg	   *msg;
    ++	CURLMsg    *msg;
     +	int			msgs_left;
     +	bool		done;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		if (msg->msg != CURLMSG_DONE)
     +		{
     +			/*
    -+			 * Future cURL versions may define new message types; we don't know
    -+			 * how to handle them, so we'll ignore them.
    ++			 * Future cURL versions may define new message types; we don't
    ++			 * know how to handle them, so we'll ignore them.
     +			 */
     +			continue;
     +		}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	long		response_code;
     +
    -+	/*
    ++	/*----
     +	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
     +	 *
    -+	 *     A successful response MUST use the 200 OK HTTP status code and return
    -+	 *     a JSON object using the application/json content type that contains a
    -+	 *     set of Claims as its members that are a subset of the Metadata values
    -+	 *     defined in Section 3.
    ++	 *     A successful response MUST use the 200 OK HTTP status code and
    ++	 *     return a JSON object using the application/json content type that
    ++	 *     contains a set of Claims as its members that are a subset of the
    ++	 *     Metadata values defined in Section 3.
     +	 *
     +	 * Compared to standard HTTP semantics, this makes life easy -- we don't
     +	 * need to worry about redirections (which would call the Issuer host
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	actx->errctx = "failed to parse OpenID discovery document";
     +	if (!parse_provider(actx, &actx->provider))
    -+		return false; /* error message already set */
    ++		return false;			/* error message already set */
     +
     +	/*
     +	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	const struct curl_slist *grant;
     +	bool		device_grant_found = false;
     +
    -+	Assert(provider->issuer); /* ensured by get_discovery_document() */
    ++	Assert(provider->issuer);	/* ensured by get_discovery_document() */
     +
    -+	/*
    -+	 * First, sanity checks for discovery contents that are OPTIONAL in the spec
    -+	 * but required for our flow:
    ++	/*------
    ++	 * First, sanity checks for discovery contents that are OPTIONAL in the
    ++	 * spec but required for our flow:
     +	 * - the issuer must support the device_code grant
    -+	 * - the issuer must have actually given us a device_authorization_endpoint
    ++	 * - the issuer must have actually given us a
    ++	 *   device_authorization_endpoint
     +	 */
     +
     +	grant = provider->grant_types_supported;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
     +	PQExpBuffer work_buffer = &actx->work_data;
     +
    -+	Assert(conn->oauth_client_id); /* ensured by get_auth_token() */
    -+	Assert(device_authz_uri); /* ensured by check_for_device_flow() */
    ++	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
    ++	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
     +
     +	/* Construct our request body. TODO: url-encode */
     +	resetPQExpBuffer(work_buffer);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (conn->oauth_client_secret)
     +	{
    -+		/*
    ++		/*----
     +		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
     +		 *
    -+		 *   Including the client credentials in the request-body using the two
    -+		 *   parameters is NOT RECOMMENDED and SHOULD be limited to clients
    -+		 *   unable to directly utilize the HTTP Basic authentication scheme (or
    -+		 *   other password-based HTTP authentication schemes).
    ++		 *   Including the client credentials in the request-body using the
    ++		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
    ++		 *   clients unable to directly utilize the HTTP Basic authentication
    ++		 *   scheme (or other password-based HTTP authentication schemes).
     +		 *
     +		 * TODO: should we omit client_id from the body in this case?
     +		 */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	if (response_code != 200
     +		&& response_code != 400
    -+		/*&& response_code != 401 TODO */)
    ++		 /* && response_code != 401 TODO */ )
     +	{
     +		actx_error(actx, "unexpected response code %ld", response_code);
     +		return false;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	actx->errctx = "failed to parse device authorization";
     +	if (!parse_device_authz(actx, &actx->authz))
    -+		return false; /* error message already set */
    ++		return false;			/* error message already set */
     +
     +	return true;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	const char *device_code = actx->authz.device_code;
     +	PQExpBuffer work_buffer = &actx->work_data;
     +
    -+	Assert(conn->oauth_client_id); /* ensured by get_auth_token() */
    -+	Assert(token_uri); /* ensured by get_discovery_document() */
    -+	Assert(device_code); /* ensured by run_device_authz() */
    ++	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
    ++	Assert(token_uri);			/* ensured by get_discovery_document() */
    ++	Assert(device_code);		/* ensured by run_device_authz() */
     +
     +	/* Construct our request body. TODO: url-encode */
     +	resetPQExpBuffer(work_buffer);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (conn->oauth_client_secret)
     +	{
    -+		/*
    ++		/*----
     +		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
     +		 *
    -+		 *   Including the client credentials in the request-body using the two
    -+		 *   parameters is NOT RECOMMENDED and SHOULD be limited to clients
    -+		 *   unable to directly utilize the HTTP Basic authentication scheme (or
    -+		 *   other password-based HTTP authentication schemes).
    ++		 *   Including the client credentials in the request-body using the
    ++		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
    ++		 *   clients unable to directly utilize the HTTP Basic authentication
    ++		 *   scheme (or other password-based HTTP authentication schemes).
     +		 *
     +		 * TODO: should we omit client_id from the body in this case?
     +		 */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 */
     +	if (response_code != 200
     +		&& response_code != 400
    -+		/*&& response_code != 401 TODO */)
    ++		 /* && response_code != 401 TODO */ )
     +	{
     +		actx_error(actx, "unexpected response code %ld", response_code);
     +		return false;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	{
     +		actx->errctx = "failed to parse access token response";
     +		if (!parse_access_token(actx, tok))
    -+			return false; /* error message already set */
    ++			return false;		/* error message already set */
     +	}
     +	else if (!parse_token_error(actx, &tok->err))
     +		return false;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * probably need to consider both the TLS backend libcurl is compiled
     +	 * against and what the user has asked us to do via PQinit[Open]SSL.
     +	 *
    -+	 * Recent versions of libcurl have improved the thread-safety situation, but
    -+	 * you apparently can't check at compile time whether the implementation is
    -+	 * thread-safe, and there's a chicken-and-egg problem where you can't check
    -+	 * the thread safety until you've initialized cURL, which you can't do
    -+	 * before you've made sure it's thread-safe...
    ++	 * Recent versions of libcurl have improved the thread-safety situation,
    ++	 * but you apparently can't check at compile time whether the
    ++	 * implementation is thread-safe, and there's a chicken-and-egg problem
    ++	 * where you can't check the thread safety until you've initialized cURL,
    ++	 * which you can't do before you've made sure it's thread-safe...
     +	 *
    -+	 * We know we've already initialized Winsock by this point, so we should be
    -+	 * able to safely skip that bit. But we have to tell cURL to initialize
    ++	 * We know we've already initialized Winsock by this point, so we should
    ++	 * be able to safely skip that bit. But we have to tell cURL to initialize
     +	 * everything else, because other pieces of our client executable may
     +	 * already be using cURL for their own purposes. If we initialize libcurl
     +	 * first, with only a subset of its features, we could break those other
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (!state->async_ctx)
     +	{
     +		/*
    -+		 * Create our asynchronous state, and hook it into the upper-level OAuth
    -+		 * state immediately, so any failures below won't leak the context
    -+		 * allocation.
    ++		 * Create our asynchronous state, and hook it into the upper-level
    ++		 * OAuth state immediately, so any failures below won't leak the
    ++		 * context allocation.
     +		 */
     +		actx = calloc(1, sizeof(*actx));
     +		if (!actx)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		case OAUTH_STEP_DISCOVERY:
     +		case OAUTH_STEP_DEVICE_AUTHORIZATION:
     +		case OAUTH_STEP_TOKEN_REQUEST:
    -+		{
    -+			PostgresPollingStatusType status;
    ++			{
    ++				PostgresPollingStatusType status;
     +
    -+			status = drive_request(actx);
    ++				status = drive_request(actx);
     +
    -+			if (status == PGRES_POLLING_FAILED)
    -+				goto error_return;
    -+			else if (status != PGRES_POLLING_OK)
    -+			{
    -+				/* not done yet */
    -+				free_token(&tok);
    -+				return status;
    ++				if (status == PGRES_POLLING_FAILED)
    ++					goto error_return;
    ++				else if (status != PGRES_POLLING_OK)
    ++				{
    ++					/* not done yet */
    ++					free_token(&tok);
    ++					return status;
    ++				}
     +			}
    -+		}
     +
     +		case OAUTH_STEP_WAIT_INTERVAL:
     +			/* TODO check that the timer has expired */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			break;
     +
     +		case OAUTH_STEP_TOKEN_REQUEST:
    -+		{
    -+			const struct token_error *err;
    ++			{
    ++				const struct token_error *err;
     +#ifdef HAVE_SYS_EPOLL_H
    -+			struct itimerspec spec = {0};
    ++				struct itimerspec spec = {0};
     +#endif
     +#ifdef HAVE_SYS_EVENT_H
    -+			struct kevent ev = {0};
    ++				struct kevent ev = {0};
     +#endif
     +
    -+			if (!finish_token_request(actx, &tok))
    -+				goto error_return;
    ++				if (!finish_token_request(actx, &tok))
    ++					goto error_return;
     +
    -+			if (!actx->user_prompted)
    -+			{
    -+				int			res;
    -+				PQpromptOAuthDevice prompt = {
    -+					.verification_uri = actx->authz.verification_uri,
    -+					.user_code = actx->authz.user_code,
    -+					/* TODO: optional fields */
    -+				};
    ++				if (!actx->user_prompted)
    ++				{
    ++					int			res;
    ++					PQpromptOAuthDevice prompt = {
    ++						.verification_uri = actx->authz.verification_uri,
    ++						.user_code = actx->authz.user_code,
    ++						/* TODO: optional fields */
    ++					};
     +
    -+				/*
    -+				 * Now that we know the token endpoint isn't broken, give the
    -+				 * user the login instructions.
    -+				 */
    -+				res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
    -+									 &prompt);
    ++					/*
    ++					 * Now that we know the token endpoint isn't broken, give
    ++					 * the user the login instructions.
    ++					 */
    ++					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
    ++										 &prompt);
     +
    -+				if (!res)
    -+				{
    -+					fprintf(stderr, "Visit %s and enter the code: %s",
    -+							prompt.verification_uri, prompt.user_code);
    -+				}
    -+				else if (res < 0)
    -+				{
    -+					actx_error(actx, "device prompt failed");
    -+					goto error_return;
    ++					if (!res)
    ++					{
    ++						fprintf(stderr, "Visit %s and enter the code: %s",
    ++								prompt.verification_uri, prompt.user_code);
    ++					}
    ++					else if (res < 0)
    ++					{
    ++						actx_error(actx, "device prompt failed");
    ++						goto error_return;
    ++					}
    ++
    ++					actx->user_prompted = true;
     +				}
     +
    -+				actx->user_prompted = true;
    -+			}
    ++				if (tok.access_token)
    ++				{
    ++					/* Construct our Bearer token. */
    ++					resetPQExpBuffer(&actx->work_data);
    ++					appendPQExpBuffer(&actx->work_data, "Bearer %s",
    ++									  tok.access_token);
     +
    -+			if (tok.access_token)
    -+			{
    -+				/* Construct our Bearer token. */
    -+				resetPQExpBuffer(&actx->work_data);
    -+				appendPQExpBuffer(&actx->work_data, "Bearer %s",
    -+								  tok.access_token);
    ++					if (PQExpBufferDataBroken(actx->work_data))
    ++					{
    ++						actx_error(actx, "out of memory");
    ++						goto error_return;
    ++					}
     +
    -+				if (PQExpBufferDataBroken(actx->work_data))
    -+				{
    -+					actx_error(actx, "out of memory");
    -+					goto error_return;
    ++					state->token = strdup(actx->work_data.data);
    ++					break;
     +				}
     +
    -+				state->token = strdup(actx->work_data.data);
    -+				break;
    -+			}
    -+
    -+			/*
    -+			 * authorization_pending and slow_down are the only acceptable
    -+			 * errors; anything else and we bail.
    -+			 */
    -+			err = &tok.err;
    -+			if (!err->error || (strcmp(err->error, "authorization_pending")
    -+								&& strcmp(err->error, "slow_down")))
    -+			{
    -+				/* TODO handle !err->error */
    -+				if (err->error_description)
    -+					appendPQExpBuffer(&actx->errbuf, "%s ",
    -+									  err->error_description);
    ++				/*
    ++				 * authorization_pending and slow_down are the only acceptable
    ++				 * errors; anything else and we bail.
    ++				 */
    ++				err = &tok.err;
    ++				if (!err->error || (strcmp(err->error, "authorization_pending")
    ++									&& strcmp(err->error, "slow_down")))
    ++				{
    ++					/* TODO handle !err->error */
    ++					if (err->error_description)
    ++						appendPQExpBuffer(&actx->errbuf, "%s ",
    ++										  err->error_description);
     +
    -+				appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
    ++					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
     +
    -+				goto error_return;
    -+			}
    ++					goto error_return;
    ++				}
     +
    -+			/*
    -+			 * A slow_down error requires us to permanently increase our retry
    -+			 * interval by five seconds. RFC 8628, Sec. 3.5.
    -+			 */
    -+			if (!strcmp(err->error, "slow_down"))
    -+			{
    -+				actx->authz.interval += 5; /* TODO check for overflow? */
    -+			}
    ++				/*
    ++				 * A slow_down error requires us to permanently increase our
    ++				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
    ++				 */
    ++				if (!strcmp(err->error, "slow_down"))
    ++				{
    ++					actx->authz.interval += 5;	/* TODO check for overflow? */
    ++				}
     +
    -+			/*
    -+			 * Wait for the required interval before issuing the next request.
    -+			 */
    -+			Assert(actx->authz.interval > 0);
    ++				/*
    ++				 * Wait for the required interval before issuing the next
    ++				 * request.
    ++				 */
    ++				Assert(actx->authz.interval > 0);
     +#ifdef HAVE_SYS_EPOLL_H
    -+			spec.it_value.tv_sec = actx->authz.interval;
    ++				spec.it_value.tv_sec = actx->authz.interval;
     +
    -+			if (timerfd_settime(actx->timerfd, 0 /* no flags */, &spec, NULL) < 0)
    -+			{
    -+				actx_error(actx, "failed to set timerfd: %m");
    -+				goto error_return;
    -+			}
    ++				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
    ++				{
    ++					actx_error(actx, "failed to set timerfd: %m");
    ++					goto error_return;
    ++				}
     +
    -+			*altsock = actx->timerfd;
    ++				*altsock = actx->timerfd;
     +#endif
     +#ifdef HAVE_SYS_EVENT_H
    -+			// XXX: I guess this wants to be hidden in a routine
    -+			EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
    -+				   actx->authz.interval * 1000, 0);
    -+			if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
    -+			{
    -+				actx_error(actx, "failed to set kqueue timer: %m");
    -+				goto error_return;
    -+			}
    -+			// XXX: why did we change the altsock in the epoll version?
    ++				/* XXX: I guess this wants to be hidden in a routine */
    ++				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
    ++					   actx->authz.interval * 1000, 0);
    ++				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
    ++				{
    ++					actx_error(actx, "failed to set kqueue timer: %m");
    ++					goto error_return;
    ++				}
    ++				/* XXX: why did we change the altsock in the epoll version? */
     +#endif
    -+			actx->step = OAUTH_STEP_WAIT_INTERVAL;
    -+			break;
    -+		}
    ++				actx->step = OAUTH_STEP_WAIT_INTERVAL;
    ++				break;
    ++			}
     +
     +		case OAUTH_STEP_WAIT_INTERVAL:
     +			actx->errctx = "failed to obtain access token";
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
     +
     +error_return:
    ++
     +	/*
     +	 * Assemble the three parts of our error: context, body, and detail. See
     +	 * also the documentation for struct async_ctx.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (actx->curl_err[0])
     +	{
    -+		size_t len;
    ++		size_t		len;
     +
     +		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
     +
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +run_iddawc_auth_flow(PGconn *conn, const char *discovery_uri)
     +{
     +	struct _i_session session;
    -+	PQExpBuffer	token_buf = NULL;
    ++	PQExpBuffer token_buf = NULL;
     +	int			err;
     +	int			auth_method;
     +	bool		user_prompted = false;
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
     +
     +	err = i_set_parameter_list(&session,
    -+		I_OPT_CLIENT_ID, conn->oauth_client_id,
    -+		I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
    -+		I_OPT_TOKEN_METHOD, auth_method,
    -+		I_OPT_SCOPE, conn->oauth_scope,
    -+		I_OPT_NONE
    -+	);
    ++							   I_OPT_CLIENT_ID, conn->oauth_client_id,
    ++							   I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
    ++							   I_OPT_TOKEN_METHOD, auth_method,
    ++							   I_OPT_SCOPE, conn->oauth_scope,
    ++							   I_OPT_NONE
    ++		);
     +	if (err)
     +	{
     +		iddawc_error(conn, err, "failed to set client identifier");
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +	if (err)
     +	{
     +		iddawc_request_error(conn, &session, err,
    -+							"failed to obtain device authorization");
    ++							 "failed to obtain device authorization");
     +		goto cleanup;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +	}
     +
     +	/*
    -+	 * Poll the token endpoint until either the user logs in and authorizes the
    -+	 * use of a token, or a hard failure occurs. We perform one ping _before_
    -+	 * prompting the user, so that we don't make them do the work of logging in
    -+	 * only to find that the token endpoint is completely unreachable.
    ++	 * Poll the token endpoint until either the user logs in and authorizes
    ++	 * the use of a token, or a hard failure occurs. We perform one ping
    ++	 * _before_ prompting the user, so that we don't make them do the work of
    ++	 * logging in only to find that the token endpoint is completely
    ++	 * unreachable.
     +	 */
     +	err = i_run_token_request(&session);
     +	while (err)
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +							&& strcmp(error_code, "slow_down")))
     +		{
     +			iddawc_request_error(conn, &session, err,
    -+								"failed to obtain access token");
    ++								 "failed to obtain access token");
     +			goto cleanup;
     +		}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +			user_prompted = true;
     +		}
     +
    -+		/*
    -+		 * We are required to wait between polls; the server tells us how long.
    ++		/*---
    ++		 * We are required to wait between polls; the server tells us how
    ++		 * long.
     +		 * TODO: if interval's not set, we need to default to five seconds
     +		 * TODO: sanity check the interval
     +		 */
    @@ src/interfaces/libpq/fe-auth-oauth-iddawc.c (new)
     +
     +		/*
     +		 * XXX Reset the error code before every call, because iddawc won't do
    -+		 * that for us. This matters if the server first sends a "pending" error
    -+		 * code, then later hard-fails without sending an error code to
    ++		 * that for us. This matters if the server first sends a "pending"
    ++		 * error code, then later hard-fails without sending an error code to
     +		 * overwrite the first one.
     +		 *
     +		 * That we have to do this at all seems like a bug in iddawc.
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static char *
     +client_initial_response(PGconn *conn, const char *token)
     +{
    -+	static const char * const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
    ++	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
     +
     +	PQExpBufferData buf;
     +	char	   *response = NULL;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		 * Either programmer error, or something went badly wrong during the
     +		 * asynchronous fetch.
     +		 *
    -+		 * TODO: users shouldn't see this; what action should they take if they
    -+		 * do?
    ++		 * TODO: users shouldn't see this; what action should they take if
    ++		 * they do?
     +		 */
     +		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
     +		return NULL;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +struct json_ctx
     +{
    -+	char		   *errmsg; /* any non-NULL value stops all processing */
    -+	PQExpBufferData errbuf; /* backing memory for errmsg */
    -+	int				nested; /* nesting level (zero is the top) */
    ++	char	   *errmsg;			/* any non-NULL value stops all processing */
    ++	PQExpBufferData errbuf;		/* backing memory for errmsg */
    ++	int			nested;			/* nesting level (zero is the top) */
     +
    -+	const char	   *target_field_name; /* points to a static allocation */
    -+	char		  **target_field;      /* see below */
    ++	const char *target_field_name;	/* points to a static allocation */
    ++	char	  **target_field;	/* see below */
     +
     +	/* target_field, if set, points to one of the following: */
    -+	char		   *status;
    -+	char		   *scope;
    -+	char		   *discovery_uri;
    ++	char	   *status;
    ++	char	   *scope;
    ++	char	   *discovery_uri;
     +};
     +
     +#define oauth_json_has_error(ctx) \
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static JsonParseErrorType
     +oauth_json_object_start(void *state)
     +{
    -+	struct json_ctx	   *ctx = state;
    ++	struct json_ctx *ctx = state;
     +
     +	if (ctx->target_field)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static JsonParseErrorType
     +oauth_json_object_end(void *state)
     +{
    -+	struct json_ctx	   *ctx = state;
    ++	struct json_ctx *ctx = state;
     +
     +	--ctx->nested;
     +	return JSON_SUCCESS;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static JsonParseErrorType
     +oauth_json_object_field_start(void *state, char *name, bool isnull)
     +{
    -+	struct json_ctx	   *ctx = state;
    ++	struct json_ctx *ctx = state;
     +
     +	if (ctx->nested == 1)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static JsonParseErrorType
     +oauth_json_array_start(void *state)
     +{
    -+	struct json_ctx	   *ctx = state;
    ++	struct json_ctx *ctx = state;
     +
     +	if (!ctx->nested)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static JsonParseErrorType
     +oauth_json_scalar(void *state, char *token, JsonTokenType type)
     +{
    -+	struct json_ctx	   *ctx = state;
    ++	struct json_ctx *ctx = state;
     +
     +	if (!ctx->nested)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			ctx->target_field = NULL;
     +			ctx->target_field_name = NULL;
     +
    -+			return JSON_SUCCESS; /* don't free the token we're using */
    ++			return JSON_SUCCESS;	/* don't free the token we're using */
     +		}
     +
     +		oauth_json_set_error(ctx,
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +static bool
     +handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
     +{
    -+	JsonLexContext		lex = {0};
    -+	JsonSemAction		sem = {0};
    -+	JsonParseErrorType	err;
    -+	struct json_ctx		ctx = {0};
    -+	char			   *errmsg = NULL;
    ++	JsonLexContext lex = {0};
    ++	JsonSemAction sem = {0};
    ++	JsonParseErrorType err;
    ++	struct json_ctx ctx = {0};
    ++	char	   *errmsg = NULL;
     +
     +	/* Sanity check. */
     +	if (strlen(msg) != msglen)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	else if (status == PGRES_POLLING_OK)
     +	{
     +		/*
    -+		 * We already have a token, so copy it into the state. (We can't
    -+		 * hold onto the original string, since it may not be safe for us to
    -+		 * free() it.)
    ++		 * We already have a token, so copy it into the state. (We can't hold
    ++		 * onto the original string, since it may not be safe for us to free()
    ++		 * it.)
     +		 */
    -+		PQExpBufferData	token;
    ++		PQExpBufferData token;
     +
     +		if (!request->token)
     +		{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		{
     +			/*
     +			 * We already have a token, so copy it into the state. (We can't
    -+			 * hold onto the original string, since it may not be safe for us to
    -+			 * free() it.)
    ++			 * hold onto the original string, since it may not be safe for us
    ++			 * to free() it.)
     +			 */
    -+			PQExpBufferData	token;
    ++			PQExpBufferData token;
     +
     +			initPQExpBuffer(&token);
     +			appendPQExpBuffer(&token, "Bearer %s", request.token);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	{
     +		/*
     +		 * Either we already have one, or we aren't able to derive one
    -+		 * ourselves. The latter case is not an error condition; we'll just ask
    -+		 * the server to provide one for us.
    ++		 * ourselves. The latter case is not an error condition; we'll just
    ++		 * ask the server to provide one for us.
     +		 */
     +		return true;
     +	}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +				}
     +
     +				/*
    -+				 * Decide whether we're using a user-provided OAuth flow, or the
    -+				 * one we have built in.
    ++				 * Decide whether we're using a user-provided OAuth flow, or
    ++				 * the one we have built in.
     +				 */
     +				if (!setup_token_request(conn, state))
     +					return SASL_FAILED;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +				if (state->token)
     +				{
     +					/*
    -+					 * A really smart user implementation may have already given
    -+					 * us the token (e.g. if there was an unexpired copy already
    -+					 * cached). In that case, we can just fall through.
    ++					 * A really smart user implementation may have already
    ++					 * given us the token (e.g. if there was an unexpired copy
    ++					 * already cached). In that case, we can just fall
    ++					 * through.
     +					 */
     +				}
     +				else
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +					/*
     +					 * Otherwise, we have to hand the connection over to our
     +					 * OAuth implementation. This involves a number of HTTP
    -+					 * connections and timed waits, so we escape the synchronous
    -+					 * auth processing and tell PQconnectPoll to transfer
    -+					 * control to our async implementation.
    ++					 * connections and timed waits, so we escape the
    ++					 * synchronous auth processing and tell PQconnectPoll to
    ++					 * transfer control to our async implementation.
     +					 */
    -+					Assert(conn->async_auth); /* should have been set already */
    ++					Assert(conn->async_auth);	/* should have been set
    ++												 * already */
     +					state->state = FE_OAUTH_REQUESTING_TOKEN;
     +					return SASL_ASYNC;
     +				}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
     +			 */
     +			*output = strdup(kvsep);
    -+			*outputlen = strlen(*output); /* == 1 */
    ++			*outputlen = strlen(*output);	/* == 1 */
     +
     +			state->state = FE_OAUTH_SERVER_ERROR;
     +			return SASL_CONTINUE;
     +
     +		case FE_OAUTH_SERVER_ERROR:
    ++
     +			/*
    -+			 * After an error, the server should send an error response to fail
    -+			 * the SASL handshake, which is handled in higher layers.
    ++			 * After an error, the server should send an error response to
    ++			 * fail the SASL handshake, which is handled in higher layers.
     +			 *
    -+			 * If we get here, the server either sent *another* challenge which
    -+			 * isn't defined in the RFC, or completed the handshake successfully
    -+			 * after telling us it was going to fail. Neither is acceptable.
    ++			 * If we get here, the server either sent *another* challenge
    ++			 * which isn't defined in the RFC, or completed the handshake
    ++			 * successfully after telling us it was going to fail. Neither is
    ++			 * acceptable.
     +			 */
     +			appendPQExpBufferStr(&conn->errorMessage,
     +								 libpq_gettext("server sent additional OAuth data after error\n"));
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +	char	   *token;
     +
     +	void	   *async_ctx;
    -+	void	  (*free_async_ctx) (PGconn *conn, void *ctx);
    ++	void		(*free_async_ctx) (PGconn *conn, void *ctx);
     +} fe_oauth_state;
     +
     +extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
    @@ src/interfaces/libpq/fe-auth-scram.c: scram_exchange(void *opaq, char *input, in
     +			return SASL_CONTINUE;
      
      		case FE_SCRAM_PROOF_SENT:
    -+		{
    -+			bool		match;
    -+
    - 			/* Receive server signature */
    - 			if (!read_server_final_message(state, input))
    +-			/* Receive server signature */
    +-			if (!read_server_final_message(state, input))
     -				goto error;
    -+				return SASL_FAILED;
    - 
    - 			/*
    - 			 * Verify server signature, to make sure we're talking to the
    - 			 * genuine server.
    - 			 */
    +-
    +-			/*
    +-			 * Verify server signature, to make sure we're talking to the
    +-			 * genuine server.
    +-			 */
     -			if (!verify_server_signature(state, success, &errstr))
    -+			if (!verify_server_signature(state, &match, &errstr))
    - 			{
    - 				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
    --				goto error;
    -+				return SASL_FAILED;
    - 			}
    - 
    --			if (!*success)
     -			{
    -+			if (!match)
    - 				libpq_append_conn_error(conn, "incorrect server signature");
    +-				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
    +-				goto error;
     -			}
    --			*done = true;
    +-
    +-			if (!*success)
    + 			{
    +-				libpq_append_conn_error(conn, "incorrect server signature");
    ++				bool		match;
    ++
    ++				/* Receive server signature */
    ++				if (!read_server_final_message(state, input))
    ++					return SASL_FAILED;
    ++
    ++				/*
    ++				 * Verify server signature, to make sure we're talking to the
    ++				 * genuine server.
    ++				 */
    ++				if (!verify_server_signature(state, &match, &errstr))
    ++				{
    ++					libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
    ++					return SASL_FAILED;
    ++				}
    ++
    ++				if (!match)
    ++					libpq_append_conn_error(conn, "incorrect server signature");
     +
    - 			state->state = FE_SCRAM_FINISHED;
    - 			state->conn->client_finished_auth = true;
    ++				state->state = FE_SCRAM_FINISHED;
    ++				state->conn->client_finished_auth = true;
    ++				return match ? SASL_COMPLETE : SASL_FAILED;
    + 			}
    +-			*done = true;
    +-			state->state = FE_SCRAM_FINISHED;
    +-			state->conn->client_finished_auth = true;
     -			break;
    -+			return match ? SASL_COMPLETE : SASL_FAILED;
    -+		}
      
      		default:
      			/* shouldn't happen */
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      		}
     +#ifdef USE_OAUTH
     +		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
    -+				!selected_mechanism)
    ++				 !selected_mechanism)
     +		{
     +			selected_mechanism = OAUTHBEARER_NAME;
     +			conn->sasl = &pg_oauth_mech;
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
     +	if (status == SASL_ASYNC)
     +	{
     +		/*
    -+		 * The mechanism should have set up the necessary callbacks; all we need
    -+		 * to do is signal the caller.
    ++		 * The mechanism should have set up the necessary callbacks; all we
    ++		 * need to do is signal the caller.
     +		 */
     +		*async = true;
     +		return STATUS_OK;
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, b
     +	if (status == SASL_ASYNC)
     +	{
     +		/*
    -+		 * The mechanism should have set up the necessary callbacks; all we need
    -+		 * to do is signal the caller.
    ++		 * The mechanism should have set up the necessary callbacks; all we
    ++		 * need to do is signal the caller.
     +		 */
     +		*async = true;
     +		return STATUS_OK;
    @@ src/interfaces/libpq/fe-auth.c: PQchangePassword(PGconn *conn, const char *user,
     +int
     +PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
     +{
    -+	return 0; /* handle nothing */
    ++	return 0;					/* handle nothing */
     +}
     
      ## src/interfaces/libpq/fe-auth.h ##
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +				if (async && (res == STATUS_OK))
     +				{
     +					/*
    -+					 * We'll come back later once we're ready to respond. Don't
    -+					 * consume the request yet.
    ++					 * We'll come back later once we're ready to respond.
    ++					 * Don't consume the request yet.
     +					 */
     +					conn->status = CONNECTION_AUTHENTICATING;
     +					goto keep_going;
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +					 * received it.
     +					 */
     +					conn->status = CONNECTION_AWAITING_RESPONSE;
    -+					conn->altsock = PGINVALID_SOCKET; /* TODO: what frees this? */
    ++					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
    ++														 * this? */
     +					goto keep_going;
     +				}
     +
    @@ src/interfaces/libpq/libpq-fe.h: typedef enum
      
     +typedef enum
     +{
    -+	PQAUTHDATA_PROMPT_OAUTH_DEVICE,	/* user must visit a device-authorization URL */
    ++	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
    ++									 * URL */
     +	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
     +} PGAuthData;
     +
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
      
     +typedef struct _PQpromptOAuthDevice
     +{
    -+	const char *verification_uri;			/* verification URI to visit */
    -+	const char *user_code;					/* user code to enter */
    ++	const char *verification_uri;	/* verification URI to visit */
    ++	const char *user_code;		/* user code to enter */
     +} PQpromptOAuthDevice;
     +
     +typedef struct _PQoauthBearerRequest
     +{
     +	/* Hook inputs (constant across all calls) */
    -+	const char * const openid_configuration;	/* OIDC discovery URI */
    -+	const char * const scope;					/* required scope(s), or NULL */
    ++	const char *const openid_configuration; /* OIDC discovery URI */
    ++	const char *const scope;	/* required scope(s), or NULL */
     +
     +	/* Hook outputs */
     +
    -+	/*
    ++	/*---------
     +	 * Callback implementing a custom asynchronous OAuth flow.
     +	 *
     +	 * The callback may return
    -+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor has
    -+	 *   been stored in *altsock and libpq should wait until it is readable or
    -+	 *   writable before calling back;
    ++	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
    ++	 *   has been stored in *altsock and libpq should wait until it is
    ++	 *   readable or writable before calling back;
     +	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
     +	 *   request->token has been set; or
     +	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
     +	 *
    -+	 * This callback is optional. If the token can be obtained without blocking
    -+	 * during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN hook, it
    -+	 * may be returned directly, but one of request->async or request->token
    -+	 * must be set by the hook.
    ++	 * This callback is optional. If the token can be obtained without
    ++	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
    ++	 * hook, it may be returned directly, but one of request->async or
    ++	 * request->token must be set by the hook.
     +	 */
     +	PostgresPollingStatusType (*async) (PGconn *conn,
     +										struct _PQoauthBearerRequest *request,
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 * Callback to clean up custom allocations. A hook implementation may use
     +	 * this to free request->token and any resources in request->user.
     +	 *
    -+	 * This is technically optional, but highly recommended, because there is no
    -+	 * other indication as to when it is safe to free the token.
    ++	 * This is technically optional, but highly recommended, because there is
    ++	 * no other indication as to when it is safe to free the token.
     +	 */
    -+	void	  (*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
    ++	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
     +
     +	/*
    -+	 * The hook should set this to the Bearer token contents for the connection,
    -+	 * once the flow is completed.  The token contents must remain available to
    -+	 * libpq until the hook's cleanup callback is called.
    ++	 * The hook should set this to the Bearer token contents for the
    ++	 * connection, once the flow is completed.  The token contents must remain
    ++	 * available to libpq until the hook's cleanup callback is called.
     +	 */
     +	char	   *token;
     +
     +	/*
    -+	 * Hook-defined data. libpq will not modify this pointer across calls to the
    -+	 * async callback, so it can be used to keep track of application-specific
    -+	 * state. Resources allocated here should be freed by the cleanup callback.
    ++	 * Hook-defined data. libpq will not modify this pointer across calls to
    ++	 * the async callback, so it can be used to keep track of
    ++	 * application-specific state. Resources allocated here should be freed by
    ++	 * the cleanup callback.
     +	 */
     +	void	   *user;
     +} PQoauthBearerRequest;
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
      extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
      
     +typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
    -+extern void	PQsetAuthDataHook(PQauthDataHook_type hook);
    ++extern void PQsetAuthDataHook(PQauthDataHook_type hook);
     +extern PQauthDataHook_type PQgetAuthDataHook(void);
     +extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
     +
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      	char	   *load_balance_hosts; /* load balance over hosts */
      
     +	/* OAuth v2 */
    -+	char	   *oauth_issuer;			/* token issuer URL */
    -+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery document */
    -+	char	   *oauth_client_id;		/* client identifier */
    ++	char	   *oauth_issuer;	/* token issuer URL */
    ++	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
    ++										 * document */
    ++	char	   *oauth_client_id;	/* client identifier */
     +	char	   *oauth_client_secret;	/* client secret */
    -+	char	   *oauth_scope;			/* access token scope */
    -+	bool		oauth_want_retry;		/* should we retry on failure? */
    ++	char	   *oauth_scope;	/* access token scope */
    ++	bool		oauth_want_retry;	/* should we retry on failure? */
     +
      	/* Optional file to write trace info to */
      	FILE	   *Pfdebug;
    @@ src/makefiles/meson.build: pgxs_deps = {
        'pam': pam,
        'perl': perl_dep,
        'python': python3_dep,
    +
    + ## src/tools/pgindent/typedefs.list ##
    +@@ src/tools/pgindent/typedefs.list: ArrayMetaState
    + ArraySubWorkspace
    + ArrayToken
    + ArrayType
    ++AsyncAuthFunc
    + AsyncQueueControl
    + AsyncQueueEntry
    + AsyncRequest
    +@@ src/tools/pgindent/typedefs.list: CState
    + CTECycleClause
    + CTEMaterialize
    + CTESearchClause
    ++CURL
    ++CURLM
    + CV
    + CachedExpression
    + CachedPlan
    +@@ src/tools/pgindent/typedefs.list: NumericDigit
    + NumericSortSupport
    + NumericSumAccum
    + NumericVar
    ++OAuthStep
    + OM_uint32
    + OP
    + OSAPerGroupState
    +@@ src/tools/pgindent/typedefs.list: PFN
    + PGAlignedBlock
    + PGAlignedXLogBlock
    + PGAsyncStatusType
    ++PGAuthData
    + PGCALL2
    + PGChecksummablePage
    + PGContextVisibility
    +@@ src/tools/pgindent/typedefs.list: PQArgBlock
    + PQEnvironmentOption
    + PQExpBuffer
    + PQExpBufferData
    ++PQauthDataHook_type
    + PQcommMethods
    + PQconninfoOption
    + PQnoticeProcessor
    + PQnoticeReceiver
    ++PQoauthBearerRequest
    + PQprintOpt
    ++PQpromptOAuthDevice
    + PQsslKeyPassHook_OpenSSL_type
    + PREDICATELOCK
    + PREDICATELOCKTAG
    +@@ src/tools/pgindent/typedefs.list: RuleLock
    + RuleStmt
    + RunningTransactions
    + RunningTransactionsData
    ++SASLStatus
    + SC_HANDLE
    + SECURITY_ATTRIBUTES
    + SECURITY_STATUS
    +@@ src/tools/pgindent/typedefs.list: explain_get_index_name_hook_type
    + f_smgr
    + fasthash_state
    + fd_set
    ++fe_oauth_state
    ++fe_oauth_state_enum
    + fe_scram_state
    + fe_scram_state_enum
    + fetch_range_request
3:  13cf3f80b8 ! 3:  c56bc808b6 backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/libpq/auth-oauth.c (new)
     +#include "storage/fd.h"
     +
     +/* GUC */
    -+char *oauth_validator_command;
    ++char	   *oauth_validator_command;
     +
    -+static void  oauth_get_mechanisms(Port *port, StringInfo buf);
    ++static void oauth_get_mechanisms(Port *port, StringInfo buf);
     +static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
    -+static int   oauth_exchange(void *opaq, const char *input, int inputlen,
    -+							char **output, int *outputlen, const char **logdetail);
    ++static int	oauth_exchange(void *opaq, const char *input, int inputlen,
    ++						   char **output, int *outputlen, const char **logdetail);
     +
     +/* Mechanism declaration */
     +const pg_be_sasl_mech pg_be_oauth_mech = {
    @@ src/backend/libpq/auth-oauth.c (new)
     +
     +struct oauth_ctx
     +{
    -+	oauth_state	state;
    ++	oauth_state state;
     +	Port	   *port;
     +	const char *issuer;
     +	const char *scope;
    @@ src/backend/libpq/auth-oauth.c (new)
     +oauth_exchange(void *opaq, const char *input, int inputlen,
     +			   char **output, int *outputlen, const char **logdetail)
     +{
    -+	char   *p;
    -+	char	cbind_flag;
    -+	char   *auth;
    ++	char	   *p;
    ++	char		cbind_flag;
    ++	char	   *auth;
     +
     +	struct oauth_ctx *ctx = opaq;
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +			break;
     +
     +		case OAUTH_STATE_ERROR:
    ++
     +			/*
     +			 * Only one response is valid for the client during authentication
     +			 * failure: a single kvsep.
    @@ src/backend/libpq/auth-oauth.c (new)
     +
     +	/*
     +	 * OAUTHBEARER does not currently define a channel binding (so there is no
    -+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a 'y'
    -+	 * specifier purely for the remote chance that a future specification could
    -+	 * define one; then future clients can still interoperate with this server
    -+	 * implementation. 'n' is the expected case.
    ++	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
    ++	 * 'y' specifier purely for the remote chance that a future specification
    ++	 * could define one; then future clients can still interoperate with this
    ++	 * server implementation. 'n' is the expected case.
     +	 */
     +	cbind_flag = *p;
     +	switch (cbind_flag)
    @@ src/backend/libpq/auth-oauth.c (new)
     +					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
     +			break;
     +
    -+		case 'y': /* fall through */
    ++		case 'y':				/* fall through */
     +		case 'n':
     +			p++;
     +			if (*p != ',')
    @@ src/backend/libpq/auth-oauth.c (new)
     +static char *
     +parse_kvpairs_for_auth(char **input)
     +{
    -+	char   *pos = *input;
    -+	char   *auth = NULL;
    ++	char	   *pos = *input;
    ++	char	   *auth = NULL;
     +
    -+	/*
    ++	/*----
     +	 * The relevant ABNF, from Sec. 3.1:
     +	 *
     +	 *     kvsep          = %x01
    @@ src/backend/libpq/auth-oauth.c (new)
     +
     +	while (*pos)
     +	{
    -+		char   *end;
    -+		char   *sep;
    -+		char   *key;
    -+		char   *value;
    ++		char	   *end;
    ++		char	   *sep;
    ++		char	   *key;
    ++		char	   *value;
     +
     +		/*
     +		 * Find the end of this kvpair. Note that input is null-terminated by
    @@ src/backend/libpq/auth-oauth.c (new)
     +		/*
     +		 * Find the end of the key name.
     +		 *
    -+		 * TODO further validate the key/value grammar? empty keys, bad chars...
    ++		 * TODO further validate the key/value grammar? empty keys, bad
    ++		 * chars...
     +		 */
     +		sep = strchr(pos, '=');
     +		if (!sep)
    @@ src/backend/libpq/auth-oauth.c (new)
     +		{
     +			/*
     +			 * The RFC also defines the host and port keys, but they are not
    -+			 * required for OAUTHBEARER and we do not use them. Also, per
    -+			 * Sec. 3.1, any key/value pairs we don't recognize must be ignored.
    ++			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
    ++			 * 3.1, any key/value pairs we don't recognize must be ignored.
     +			 */
     +		}
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +			 errmsg("malformed OAUTHBEARER message"),
     +			 errdetail("Message did not contain a final terminator.")));
     +
    -+	return NULL; /* unreachable */
    ++	return NULL;				/* unreachable */
     +}
     +
     +static void
     +generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
     +{
    -+	StringInfoData	buf;
    ++	StringInfoData buf;
     +
     +	/*
    -+	 * The admin needs to set an issuer and scope for OAuth to work. There's not
    -+	 * really a way to hide this from the user, either, because we can't choose
    -+	 * a "default" issuer, so be honest in the failure message.
    ++	 * The admin needs to set an issuer and scope for OAuth to work. There's
    ++	 * not really a way to hide this from the user, either, because we can't
    ++	 * choose a "default" issuer, so be honest in the failure message.
     +	 *
     +	 * TODO: see if there's a better place to fail, earlier than this.
     +	 */
    @@ src/backend/libpq/auth-oauth.c (new)
     +	 * TODO: JSON escaping
     +	 */
     +	appendStringInfo(&buf,
    -+		"{ "
    -+			"\"status\": \"invalid_token\", "
    -+			"\"openid-configuration\": \"%s/.well-known/openid-configuration\","
    -+			"\"scope\": \"%s\" "
    -+		"}",
    -+		ctx->issuer, ctx->scope);
    ++					 "{ "
    ++					 "\"status\": \"invalid_token\", "
    ++					 "\"openid-configuration\": \"%s/.well-known/openid-configuration\", "
    ++					 "\"scope\": \"%s\" "
    ++					 "}",
    ++					 ctx->issuer, ctx->scope);
     +
     +	*output = buf.data;
     +	*outputlen = buf.len;
    @@ src/backend/libpq/auth-oauth.c (new)
     +static bool
     +validate(Port *port, const char *auth, const char **logdetail)
     +{
    -+	static const char * const b64_set = "abcdefghijklmnopqrstuvwxyz"
    -+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    -+										"0123456789-._~+/";
    ++	static const char *const b64_set =
    ++		"abcdefghijklmnopqrstuvwxyz"
    ++		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    ++		"0123456789-._~+/";
     +
     +	const char *token;
     +	size_t		span;
    @@ src/backend/libpq/auth-oauth.c (new)
     +
     +	/* TODO: handle logdetail when the test framework can check it */
     +
    -+	/*
    ++	/*-----
     +	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
     +	 * 2.1:
     +	 *
    @@ src/backend/libpq/auth-oauth.c (new)
     +	 *
     +	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
     +	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
    -+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but it's
    -+	 * pointed out in RFC 7628 Sec. 4.)
    ++	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
    ++	 * it's pointed out in RFC 7628 Sec. 4.)
     +	 *
     +	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
     +	 */
    @@ src/backend/libpq/auth-oauth.c (new)
     +	/*
     +	 * Before invoking the validator command, sanity-check the token format to
     +	 * avoid any injection attacks later in the chain. Invalid formats are
    -+	 * technically a protocol violation, but don't reflect any information about
    -+	 * the sensitive Bearer token back to the client; log at COMMERROR instead.
    ++	 * technically a protocol violation, but don't reflect any information
    ++	 * about the sensitive Bearer token back to the client; log at COMMERROR
    ++	 * instead.
     +	 */
     +
     +	/* Tokens must not be empty. */
    @@ src/backend/libpq/auth-oauth.c (new)
     +	}
     +
     +	/*
    -+	 * Make sure the token contains only allowed characters. Tokens may end with
    -+	 * any number of '=' characters.
    ++	 * Make sure the token contains only allowed characters. Tokens may end
    ++	 * with any number of '=' characters.
     +	 */
     +	span = strspn(token, b64_set);
     +	while (token[span] == '=')
    @@ src/backend/libpq/auth-oauth.c (new)
     +	if (token[span] != '\0')
     +	{
     +		/*
    -+		 * This error message could be more helpful by printing the problematic
    -+		 * character(s), but that'd be a bit like printing a piece of someone's
    -+		 * password into the logs.
    ++		 * This error message could be more helpful by printing the
    ++		 * problematic character(s), but that'd be a bit like printing a piece
    ++		 * of someone's password into the logs.
     +		 */
     +		ereport(COMMERROR,
     +				(errcode(ERRCODE_PROTOCOL_VIOLATION),
    @@ src/backend/libpq/auth-oauth.c (new)
     +		/*
     +		 * If the validator is our authorization authority, we're done.
     +		 * Authentication may or may not have been performed depending on the
    -+		 * validator implementation; all that matters is that the validator says
    -+		 * the user can log in with the target role.
    ++		 * validator implementation; all that matters is that the validator
    ++		 * says the user can log in with the target role.
     +		 */
     +		return true;
     +	}
    @@ src/backend/libpq/auth-oauth.c (new)
     +	int			rfd = -1;
     +	int			wfd = -1;
     +
    -+	StringInfoData command = { 0 };
    ++	StringInfoData command = {0};
     +	char	   *p;
     +	FILE	   *fh = NULL;
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +		return false;
     +	}
     +
    -+	/*
    -+	 * Since popen() is unidirectional, open up a pipe for the other direction.
    -+	 * Use CLOEXEC to ensure that our write end doesn't accidentally get copied
    -+	 * into child processes, which would prevent us from closing it cleanly.
    ++	/*------
    ++	 * Since popen() is unidirectional, open up a pipe for the other
    ++	 * direction. Use CLOEXEC to ensure that our write end doesn't
    ++	 * accidentally get copied into child processes, which would prevent us
    ++	 * from closing it cleanly.
     +	 *
     +	 * XXX this is ugly. We should just read from the child process's stdout,
     +	 * but that's a lot more code.
    @@ src/backend/libpq/auth-oauth.c (new)
     +		goto cleanup;
     +	}
     +
    -+	/*
    ++	/*----------
     +	 * Construct the command, substituting any recognized %-specifiers:
     +	 *
     +	 *   %f: the file descriptor of the input pipe
    @@ src/backend/libpq/auth-oauth.c (new)
     +					p++;
     +					break;
     +				case 'r':
    ++
     +					/*
    -+					 * TODO: decide how this string should be escaped. The role
    -+					 * is controlled by the client, so if we don't escape it,
    -+					 * command injections are inevitable.
    ++					 * TODO: decide how this string should be escaped. The
    ++					 * role is controlled by the client, so if we don't escape
    ++					 * it, command injections are inevitable.
     +					 *
     +					 * This is probably an indication that the role name needs
    -+					 * to be communicated to the validator process in some other
    -+					 * way. For this proof of concept, just be incredibly strict
    -+					 * about the characters that are allowed in user names.
    ++					 * to be communicated to the validator process in some
    ++					 * other way. For this proof of concept, just be
    ++					 * incredibly strict about the characters that are allowed
    ++					 * in user names.
     +					 */
     +					if (!username_ok_for_shell(port->user_name))
     +						goto cleanup;
    @@ src/backend/libpq/auth-oauth.c (new)
     +	close(wfd);
     +	wfd = -1;
     +
    -+	/*
    ++	/*-----
     +	 * Read the command's response.
     +	 *
     +	 * TODO: getline() is probably too new to use, unfortunately.
    @@ src/backend/libpq/auth-oauth.c (new)
     +static bool
     +check_exit(FILE **fh, const char *command)
     +{
    -+	int rc;
    ++	int			rc;
     +
     +	rc = ClosePipeStream(*fh);
     +	*fh = NULL;
    @@ src/backend/libpq/auth-oauth.c (new)
     +	}
     +	else if (rc != 0)
     +	{
    -+		char *reason = wait_result_to_str(rc);
    ++		char	   *reason = wait_result_to_str(rc);
     +
     +		ereport(COMMERROR,
     +				(errmsg("failed to execute command \"%s\": %s",
    @@ src/backend/libpq/auth-oauth.c (new)
     +username_ok_for_shell(const char *username)
     +{
     +	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
    -+	static const char * const allowed = "abcdefghijklmnopqrstuvwxyz"
    -+										"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    -+										"0123456789-_./:";
    -+	size_t	span;
    ++	static const char *const allowed =
    ++		"abcdefghijklmnopqrstuvwxyz"
    ++		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    ++		"0123456789-_./:";
    ++	size_t		span;
     +
    -+	Assert(username && username[0]); /* should have already been checked */
    ++	Assert(username && username[0]);	/* should have already been checked */
     +
     +	span = strspn(username, allowed);
     +	if (username[span] != '\0')
    @@ src/include/libpq/oauth.h (new)
     +/* Implementation */
     +extern const pg_be_sasl_mech pg_be_oauth_mech;
     +
    -+#endif /* PG_OAUTH_H */
    ++#endif							/* PG_OAUTH_H */
     
      ## src/include/libpq/sasl.h ##
     @@
    @@ src/include/libpq/sasl.h: typedef struct pg_be_sasl_mech
      } pg_be_sasl_mech;
      
      /* Common implementation for auth.c */
    +
    + ## src/tools/pgindent/typedefs.list ##
    +@@ src/tools/pgindent/typedefs.list: normal_rand_fctx
    + nsphash_hash
    + ntile_context
    + numeric
    ++oauth_state
    + object_access_hook_type
    + object_access_hook_type_str
    + off_t
4:  83a55ba4eb = 4:  35ca8abdad Add pytest suite for OAuth
5:  49a3b2dfd1 = 5:  fb4cac4e99 squash! Add pytest suite for OAuth
6:  a68494323f = 6:  2008e60b3c XXX temporary patches to build and test
-:  ---------- > 7:  64611d33ef REVERT: temporarily skip the exit check
Attachments:

- v15-0004-Add-pytest-suite-for-OAuth.patch.gz
- v15-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz
- v15-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz
- v15-0001-common-jsonapi-support-FRONTEND-clients.patch.gz
- v15-0005-squash-Add-pytest-suite-for-OAuth.patch.gz
- v15-0007-REVERT-temporarily-skip-the-exit-check.patch.gz
- v15-0006-XXX-temporary-patches-to-build-and-test.patch.gz
#87 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#86)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 22, 2024 at 6:08 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
> v15 is a housekeeping update that adds typedefs.list entries and runs
> pgindent.

v16 is more transformational!

Daniel contributed 0004, which completely replaces the
validator_command architecture with a C module API. This solves a
bunch of problems as discussed upthread and vastly simplifies the test
framework for the server side. 0004 also adds a set of Perl tests,
which will begin to subsume some of the Python server-side tests as I
get around to porting them. (@Daniel: 0005 is my diff against your
original patch, for review.)

0008 has been modified to quickfix the pgcommon linkage on the
Makefile side; my previous attempt at this only fixed Meson. The
patchset is now carrying a lot of squash-cruft, and I plan to flatten
it in the next version.
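For anyone following along without applying the patches: the wire framing the backend's OAUTHBEARER exchange parses comes straight from RFC 7628, Sec. 3.1, and can be sketched in a few lines. The helper below is illustrative only (the function name is ours, not part of the patchset):

```python
# Sketch of the OAUTHBEARER message framing from RFC 7628, Sec. 3.1,
# which the backend patch's parse_kvpairs_for_auth() consumes.

KVSEP = "\x01"


def client_initial_response(token: str, authzid: str = "") -> str:
    """Build the OAUTHBEARER initial client response.

    Format: gs2-header kvsep *kvpair kvsep, where each kvpair is
    key "=" value kvsep. The "n,," gs2 header means no channel
    binding, matching the 'n' case the backend expects.
    """
    gs2_header = "n," + ("a=" + authzid if authzid else "") + ","
    auth_kvpair = "auth=Bearer " + token + KVSEP
    return gs2_header + KVSEP + auth_kvpair + KVSEP


# After the server sends its error JSON, the only valid client reply is
# a single kvsep -- the OAUTH_STATE_ERROR case in the backend patch.
ERROR_ACK = KVSEP
```

For example, `client_initial_response("abc123")` yields `"n,,\x01auth=Bearer abc123\x01\x01"`.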

Thanks,
--Jacob

Attachments:

- since-v15.diff.txt
 1:  92cf9bdcb3 =  1:  dc523009f2 common/jsonapi: support FRONTEND clients
 2:  0a58e64ade =  2:  af969e6cea libpq: add OAUTHBEARER SASL mechanism
 3:  c56bc808b6 =  3:  8906c9d445 backend: add OAUTHBEARER SASL mechanism
 -:  ---------- >  4:  e2566ab594 Introduce OAuth validator libraries
 -:  ---------- >  5:  26781a7f15 squash! Introduce OAuth validator libraries
 4:  35ca8abdad !  6:  295de92a5a Add pytest suite for OAuth
    @@ src/test/python/server/conftest.py (new)
     +
     +        yield conn_factory
     
    + ## src/test/python/server/oauthtest.c (new) ##
    +@@
    ++/*-------------------------------------------------------------------------
    ++ *
    ++ * oauthtest.c
    ++ *	  Test module for serverside OAuth token validation callbacks
    ++ *
    ++ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 1994, Regents of the University of California
    ++ *
    ++ * src/test/python/server/oauthtest.c
    ++ *
    ++ *-------------------------------------------------------------------------
    ++ */
    ++
    ++#include "postgres.h"
    ++
    ++#include "fmgr.h"
    ++#include "libpq/oauth.h"
    ++#include "utils/guc.h"
    ++#include "utils/memutils.h"
    ++
    ++PG_MODULE_MAGIC;
    ++
    ++static void test_startup(ValidatorModuleState *state);
    ++static void test_shutdown(ValidatorModuleState *state);
    ++static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
    ++											 const char *token,
    ++											 const char *role);
    ++
    ++static const OAuthValidatorCallbacks callbacks = {
    ++	.startup_cb = test_startup,
    ++	.shutdown_cb = test_shutdown,
    ++	.validate_cb = test_validate,
    ++};
    ++
    ++static char *expected_bearer = "";
    ++static bool set_authn_id = false;
    ++static char *authn_id = "";
    ++static bool reflect_role = false;
    ++
    ++void
    ++_PG_init(void)
    ++{
    ++	DefineCustomStringVariable("oauthtest.expected_bearer",
    ++							   "Expected Bearer token for future connections",
    ++							   NULL,
    ++							   &expected_bearer,
    ++							   "",
    ++							   PGC_SIGHUP,
    ++							   0,
    ++							   NULL, NULL, NULL);
    ++
    ++	DefineCustomBoolVariable("oauthtest.set_authn_id",
    ++							 "Whether to set an authenticated identity",
    ++							 NULL,
    ++							 &set_authn_id,
    ++							 false,
    ++							 PGC_SIGHUP,
    ++							 0,
    ++							 NULL, NULL, NULL);
    ++	DefineCustomStringVariable("oauthtest.authn_id",
    ++							   "Authenticated identity to use for future connections",
    ++							   NULL,
    ++							   &authn_id,
    ++							   "",
    ++							   PGC_SIGHUP,
    ++							   0,
    ++							   NULL, NULL, NULL);
    ++
    ++	DefineCustomBoolVariable("oauthtest.reflect_role",
    ++							 "Ignore the bearer token; use the requested role as the authn_id",
    ++							 NULL,
    ++							 &reflect_role,
    ++							 false,
    ++							 PGC_SIGHUP,
    ++							 0,
    ++							 NULL, NULL, NULL);
    ++
    ++	MarkGUCPrefixReserved("oauthtest");
    ++}
    ++
    ++const OAuthValidatorCallbacks *
    ++_PG_oauth_validator_module_init(void)
    ++{
    ++	return &callbacks;
    ++}
    ++
    ++static void
    ++test_startup(ValidatorModuleState *state)
    ++{
    ++}
    ++
    ++static void
    ++test_shutdown(ValidatorModuleState *state)
    ++{
    ++}
    ++
    ++static ValidatorModuleResult *
    ++test_validate(ValidatorModuleState *state, const char *token, const char *role)
    ++{
    ++	ValidatorModuleResult *res;
    ++
    ++	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
    ++
    ++	if (reflect_role)
    ++	{
    ++		res->authenticated = true;
    ++		res->authn_id = pstrdup(role);	/* TODO: constify? */
    ++	}
    ++	else
    ++	{
    ++		if (*expected_bearer && !strcmp(token, expected_bearer))
    ++			res->authenticated = true;
    ++		if (set_authn_id)
    ++			res->authn_id = authn_id;
    ++	}
    ++
    ++	return res;
    ++}
    +
      ## src/test/python/server/test_oauth.py (new) ##
     @@
     +#
    @@ src/test/python/server/test_oauth.py (new)
     +FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
     +
     +SHARED_MEM_NAME = "oauth-pytest"
    -+MAX_TOKEN_SIZE = 4096
     +MAX_UINT16 = 2**16 - 1
     +
     +
    @@ src/test/python/server/test_oauth.py (new)
     +        dbname = "oauth_test_" + id
     +
     +        user = "oauth_user_" + id
    ++        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
     +        map_user = "oauth_map_user_" + id
     +        authz_user = "oauth_authz_user_" + id
     +
    @@ src/test/python/server/test_oauth.py (new)
     +
     +        # Create our roles and database.
     +        user = sql.Identifier(ctx.user)
    ++        punct_user = sql.Identifier(ctx.punct_user)
     +        map_user = sql.Identifier(ctx.map_user)
     +        authz_user = sql.Identifier(ctx.authz_user)
     +        dbname = sql.Identifier(ctx.dbname)
     +
     +        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
    ++        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
     +        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
     +        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
     +        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
     +
    -+        # Make this test script the server's oauth_validator.
    -+        path = pathlib.Path(__file__).parent / "validate_bearer.py"
    -+        path = str(path.absolute())
    -+
    -+        cmd = f"{shlex.quote(path)} {SHARED_MEM_NAME} <&%f"
    -+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
    -+
     +        # Replace pg_hba and pg_ident.
     +        c.execute("SHOW hba_file;")
     +        hba = c.fetchone()[0]
    @@ src/test/python/server/test_oauth.py (new)
     +        # Put things back the way they were.
     +        c.execute("SELECT pg_reload_conf();")
     +
    -+        c.execute("ALTER SYSTEM RESET oauth_validator_command;")
     +        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
     +        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
     +        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
    ++        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
     +        c.execute(sql.SQL("DROP ROLE {};").format(user))
     +
     +
    @@ src/test/python/server/test_oauth.py (new)
     +    return connect()
     +
     +
    -+@pytest.fixture(scope="session")
    -+def shared_mem():
    -+    """
    -+    Yields a shared memory segment that can be used for communication between
    -+    the bearer_token fixture and ./validate_bearer.py.
    -+    """
    -+    size = MAX_TOKEN_SIZE + 2  # two byte length prefix
    -+    mem = shared_memory.SharedMemory(SHARED_MEM_NAME, create=True, size=size)
    -+
    -+    try:
    -+        with contextlib.closing(mem):
    -+            yield mem
    -+    finally:
    -+        mem.unlink()
    -+
    -+
    -+@pytest.fixture()
    -+def bearer_token(shared_mem):
    ++def bearer_token(*, size=16):
     +    """
    -+    Returns a factory function that, when called, will store a Bearer token in
    -+    shared_mem. If token is None (the default), a new token will be generated
    -+    using secrets.token_urlsafe() and returned; otherwise the passed token will
    -+    be used as-is.
    -+
    -+    When token is None, the generated token size in bytes may be specified as an
    -+    argument; if unset, a small 16-byte token will be generated. The token size
    -+    may not exceed MAX_TOKEN_SIZE in any case.
    -+
    -+    The return value is the token, converted to a bytes object.
    -+
    -+    As a special case for testing failure modes, accept_any may be set to True.
    -+    This signals to the validator command that any bearer token should be
    -+    accepted. The returned token in this case may be used or discarded as needed
    -+    by the test.
    ++    Generates a Bearer token using secrets.token_urlsafe(). The generated token
    ++    size in bytes may be specified; if unset, a small 16-byte token will be
    ++    generated.
     +    """
     +
    -+    def set_token(token=None, *, size=16, accept_any=False):
    -+        if token is not None:
    -+            size = len(token)
    ++    if size % 4:
    ++        raise ValueError(f"requested token size {size} is not a multiple of 4")
     +
    -+        if size > MAX_TOKEN_SIZE:
    -+            raise ValueError(f"token size {size} exceeds maximum size {MAX_TOKEN_SIZE}")
    ++    token = secrets.token_urlsafe(size // 4 * 3)
    ++    assert len(token) == size
     +
    -+        if token is None:
    -+            if size % 4:
    -+                raise ValueError(f"requested token size {size} is not a multiple of 4")
    -+
    -+            token = secrets.token_urlsafe(size // 4 * 3)
    -+            assert len(token) == size
    -+
    -+        try:
    -+            token = token.encode("ascii")
    -+        except AttributeError:
    -+            pass  # already encoded
    -+
    -+        if accept_any:
    -+            # Two-byte magic value.
    -+            shared_mem.buf[:2] = struct.pack("H", MAX_UINT16)
    -+        else:
    -+            # Two-byte length prefix, then the token data.
    -+            shared_mem.buf[:2] = struct.pack("H", len(token))
    -+            shared_mem.buf[2 : size + 2] = token
    -+
    -+        return token
    -+
    -+    return set_token
    ++    return token
     +
     +
     +def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
    @@ src/test/python/server/test_oauth.py (new)
     +            )
     +
     +
    ++@pytest.fixture()
    ++def setup_validator():
    ++    """
    ++    A per-test fixture that sets up the test validator with expected behavior.
    ++    The setting will be reverted during teardown.
    ++    """
    ++    conn = psycopg2.connect("")
    ++    conn.autocommit = True
    ++
    ++    with contextlib.closing(conn):
    ++        c = conn.cursor()
    ++        prev = dict()
    ++
    ++        def setter(**gucs):
    ++            for guc, val in gucs.items():
    ++                # Save the previous value.
    ++                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
    ++                prev[guc] = c.fetchone()[0]
    ++
    ++                c.execute(
    ++                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
    ++                        sql.Identifier(guc)
    ++                    ),
    ++                    (val,),
    ++                )
    ++                c.execute("SELECT pg_reload_conf();")
    ++
    ++        yield setter
    ++
    ++        # Restore the previous values.
    ++        for guc, val in prev.items():
    ++            c.execute(
    ++                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
    ++                    sql.Identifier(guc)
    ++                ),
    ++                (val,),
    ++            )
    ++            c.execute("SELECT pg_reload_conf();")
    ++
    ++
     +@pytest.mark.parametrize("token_len", [16, 1024, 4096])
     +@pytest.mark.parametrize(
     +    "auth_prefix",
    @@ src/test/python/server/test_oauth.py (new)
     +        b"Bearer    ",
     +    ],
     +)
    -+def test_oauth(conn, oauth_ctx, bearer_token, auth_prefix, token_len):
    -+    begin_oauth_handshake(conn, oauth_ctx)
    -+
    ++def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
     +    # Generate our bearer token with the desired length.
     +    token = bearer_token(size=token_len)
    -+    auth = auth_prefix + token
    ++    setup_validator(expected_bearer=token)
    ++
    ++    conn = connect()
    ++    begin_oauth_handshake(conn, oauth_ctx)
     +
    ++    auth = auth_prefix + token.encode("ascii")
     +    send_initial_response(conn, auth=auth)
     +    expect_handshake_success(conn)
     +
    @@ src/test/python/server/test_oauth.py (new)
     +        "x-._~+/x",
     +    ],
     +)
    -+def test_oauth_bearer_corner_cases(conn, oauth_ctx, bearer_token, token_value):
    ++def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
    ++    setup_validator(expected_bearer=token_value)
    ++
    ++    conn = connect()
     +    begin_oauth_handshake(conn, oauth_ctx)
     +
    -+    send_initial_response(conn, bearer=bearer_token(token_value))
    ++    send_initial_response(conn, bearer=token_value.encode("ascii"))
     +
     +    expect_handshake_success(conn)
     +
    @@ src/test/python/server/test_oauth.py (new)
     +        ),
     +    ],
     +)
    -+def test_oauth_authn_id(conn, oauth_ctx, bearer_token, user, authn_id, should_succeed):
    -+    token = None
    -+
    ++def test_oauth_authn_id(
    ++    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
    ++):
    ++    token = bearer_token()
     +    authn_id = authn_id(oauth_ctx)
    -+    if authn_id is not None:
    -+        authn_id = authn_id.encode("ascii")
     +
    -+        # As a hack to get the validator to reflect arbitrary output from this
    -+        # test, encode the desired output as a base64 token. The validator will
    -+        # key on the leading "output=" to differentiate this from the random
    -+        # tokens generated by secrets.token_urlsafe().
    -+        output = b"output=" + authn_id + b"\n"
    -+        token = base64.urlsafe_b64encode(output)
    ++    # Set up the validator appropriately.
    ++    gucs = dict(expected_bearer=token)
    ++    if authn_id is not None:
    ++        gucs["set_authn_id"] = True
    ++        gucs["authn_id"] = authn_id
    ++    setup_validator(**gucs)
     +
    -+    token = bearer_token(token)
    ++    conn = connect()
     +    username = user(oauth_ctx)
    -+
     +    begin_oauth_handshake(conn, oauth_ctx, user=username)
    -+    send_initial_response(conn, bearer=token)
    ++    send_initial_response(conn, bearer=token.encode("ascii"))
     +
     +    if not should_succeed:
     +        expect_handshake_failure(conn, oauth_ctx)
    @@ src/test/python/server/test_oauth.py (new)
     +
     +    expected = authn_id
     +    if expected is not None:
    -+        expected = b"oauth:" + expected
    ++        expected = b"oauth:" + expected.encode("ascii")
     +
     +    row = resp.payload
     +    assert row.columns == [expected]
    @@ src/test/python/server/test_oauth.py (new)
     +            assert expected in detail
     +
     +
    -+def test_oauth_rejected_bearer(conn, oauth_ctx, bearer_token):
    -+    # Generate a new bearer token, which we will proceed not to use.
    -+    _ = bearer_token()
    -+
    ++def test_oauth_rejected_bearer(conn, oauth_ctx):
     +    begin_oauth_handshake(conn, oauth_ctx)
     +
     +    # Send a bearer token that doesn't match what the validator expects. It
    @@ src/test/python/server/test_oauth.py (new)
     +        b"Bearer    ",
     +        b"Bearer a===b",
     +        b"Bearer hello!",
    ++        b"Bearer trailingspace ",
    ++        b"Bearer trailingtab\t",
     +        b"Bearer me@example.com",
    ++        b"Beare abcd",
     +        b'OAuth realm="Example"',
     +        b"",
     +    ],
     +)
    -+def test_oauth_invalid_bearer(conn, oauth_ctx, bearer_token, bad_bearer):
    ++def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
     +    # Tell the validator to accept any token. This ensures that the invalid
     +    # bearer tokens are rejected before the validation step.
    -+    _ = bearer_token(accept_any=True)
    ++    setup_validator(reflect_role=True)
     +
    ++    conn = connect()
     +    begin_oauth_handshake(conn, oauth_ctx)
     +    send_initial_response(conn, auth=bad_bearer)
     +
    @@ src/test/python/server/test_oauth.py (new)
     +    err.match(resp)
     +
     +
    -+def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
    ++def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
    ++    token = bearer_token()
    ++    setup_validator(expected_bearer=token)
    ++
    ++    conn = connect()
     +    begin_oauth_handshake(conn, oauth_ctx)
     +
     +    # Send an initial response without data.
    @@ src/test/python/server/test_oauth.py (new)
     +    assert not pkt.payload.body
     +
     +    # Now send the initial data.
    -+    data = b"n,,\x01auth=Bearer " + bearer_token() + b"\x01\x01"
    ++    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
     +    pq3.send(conn, pq3.types.PasswordMessage, data)
     +
     +    # Server should now complete the handshake.
     +    expect_handshake_success(conn)
     +
     +
    -+@pytest.fixture()
    -+def set_validator():
    -+    """
    -+    A per-test fixture that allows a test to override the setting of
    -+    oauth_validator_command for the cluster. The setting will be reverted during
    -+    teardown.
    -+
    -+    Passing None will perform an ALTER SYSTEM RESET.
    -+    """
    -+    conn = psycopg2.connect("")
    -+    conn.autocommit = True
    -+
    -+    with contextlib.closing(conn):
    -+        c = conn.cursor()
    -+
    -+        # Save the previous value.
    -+        c.execute("SHOW oauth_validator_command;")
    -+        prev_cmd = c.fetchone()[0]
    -+
    -+        def setter(cmd):
    -+            c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (cmd,))
    -+            c.execute("SELECT pg_reload_conf();")
    -+
    -+        yield setter
    -+
    -+        # Restore the previous value.
    -+        c.execute("ALTER SYSTEM SET oauth_validator_command TO %s;", (prev_cmd,))
    -+        c.execute("SELECT pg_reload_conf();")
    -+
    -+
    -+def test_oauth_no_validator(oauth_ctx, set_validator, connect, bearer_token):
    ++# TODO: see if there's a way to test this easily after the API switch
    ++def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
     +    # Clear out our validator command, then establish a new connection.
     +    set_validator("")
     +    conn = connect()
    @@ src/test/python/server/test_oauth.py (new)
     +    expect_handshake_failure(conn, oauth_ctx)
     +
     +
    -+def test_oauth_validator_role(oauth_ctx, set_validator, connect):
    -+    # Switch the validator implementation. This validator will reflect the
    -+    # PGUSER as the authenticated identity.
    -+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
    -+    path = str(path.absolute())
    ++@pytest.mark.parametrize(
    ++    "user",
    ++    [
    ++        pytest.param(
    ++            lambda ctx: ctx.user,
    ++            id="basic username",
    ++        ),
    ++        pytest.param(
    ++            lambda ctx: ctx.punct_user,
    ++            id="'unsafe' characters are passed through correctly",
    ++        ),
    ++    ],
    ++)
    ++def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
    ++    username = user(oauth_ctx)
     +
    -+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
    ++    # Tell the validator to reflect the PGUSER as the authenticated identity.
    ++    setup_validator(reflect_role=True)
     +    conn = connect()
     +
    -+    # Log in. Note that the reflection validator ignores the bearer token.
    -+    begin_oauth_handshake(conn, oauth_ctx, user=oauth_ctx.user)
    ++    # Log in. Note that reflection ignores the bearer token.
    ++    begin_oauth_handshake(conn, oauth_ctx, user=username)
     +    send_initial_response(conn, bearer=b"dontcare")
     +    expect_handshake_success(conn)
     +
    @@ src/test/python/server/test_oauth.py (new)
     +    resp = receive_until(conn, pq3.types.DataRow)
     +
     +    row = resp.payload
    -+    expected = b"oauth:" + oauth_ctx.user.encode("utf-8")
    ++    expected = b"oauth:" + username.encode("utf-8")
     +    assert row.columns == [expected]
    -+
    -+
    -+def test_oauth_role_with_shell_unsafe_characters(oauth_ctx, set_validator, connect):
    -+    """
    -+    XXX This test pins undesirable behavior. We should be able to handle any
    -+    valid Postgres role name.
    -+    """
    -+    # Switch the validator implementation. This validator will reflect the
    -+    # PGUSER as the authenticated identity.
    -+    path = pathlib.Path(__file__).parent / "validate_reflect.py"
    -+    path = str(path.absolute())
    -+
    -+    set_validator(f"{shlex.quote(path)} '%r' <&%f")
    -+    conn = connect()
    -+
    -+    unsafe_username = "hello'there"
    -+    begin_oauth_handshake(conn, oauth_ctx, user=unsafe_username)
    -+
    -+    # The server should reject the handshake.
    -+    send_initial_response(conn, bearer=b"dontcare")
    -+    expect_handshake_failure(conn, oauth_ctx)
     
      ## src/test/python/server/test_server.py (new) ##
     @@
    @@ src/test/python/server/test_server.py (new)
     +    resp = pq3.recv1(conn)
     +    assert resp.type == pq3.types.ReadyForQuery
     
    - ## src/test/python/server/validate_bearer.py (new) ##
    -@@
    -+#! /usr/bin/env python3
    -+#
    -+# Copyright 2021 VMware, Inc.
    -+# SPDX-License-Identifier: PostgreSQL
    -+#
    -+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It doesn't actually validate
    -+# anything, and it logs the bearer token data, which is sensitive.
    -+#
    -+# This executable is used as an oauth_validator_command in concert with
    -+# test_oauth.py. Memory is shared and communicated from that test module's
    -+# bearer_token() fixture.
    -+#
    -+# This script must run under the Postgres server environment; keep the
    -+# dependency list fairly standard.
    -+
    -+import base64
    -+import binascii
    -+import contextlib
    -+import struct
    -+import sys
    -+from multiprocessing import shared_memory
    -+
    -+MAX_UINT16 = 2**16 - 1
    -+
    -+
    -+def remove_shm_from_resource_tracker():
    -+    """
    -+    Monkey-patch multiprocessing.resource_tracker so SharedMemory won't be
    -+    tracked. Pulled from this thread, where there are more details:
    -+
    -+        https://bugs.python.org/issue38119
    -+
    -+    TL;DR: all clients of shared memory segments automatically destroy them on
    -+    process exit, which makes shared memory segments much less useful. This
    -+    monkeypatch removes that behavior so that we can defer to the test to manage
    -+    the segment lifetime.
    -+
    -+    Ideally a future Python patch will pull in this fix and then the entire
    -+    function can go away.
    -+    """
    -+    from multiprocessing import resource_tracker
    -+
    -+    def fix_register(name, rtype):
    -+        if rtype == "shared_memory":
    -+            return
    -+        return resource_tracker._resource_tracker.register(self, name, rtype)
    -+
    -+    resource_tracker.register = fix_register
    -+
    -+    def fix_unregister(name, rtype):
    -+        if rtype == "shared_memory":
    -+            return
    -+        return resource_tracker._resource_tracker.unregister(self, name, rtype)
    -+
    -+    resource_tracker.unregister = fix_unregister
    -+
    -+    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
    -+        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
    -+
    -+
    -+def main(args):
    -+    remove_shm_from_resource_tracker()  # XXX remove some day
    -+
    -+    # Get the expected token from the currently running test.
    -+    shared_mem_name = args[0]
    -+
    -+    mem = shared_memory.SharedMemory(shared_mem_name)
    -+    with contextlib.closing(mem):
    -+        # First two bytes are the token length.
    -+        size = struct.unpack("H", mem.buf[:2])[0]
    -+
    -+        if size == MAX_UINT16:
    -+            # Special case: the test wants us to accept any token.
    -+            sys.stderr.write("accepting token without validation\n")
    -+            return
    -+
    -+        # The remainder of the buffer contains the expected token.
    -+        assert size <= (mem.size - 2)
    -+        expected_token = mem.buf[2 : size + 2].tobytes()
    -+
    -+        mem.buf[:] = b"\0" * mem.size  # scribble over the token
    -+
    -+    token = sys.stdin.buffer.read()
    -+    if token != expected_token:
    -+        sys.exit(f"failed to match Bearer token ({token!r} != {expected_token!r})")
    -+
    -+    # See if the test wants us to print anything. If so, it will have encoded
    -+    # the desired output in the token with an "output=" prefix.
    -+    try:
    -+        # altchars="-_" corresponds to the urlsafe alphabet.
    -+        data = base64.b64decode(token, altchars="-_", validate=True)
    -+
    -+        if data.startswith(b"output="):
    -+            sys.stdout.buffer.write(data[7:])
    -+
    -+    except binascii.Error:
    -+        pass
    -+
    -+
    -+if __name__ == "__main__":
    -+    main(sys.argv[1:])
    -
    - ## src/test/python/server/validate_reflect.py (new) ##
    -@@
    -+#! /usr/bin/env python3
    -+#
    -+# Copyright 2021 VMware, Inc.
    -+# SPDX-License-Identifier: PostgreSQL
    -+#
    -+# DO NOT USE THIS OAUTH VALIDATOR IN PRODUCTION. It ignores the bearer token
    -+# entirely and automatically logs the user in.
    -+#
    -+# This executable is used as an oauth_validator_command in concert with
    -+# test_oauth.py. It expects the user's desired role name as an argument; the
    -+# actual token will be discarded and the user will be logged in with the role
    -+# name as the authenticated identity.
    -+#
    -+# This script must run under the Postgres server environment; keep the
    -+# dependency list fairly standard.
    -+
    -+import sys
    -+
    -+
    -+def main(args):
    -+    # We have to read the entire token as our first action to unblock the
    -+    # server, but we won't actually use it.
    -+    _ = sys.stdin.buffer.read()
    -+
    -+    if len(args) != 1:
    -+        sys.exit("usage: ./validate_reflect.py ROLE")
    -+
    -+    # Log the user in as the provided role.
    -+    role = args[0]
    -+    print(role)
    -+
    -+
    -+if __name__ == "__main__":
    -+    main(sys.argv[1:])
    -
      ## src/test/python/test_internals.py (new) ##
     @@
     +#
 5:  fb4cac4e99 !  7:  7d21be13c0 squash! Add pytest suite for OAuth
    @@ src/test/python/meson.build (new)
     @@
     +# Copyright (c) 2023, PostgreSQL Global Development Group
     +
    ++subdir('server')
    ++
     +pytest_env = {
     +  'with_oauth': oauth_library,
     +
    @@ src/test/python/server/conftest.py: import pq3
     +                        f"-c port={port}",
     +                        "-c listen_addresses=localhost",
     +                        "-c log_connections=on",
    ++                        "-c shared_preload_libraries=oauthtest",
    ++                        "-c oauth_validator_library=oauthtest",
     +                    ]
     +                ),
     +                "start",
    @@ src/test/python/server/conftest.py: import pq3
                  # Have ExitStack close our socket.
                  stack.enter_context(sock)
     
    + ## src/test/python/server/meson.build (new) ##
    +@@
    ++# Copyright (c) 2024, PostgreSQL Global Development Group
    ++
    ++if not oauth.found()
    ++  subdir_done()
    ++endif
    ++
    ++oauthtest_sources = files(
    ++  'oauthtest.c',
    ++)
    ++
    ++if host_system == 'windows'
    ++  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
    ++    '--NAME', 'oauthtest',
    ++    '--FILEDESC', 'passthrough module to validate OAuth tests',
    ++  ])
    ++endif
    ++
    ++oauthtest = shared_module('oauthtest',
    ++  oauthtest_sources,
    ++  kwargs: pg_test_mod_args,
    ++)
    ++test_install_libs += oauthtest
    +
      ## src/test/python/server/test_oauth.py ##
    -@@ src/test/python/server/test_oauth.py: MAX_TOKEN_SIZE = 4096
    +@@ src/test/python/server/test_oauth.py: SHARED_MEM_NAME = "oauth-pytest"
      MAX_UINT16 = 2**16 - 1
      
      
    @@ src/test/python/server/test_oauth.py: def oauth_ctx():
          conn.autocommit = True
      
          with contextlib.closing(conn):
    -@@ src/test/python/server/test_oauth.py: def test_oauth_empty_initial_response(conn, oauth_ctx, bearer_token):
    +@@ src/test/python/server/test_oauth.py: def receive_until(conn, type):
      
      
      @pytest.fixture()
    --def set_validator():
    -+def set_validator(postgres_instance):
    +-def setup_validator():
    ++def setup_validator(postgres_instance):
          """
    -     A per-test fixture that allows a test to override the setting of
    -     oauth_validator_command for the cluster. The setting will be reverted during
    -@@ src/test/python/server/test_oauth.py: def set_validator():
    - 
    -     Passing None will perform an ALTER SYSTEM RESET.
    +     A per-test fixture that sets up the test validator with expected behavior.
    +     The setting will be reverted during teardown.
          """
     -    conn = psycopg2.connect("")
     +    host, port = postgres_instance
 6:  2008e60b3c !  8:  26dcd5f828 XXX temporary patches to build and test
    @@ Commit message
           0001; has something changed?
         - construct 2.10.70 has some incompatibilities with the current tests
     
    + ## src/bin/pg_combinebackup/Makefile ##
    +@@ src/bin/pg_combinebackup/Makefile: include $(top_builddir)/src/Makefile.global
    + 
    + override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
    + LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
    ++# TODO: fix this properly
    ++LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
    + 
    + OBJS = \
    + 	$(WIN32RES) \
    +@@ src/bin/pg_combinebackup/Makefile: OBJS = \
    + 
    + all: pg_combinebackup
    + 
    +-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
    +-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
    ++pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
    ++	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
    + 
    + install: all installdirs
    + 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
    +
      ## src/bin/pg_combinebackup/meson.build ##
     @@ src/bin/pg_combinebackup/meson.build: endif
      
    @@ src/bin/pg_combinebackup/meson.build: endif
      )
      bin_targets += pg_combinebackup
     
    + ## src/bin/pg_verifybackup/Makefile ##
    +@@ src/bin/pg_verifybackup/Makefile: top_builddir = ../../..
    + include $(top_builddir)/src/Makefile.global
    + 
    + # We need libpq only because fe_utils does.
    +-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
    ++LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
    + 
    + OBJS = \
    + 	$(WIN32RES) \
    +
      ## src/test/python/requirements.txt ##
     @@
      black
 7:  64611d33ef =  9:  0ff8e3786a REVERT: temporarily skip the exit check
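For readers skimming the range-diff above: the new `bearer_token()` helper (replacing the old shared-memory fixture) is small enough to reproduce standalone. This is a minimal sketch extracted from the diff, using only the standard library; the docstring wording is mine, not the patch's.

```python
import secrets


def bearer_token(*, size=16):
    """Generate a random Bearer token of exactly `size` characters.

    secrets.token_urlsafe(n) emits 4 base64url characters per 3 input
    bytes, so `size` must be a multiple of 4 for the length to come out
    exact. The default is a small 16-character token.
    """
    if size % 4:
        raise ValueError(f"requested token size {size} is not a multiple of 4")

    token = secrets.token_urlsafe(size // 4 * 3)
    assert len(token) == size
    return token
```

The tests then hand the expected token to the validator module via `setup_validator(expected_bearer=token)` and send the same token over the wire, so no out-of-band channel (shared memory, in the old design) is needed.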
Attachments:
- v16-0005-squash-Introduce-OAuth-validator-libraries.patch.gz
- v16-0001-common-jsonapi-support-FRONTEND-clients.patch.gz
- v16-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch.gz
��nto�;�<����x,5��0�>x\������o��m��*7����4
��f�n������Vh�a$�5�4l�*�+d�-�r��/.�V���4p����U��6�W	�5���h�B�xf�i��xp(��@>A�	��le%��Y���E)�)������'���N/�����1��LjI�l�$n9�h�%�
������]KQ�F��Hm�\�.��&�RX'\��{{t�l�?k�i"�kPq$�
����1&�>�<�N�T8�������B��t5EL�c�n7��#����o�<��a����5�c��k�*�+�rh��r�JiGP���d����%J���RJ��S�X��M{�{C&H�P��8���Q�NrM�k1�W,�������Gi��|�b=Ui���a�P8��,�_��f@�2^�i�Ey��NJL�sl��'���a=�1fA�_��L�k@���O�a�����Q9����a�������9�{���l�l{�n�H��;�O����R�M]����G!�Tzb��V�3�%�I�U�G��"�SY���?����[fB�����;��Y1����S��o6'&>6��!���x�T�
q.KC�2����rK���j�I1�m����)�R�~)������K�_���w<Rc%�*�+��Z��G�Hw������sw��x6V�r^�O�J�����I��"��T�X�d����F`�7�rq0��!`7�$q�&�'���k>�*&GbF�~��Kh�8��2�����R�[�~��������o���&3O<�Lo�����S <�0DA!b�����Li@$M�bX$�k ���|��gF�9L]9��c�Y1	�Lu3��
��16�h�f)&�����k���?b�� !���Ga|�v`�������������?j�n%���"��������2��n8��	�X���$g������8s�d��etd\����3&w��	&�M[����.�q�LM��������c�����z��K�@#l����m��L�P�V:�=�=�E���M��z�X��q@�����/���)`Q���4
��lN�1:�K�Ia l����A	��\���������e1��4���-	�5_�]4�z��`-�@�73v��X�J��\X��5q*����^���k��m'��#�����`�}���kG�FW�1������	y�K�&�%5,�������F{���}��HR�����Dn�l<{� �?��s�/9oV��������u�4�L�` r��ld�*�2�~�#[�N����C�/B�XvI�))y��G����8�/�Ol����.�{�c�hTB�~K)�y�n,����&@����J���E�����<'���@���v�b�S	�_w�^�����@Cqfkl���w���b��F���=��	���}���-����=*��~�M��{������=�����4j.���D�Qx&��<��S��/�Cx��F��_��*r�
=,$�}���jlM��FJ���.�����N�]Fn�-K;6� -�yIw���T���:��N%l�f���N�6������(��]��o%��e�z����a���^���8Rhvk{�v�i���
1�%�d�@$�G77�h-���{�#u�,�h�A���b��$��a��9�p:�=s/�v��<�_���v�S�a��>�����T�O��T��L��
�����(�F��`s6���Y��a�&m�\��hzN�� �!�I�(w�+��������O0�\�s#��oH������=���:�{'\w���~��0�!�����?U��ZW.�������j�=<\c��}n�>+�}�*q��Zd�����b����o��b�`% a�|�<|H=��y��2��������e��l|�B+P���80=x��p��:d������c����:��:��"�� ; ~;E^��:�b��jbF��0���� N$J�X�c�(��8�1����+������6����'����(A�}S�z��8I0Oj�T��0��-�uH�zE�����I�!w�&�j��w���G��X�	���|m�*w2��@Jn��� i��q8[[c�5�JtG3���5�����V���'�E=�����i��I�������/����G�8�{
W��)8�SD�f��P����������!�q���O��5�X��"X��F�0 -2!M�|=����C6���	 B���0�$��,����M�����2�����1�O]�x���YJ��Rp�����1�j��&��Rf>��_�U���.D8a��-�'�H��"SH]��0�
��FsB0�7�����
�����r�^�J��yh$E@1J�*��^Rt�dqNr�t���\NiL�%M���&�\I�����`wsdyC��<3$w�/=F�8��u	��f0���5����`6�F�	'��Xac�8��
�fdk���R#��������'���N@�w���>�_dwY�_���6.�%�9�cN@<��l�0-\�j@�F���)���2�(���6-5R�;,$M���w]�����O� ���z�x�"�
�fag��K��M������1I��Q��A���h�XGm82��������$>[�s6:}5�����}ct���.+���
��(Z��L��Gl�Z���?���.�%A�<�����������)��$}�W��@@R|�g�_��P���l���'G������m��-�9r��I�3Ta_���1�d��hw��/�9�������D�Yd�q��(���v����s�Q) ���P	3Rb0�X�x�:/5q�hV�On����Rld��D����N]�js����q��,�J<��t$���`��f+NO���cB���k��������M�|!oq��z�=:9k��<�c�O���B��eO��]�����w���t~z�]�X	z��lg�g�g���Jp�������=�*�-�(?y�f���9d�.O:��cM��VAO�>�W�R�������g��b�z��;�����
}f_�S�'��>B�p��� x��bX�H��[<�*"��Tm�n��P������8��S���$ ����f��G�s�>#�VS)�C�4����M;�����I���zg&���k��SH���(�s��C�E1�`o$[(�?�?�e(J����~���Oh/�B��#����F�Ld�}���e��pg������[��?Wo������&7�����I�u/t?�K�z���HA�cU��W��!�a�������}�4�TE!������F��f4F�H��s�}��m0�Y���'f
����7ash�fj>���W-��n�>)��'�_���#H�����r|8���c�y��|��a�Wh�����d�ox4��a���mS�IiY~��#�+{�u��{`��nw��<��������5�_F�r����L���e���~��t���U����u&����?���f3����6�6��V�G�����u���vX�1�?{8����6��*\d�a��Er�x&NR�����&�$����]��������;�@���pD���1��x���l��P��W�F�����~��+&f.��>T0I�"������f�/��2��*�'�I�2��8�]�$�vQ�?��$�=�}A,���E���T�&��n(yO�N��$�HM�X���B��0�v�6�@=|l�Go<8��#G���J`�a�jEB�:k���*��+�����8�+c�F�B��=�C&�}�S�a�	w�"����Y��
�����XE@�M��������
L�,
�d�����l��q��1�V���$xh:��$"��I
���m��/d��"}���b�pu� C�~
UJ&XE����)�35������V��D�E����WP����� A��9C<p�
��)|*����[�}M0,WhM4�oF����1��~���CV�1j'������WIY{#����:�@��@��{$)�?*&	T��e��9��u
������@�/X���^x-��lp������!jQs���!Z�x8�z���]* e���V����
j,s+q��� ��8����`<O0�K ���W7 ���A���4���e���[w�l�~�sxNq-�q0Mm��3Jh��\��?���Y��'X��D��f��|C���[u�k�kKj4#<�a
���� ]f������P�2���d9��ol/��a�'( �%��T	�F2�l�Ao��W��9�@W>���^�2�FVd���T�pe���Y2�b��}y��,��-��?$P)�\�E�%o���+��`z�$j�r�i��7�k�P�Ha9����~���F��,uAv�3�1vg�8D��l�'��"����S�s��}E���%3,0���N&p�����GlE������(��N)DEM�1������.�P!8]V�s��d�(Ok�}�<�2���E��p����SF�PB��?L���J&�����2��Ay���,
��g�(]]Z5F����4t���m���J9�fK��?B���S�w�)�.
k���
&��oC��"��_����q�3{��f7e���0fo>�E�=:{��@�
��B��[U�d4����9��a���}��[���!ZE�����f"(�(\�l�d���[�O'�����4��y�f��8Ib�i�.x��u��i�!J<�y�X�Uq(&��i���s�H��|4��p���1W������$��d��M)9�5S:��fp�7:t�=�a�>�a0�{�jd����xa�f���P��L=�T"f���J^�P-�S���!^����,A���eg�����y����4�f��/��6 ���/���xy�n��=J�����^���[�������d�������
{�������_����Dz�
�eB��+�����Mc��v5�2��g�gGg'm�Dv���.����
�W�c���L��p)����S��R�p8�zN�]�CPj�"�@�a��11�l6���� c��,M���	R>L"����p����T-�
 HC�JM�7�h�����VN�6-�����=,�����
��(&�e��+�}�ql��AV^��"s"A�����U����4R���eC2�C.<sSN���;7���\�$�$Z"�% �v@�l|����������c�����#�S�^a2�>Aj���6�xA�����y{V'���a�����QZ}���xN�\,8��f�l��NRk�����"����D���
(��]�� 	p�2Z>�T"���N^64�0�'����P�t��*j��*�9C�3�J�p����(I����A��nw>�s�x��/f���
#�(���@J ���(f���f!��@��#��`� ��q� �x��.����T*�"��x:�!�2���.�<8x��5ar��T�R�e���}���|��}t}3�b��
�	\D��Bk�.=���X�fn}������`�r��NV]�It>{�����N+��S��������Q�3�I���mT��`��G��������;K����ls��	����		j5��RT+�wE�
]�-�������  t�Z��Lw�G����qK^����j��W���M��M=�?�����.��N�'^}?��~�|�h(��[����o�i/q�V!�7I���
���:k����y��XX�q�0���0����u X���#��>�s�:Q���n�N�>�`?���h�;��:xM��Yj��R�Y�D-OM1�
�ZP��e�-XB3�K0�i�������(YB��Tk�_�[������������Z�n�W���Hm7kB���a{�r<M(�6AX��S��"��|	��&Zw���z����<�D>���9O4"���������+���@�s�Cn&�����n�������)��<0$�(N�O��L�1	
��S�VF��XJQ�i$|��:]&�����m��4�~�k���I���C���#�	�a�!�3)��5��O�}������V��P�����"p�.P)��R����n3Z�SK��SO��]Bi������c�
����)>�T���G)?�d>��'Xj��.!������#'��p�
��x�����"S�_���Q�����T��P�����.�y�U�����l8��~���|�U��w�����N��Y1"XR����������-/���-�3v��Fj�P��a�Y�X�������5]43�2��c*���V�<mx��d[Y���Z�y����(��0��y���J���{�_(RKz6�*39� �%�hGFz@�HM
��-�p{Y��Y��(8����@��t��F^�������{���
�~`�!�e�s���#��Q/r��u��_B��6e��sO?8����_����l`,_��`�-��f!��B2:������ ��N������;�X}2�(V%��
���.�UW	l%:P����
-S(	4j�`;;+.Q�"D]i��a�QV�p0��*rf��\���#v%9�O��e�m�h`p���������c��H��j����d���o~3@���W����s�A����	��$-!��<gT��a��ib���R�8<�tu�l�3��G^�`[v��	�J��N���C_�\4����i�k!��4��JS94%�s�i6�0��I�����"����3\7~s#�
p0aV�Q�{����+�����e�i�R"�[6�K�6#]�9����������*3g�6��
��L������
A�7j1;�m<����u6��%��G��������#�5����^=f���9�R\��.�I\
�K�E�����'J:V�����-�t�v��pN�c��e
�b��%����=A�pl�^�>��x�/��i�,v&E4�����V�Q���a<���9H���
Y�(g�s2��Db�-Q8��|���	�N,(A����8d�;SF�*���.RY@�1P������]�O&���)�xB�=#�TS��!���vW�D_
g�]xUR�Paa�i�"�[�����2�n��y)|S�E��&��+K��G:���!�v�|�����"��"�}�R�<���J6G�0r���z9_Q^���B�u�J�t{�J�Q�����8E���{�[�(3+dpX@��
�g�7��`�?�1��}�?��lsp��sb[C.�!���G����q�r�\C�)
P�6- �+
�EO�(e.��"�J�Gn�r������Y1g!�#q��jr��c8�������{��=��|h��g��?%��M-
u�D|�wXcU���F#� �����3��l�5�	��`_��0(��9�]��`��%�xH�f����2��b9;�S[��F���,~����������V���]q)I���Z�3e��W9����6�K����qa���M�$���j�G |��g��+�Q���0"�����5��=�#p0����(���@���`��PP�Q#BGwzv�2���goO�1`��&qq
b2a0�:w>�1��:�>v�5n�P��u7Q�����*�R���NR��j��z�!F1(e[�W�n�s���PB�+$��l�[����.��������=�0���n�;�8�}_PN[Sh�~�8E����	�lz_lxq��1,o���%_}�(4����M���fwk���V��&g�q�5�c
�z�0����[�����Q���`MZj��Mc?�I�rMB.�X��eEz�n�{(�4|q��2 h���9�<\w��L���&d�)}���V��#�Y��6F�x�mf������m�[*��(*6I�����<��F����k;�=y��<��Y::L�w$u��A�	�QEz���x�x������	.pd!_�!j��yJ�D��ij<
�we�^q:X���w�3vn$WR���s%}b �*�Q���������J�+�R
s�t��(�[���K�}g���7��`
G/������?E�ph���U �W�X]�X��������>[�Y��,������'�����0Rz8|�4F�"%6���T�l����OIy;��N6��FC�����
�t������E
	�Tx
�C=7���[k�b	>;����E">#E����:�!� �����
����Q���������
��[2�
�Sr%A����t�"�H�T�S���
���Tg��Qp�R#�v�d���H��8���'�x���[*s�9G��Y���R�o��1r�S�q�@��HDm����� �R(;�f�s�nX��.����'n#���t���+�1���/�9�5`�2��c�����S�d��,d��5(8��=(Ro��tW /��A<�tB���7�o��DGP��y.��M�l_����w��e�uz����q�Xg��A8E��H8<��J� �r��G���L%;��IU�����=Lh-w���������I�:�Br���{�;��_�������6�/�)�~T�\�D�G#���h������%j�+=�fw�����z0�
��nQ����Xk�L	>V0���������A|�/Bb�0�����]<�)�����X�$���]a�7�3�����>�<i�����qb2����m���*��{��	���A��Q��=�
����aNM�>��&�c��!R���I���P�"���+�F�L;�F����'��0���1c��Y![@���L���g�Y���jt}Mso�K�/��3x�fp�����5<���Bc�IT��-KA�Va��cn�c@b�HAN�U`XWx:�A����_��$^9�~d���f���JYGN���K�_R#0�n�NB�$�En���8-OG�!DS�#��JB���c�����P6����d�[�t#�d>��A~�G���:n���+T�`)��Q)�G!�j�0F�*!M���X�.�@t88�{�	b�z2c�*%*�6�4;g8��7�!�d@%pj+��	��>9{�8�4NN�T�0�?��w�����!
���)A,$���~k�k6�������M���/���)F�Xb�V�|dN�$�Ri1��I���r|����������!�]�[�s���_b��'.��S�/��������[u�,������g[�p���%��p8w��p�����������G&x��@�e����divq�S�P�?SU��/M��d��c�MVi��N�L�O��J�K.[�@��+jp|�	�la�`����T�1�\X�C��/����;�X��%0��[���V������/�w�*��.Z��@�A���\�_e�_+���.�bz�#d��}���ih?�"wg��q�."���Z����t3�*g����)��N�5��VB���hZs�o�@a~�_Op�����&���PT�2[=�������tR�����-�N_3����1Y4�.����dm�5;�R�������Dg�w&i'�~Z#'3� u�9��T��^��5�����_���o�������e�� +�X��/��/�$�����b�f�c������z���*k����e���4m�35<\q�_���|hmd��M)�b� �'�A�pn�|F?���%���h�0���������oW����c�
s�<j8	c�`�1�1�����V���4�����L���l);�*�j�7�6����\WO��I������#9)�{�K�?"U�?����_v�Ly���0ce=�������.����d
�������b���oX9���8�^N6��_�.�,�]	sZ��F0p���nem�=����X4�BD��<�������;~_s��YA�ww��@�2u��]���W�c�Y��������K���d�rj,�nOVX�\�i��9������|��C��#�4�M	^<5N��c���
%����P����<��SD�d2C��xP�GeI9,�gMMt�
��?�i�B��Lv��x������A��f@�r�I|�>��%�,�������e�	
���w�x3i�y�������,��A�N��D�a8b�c<BQ.FkZ@�;[�����j�Oj�(�%�����!�����W�=���7_{�9q!(>�����b�L�C�\5x�b�f\l�D�Z�'���L����&�A�����K>���y�y����:O�x����W���A�YU�{B����\�#��c�}�r�z�f���S������Y��.(��������5o��������<��\���^���*N�Rq�s�,q\�����/�.xv��s�n]�7��dT9��u���1����.3{���V�Zp�6���B����j�g,6F�.c

���7��x�AdR�+eFu���y�v�
�?������G��u�S$c/�!4�+�.�T}qg�m�C�l��(dS
u�
�#��5`3�5V��8�[���r���A�����H$�o~��};���������R��|Oa������[������D.��V�1���9�B�!E3���hJ�p����0mF��~��]e�}�"��`��Gq~�g���{<����[.q�+����������*~��V�U���	�o�
��<K�b���u��T�� ��O�i�	�i����'���T�d��Q�vk���>O�#@]�C4;�a�tww=�������V����vw�v{;��j5x����Cu67��������U�
6������?����_�?S�u�����@/�r��;�,<O���M�a6��)���'wS���n9�?������SQ����'�k��bVt4O�S�k�x&KZ���59q���(�s�A���A�{��[�c��Z�ZGd����g}����"d��KL��Q�1�N��n6�{�~�|��"���3Wy�{4�*��|������Y��sv�<mw��N_�^#g�IqCB�D�?�h0{��"���Hx�(J��V�2�B��xht}
����a;�E��)��n���&���.��Ce���D{G-���^u�6-BX����W6\kq������yv��7�������4���;%���}���Z����/�O�6�)��?<��ae
}��RL���C?a���I�9�C�
�kY��_�
�8�2AZ��T&"p>�A@�c�>�GvJ\��lo:D���6;1h�	��|��*�(�*wG3��t��{�R��$�b�?���;6Vy�������(A�ai
[����W�����Q�"���8`6���!��a
��#��o���_��L�CY�XH�T�!�}�j�����n$�1p�l3<*�4�
�Xi�;����E��|�B���EP��v?R*�LUg��@��������2#�P������!�=b�V7e.A�V��G��V�mLD�bs�WbYFt'L�e�q���u��i��[�6�
�S�@S��zba��JQ��2t�,�����nB"�@#	�����w���M����cT��'&c��]�#��oC��#}�{��ut��l�X�	2�=���$��/B������K��!��h7���;���|b8��$������<���<=>?k�^�@�f��������t�MbED`2G�_R=��$��[�t}N����,�M|�JO)��t*�_����R�3�|�b�����\��c)���S��%v���[��I���r�L�v��L2~z!%9�:��!#��&��1e�i�
3	���xa��
_j7�.������N:�	h���j tst�BA+?�1���=X�1��*���P	�Q�<�>�����(N;^������}�q4]���R��I}�����3K��G;���u9'&���+��F��E�l�����RH|�7�|1�q#{�;nL/�y���'���}uv�Mfe8g�8���`|�0R����(�!����o�`��sw��iB!Z$Cm'T'���w�#�6� ���Z��P\�Kkv1���_�����mAw�E����bM��
��7�@�E�#�@�3���v������u���X-^�����@/Ug��h�ZO���8�O��F�8�I�:�"WWF�BQ�k�Pv6��U��J�*���r��dc�;~<�*��y����^�a:�pY���6��q>/��lMmfQ��~�k�f"��B����-�od!��GJg����\?/�oFt�!TRuh��T
R�;���~�)����&"S-�f|+� d��!F��n�D���N��{'>m��V\�he�6]��%�3������NdX�gGwqT��r��.��.���q���	������s���dE��{��-����-c���1�w88y�l��<����SbT^�3�����
�Pa�0���d=mw�T�����hR�m�����9�n�4&�N�����R�(�}#�T�ZI6���h��gCt���(O����K��e�N{���*��f�Ys��F����n�vF-Y���-�b�*�A���
�v�O�}�PH����G��#������^,]�v��l�F�O(�tN\u*�F�g����f��}6���Z���|�7C����c���}���_d�TL���l��!�Tu��/��Z7E��f*�<�tnL��pNT�}#�>?�d����#M����������gF��!&O���?���;�"�E�M;��
2�9�F�l�2>�� ���T�������;X,���79�����R�G����	0�[��eO�g}��8�jj�#���a���qy��P�}ST>s��7���O ������T��D9j�$����l����7�&�����A4�>��(���!��d8�v`
:WX"����4]�?���Cp`����0c���b4��X��s[1��+^��e�6�����d���_4�FW�f��C�|�N�5�YA[.�%1B�NgQ���5HhC�Q��������T�=�O�V�?Yq��[�
sg@5�N��M��u��xX����^5�Z�����>��������z�!�&���>n�7��.������O�j�zk�����T����t������JeC{�K�@��7���H�Ncl���6n*�+m����&�y-M�7$�rL:�xF]�=��[81���|���z�9b�@A��N�z����!�D��������]0Ca�&����\(t	��h,,�e,,�����3�q~�\�M�j�De�������\W���7����"����"(�;�ym M�>w��|���n�������f�A�Z�/����{�
[������E�"xBc��D�)���av�cb����p�����IB�4��%�Cd������x����@K���	���@��z�i��b{PBJ���c�u��io��S������K9td=��D�0�7$M<��^�2n0�(��C���!�~O��T�oa���0a�{	W����,�z�)��D�x&��f������z�dx�k�\�����-�|� f}�7~�_�v���J�����4.^�;��!��}����6~�S ����s6�'���9��3VY�>s���Q���g�.��N7d���9n��B��L���QWCP��>�y-gFE��l��Q�^�SD��ais������I�;���i7�tG�EH���o�%������
DF����W�������e�"l'S�8Au�����S*P�5�D�,p������v����K��G"W������=v������(�[�3��=�V�����9���D���x_L��t�}!�z��uzK����18������r��T�6��v��=����	&�j��<>#Q�>��!�64�����d�&���F�@�//Z���M�8o���p�����	X%l-w��L��,���;�5������)�1��E�\>�1E��6���}�F'�e`To������ }�������aC4�h���7U ���]��B^Ki1��c�������mh:Ly,hc1��Z.-��>JR����������U������
/9$
���Z���N0gf��Z�8�R�����
E����1�=2��k�5���x�M��J�p�@�>�����q�t�z��r�p���������0�"��?������u�������Cx��!��/
v�'���b��s�J��C]r�r�(on#��y�2[z$���#8�����r�h�^��sya�D���?a~���|��9����BS�"N`#���n����\�/<G3d���_�P�%\h�ub�0K,]�"��\DqPn������c:�8d'�.�=
G0n8�����elV�M�'��XJ����xa��dG����*L���c���������2d�W:����b�.M��=�$!= ;'#�r5���#��+[e8�S?8��!_7*�'w^�1[�x��y������;�>�l8s�q��.J8}����s�~.�@I���nR
�	�w�������X�~�!���F������j,$�K�5�a���-J����Kj�Q�z�J�p$Q4�" N��%�t��
uo#�S��o��UWG��(�#?�[�����^�1�khL�/����n��v�#���e�v�c-��"�G-o�:k�J�p	������Qr.��^���X���+Y6	9��+qK��L5`hz�G�]e3m��T#�R=8 ��$��sFRH�s��R�}����2������2n���_2\4�IEar�����n8�ZT�k�L��������eW������o�l��t-%ZQB�����@�.��]ET�Cb-)wEYY��Vv�L�����>YY�R�>s���x^����2Z�Uc���u�4H
�s�����9�!R�i�+E^9��n���40;��~��P#R�*�\u��@�yc.��%O�2@��hVk^�8
C�9����'/sP�$DOPT��Y���V��E�W�OO��h���-N����� ���C`ICsfi�;��T����� +q�WG���VU�"��� {�|����d8�Q������,��#��4�s�����kn��]/�R�tV����<6U-�Ewe&#5�|��#����g��Rt��/J���uK�x���Y��$��G�����; ����#�p#I�oJ�;tx�['�������3���L��Do��J�"�P�d�]����Y5e�V����kg�B.���5�<�
W�����=�z���8)�8{�e�8�J�B�3���DPiu�b��QD�V��z�B�)�L#�T��O�h�!J����a�r�$tRWF=�@y(J'bI3a�����
�o�p�|��^��[�;?N�N�3�z��S�"�cco	��s�}��hUH}6�0"�&����x�m�*���O�n���J�a~�)�B�b�$�m��4�YvP����������������W� 7d�=J"��vj�)#_�"��l&'����6~Y��+0c����e�DQ��b�?Kr*���H�����,m:nC���%.�t�\}p@/���d��P���=��A.7�:X��48�V�0�/1*�d���<|�l������{	�c~d-v�
������T�E.��,����|�������H��u�2C�XD������&��X������������G����`�q%����/��
G�'�!�����c��#��}��GdI0��RT���]������B��('��������W���`�\O�&�����dF�Zr������*�=��fp�HQW��=�s�Tp�E)s�e��c<��y>�BwI?������m[�u�w�Q�u��U��N"������k���Xj��z��%�U�+U�	6d4��2(������T����`�P�t0aS��H�<��B�&���3�����F����Cw�mn����=��y���?P�L-���
B����d�(2bF�c�Xr�Ofw~B'��]��r��x���q����m6Ws+��m���G�J�a!�5e������W��F|g�L�,`�-� ���3��&k)o��h������:�>�My���������
|�����n�^jc�g��Na��6�f�y��#���o1�E#*
���	/L�`�e_�'o�O����������������jG�F��4�E�?]��1M���`�rA����������;��"2<#D���k;e����d�Q�����u���5[O����S��G����h�g���\�o�v���p������9����{�s�K�������b�@F��=r��T�>���F[b|1�y�GQ��pD%���>�����$�G��C��$i�
��G����`a[�fON<E��9#��z,@�}��q*���a�B'����/�<�a�U{�~Rn��kj���)��r��:(�o��jg�A������v/k�Y�0�:��$��,f�'���*&��*L�)�����+i���a��$�,��K�7�u��q���b���7�������������0��\��!`w�O�����#;�Y,G����GH���2P�{�PC+M����aE�W��'��6Y+s#[�AR���>�ob�6~&���g�����w���������)7d?|����}��9=�F��@��}�h~�]�B6��Z�����An����t��}���yC@>U����������Ieu~�B�rv�O	c���^}o{�j�V�v������nW���j�\�����z�����&�[�C�����C;��`5c�,�i��rO���>
+�>{�)�D��6���������Kc�x�>F�|
��l���F����z@�#F0��VB,9Z�x}Vi�$���hM��G��F��o����4ZP�x���e�r���.� ���)+�j^4_#j�����d�F���4���-�6�u�|�1��f
Z��Z��^`���a�"w���G[�Udf�G�!�
��T�^�m���l�'A������>��!�j'Xr��/�L��D�f���
����m������xMHXs�}N��+oz���Y/"�*���+�
��a����<rf��<Sb�����T�$�F*7����h�
��4�;4�Z�^h��F�����B���*5)<�5[���
a5S�q�2�����*�6\��v[ye/J6j�F��+���<{^��Ynk�R�"��o�V�9h����%�E��/�Om�AI�Y��v�d��C��^5m�M�~jAPz,$l�����Y�]Ti1�:-�IY�H[�q���rf��Dx�!��+m��X/+wN�@������zD�6���S�!8I5��dL�E����]��7��2\�x4���Y�����-+��6p,���n�2���"�c{��#L�\O>�����_5R�~�����\OdU����H�9rtR����$H��=!t'�i���PO $�=��'�U��}D�.0���u��j����h�Y}�W�����=�Q��}�EiniQ%qq���E���4X�����,pL���+��,�����s����t���}p!�f����D'k�ta:Y���,:Y���]�?����B<o�����P�E�Q�Y�l�x%��������_���s�Z�j�c-�F�F�;l��)s\)}�,�`Y�F�F�Y�q05|8�rTpA�s-�D��!�m�@D������`����R\���k��X����2�N}�?��
D��u����:���z`�#t��z(��x^8��|N?�Z�nU��a-v�+{�_�h1y]�S����L@|W�i��F�P�����0B��1-$�wd������$U�����?�W���Z���\W
k�5��Y]���������J�q�szu#�R3.j��=M_�u���tJ�q����y)��gO[v�����,������/\,����.���@���b����.�9[�.U��V������]�k��������-B��X���s�o���/;��"w�]�	�����!�����K8\��I��e+i�0gB���Umw�q���x�O���g�obA�:#��C'�p�m`����4��	�����&�k���d��������{/����u>W��
���2��U���������0-W9c��D=M%0I%f4���;��do�v��qC��zW�W��H��8��X��74�z�,�i���|�������]��X���x�=�J��n�� TKV���j�F?����n�i��_���=��{�������J����;����>#�\��{�A��V���J�a��x����6������$n�noU��(oN�;��y�1d������$��" �X�%����l=�Dd�&�n�1�xdD�T�ZZ����&r��/m������tNj����!:cH^h���U9����_B�i�A��5���J279	�j;�6�����~�����"�6�}��
vA�x�@?�V\�����T�(~1P^�����Gd�z�)R��0��^��<j��
�}����w�U�a�����~"N���rdqB~����yO\�z������
g���Z��H`��jd@���Z������Q��{Z=?y��v��~>���k}\����C*40v:�q4y/��ST�p�o�������;j������;X�ng��;�txX����G����9k��f���/��
�����������c	������c*(�#�q;�fu��,2�i������8����������Q�0&;���Y:�O��~��m	��|
��X�x�`��%	���CqRxs��n��3J[�a�(�C�����^��3��J�>����R�=�<���R:��*.k0��7tfi���8��'�YB���D�(7ba� 8����e�*\=��?�#�	��qH������
h\g�H�U>%�Z�������������J8H!�)��.����sb��
�������'�����������}�1�d��\���w0iW����^~���G����=���D�������4��U���}J���y�\+��7�3S�����]�9�����h>����!g(���	�{]���$�/.N�H�c��uH��d���	����q1��h�gTRx�T���B0���M��|X��vK8R���T���1C6��ys8��r����D6;f����/�IiD�s�Tn:5A>c27�f��t��2���'ts�hF
�zS��R�ar
`���d�p���=r�����"e
����Q=��4
o�d�������4<;MV��m��X�Z�u��W����[�B�.*=��i)������Y�0~�T�����ZW�\�������y0a�u���C9�r.��3�KH-dlA��zFSXd��%���?M�Y@��/ ���������h���\���C����*aJ�R�2�����)��:�k�S���RV:z�^	�jR*J�V���!(2�������g���hf��m���)�`w<Xd��H�U�)���.4p�O�4��f;����h���:��
"$��������\�I<h�]�c'"�$�uZ��:�����m�s��X�+�W
����(����1���f1:����L�P��?�_��@�!9���=9�i�����+
Yx�"�U�Y:*��G.g�v������R��C��SF~iR��q,+2@V��7�g����3t.W��PwE����!��}��<��eK������|�gm��L�Ur��B����[_�L����t�AL����T�/2u\%����#M/���Ye���F��8����.��LE��N�O��K��|�Vgg�+�N�w\���E��&�N��q@����6�?���B�
:���	���KxL(� ^@�;Nv9�)N��$�����ep
�>�	$�8����T��~��E�e	����	}����Q	�GFT�s�����1��03L�j���'��}�q���63��W��	�}���,B�j8�`�0�Z`@\q�T������4���U�q�F��.�����&�"���'���3�qE�q
�5�������������m������fS~�|����I�?$��������&���m���iC#���"�CM�A%�RR�����mY]�R����"�m��i��E1�k��3��3��;�x�w*�_i(z���\�S�),5)�!g&�U"3d��_���j���&}�K������M���N2 ���������F��t`���������S�����U�&���G�%j�[t�PP9a�dU�IL>�m��0���Vb'�������u�=J1���� ���������?&���zy�t�T��^��8(�>}�f�0��'�@��(���7���5��@��F+D�#�i��I�
�����������N�}���������J��$������R�s6�V��<B4�y)�-Q*�9�	�	����G�����X������������NY`w�	����%�O�x*��*!�ub�G�}�2��������W9b�ye7�����7�5v]r`l��f:��Zh�M=��2���o?6�p����s��p	���8��e�IF)��_=�]��#��3�{�Q}+����k��^7�z������m[�w�}�B��8f
~�� &5�'z�F�C��lBBM�B��o����q4]}0����8�a����Bs��������(���|���}��p���2��m���&;�K��*$^>Q0�������"��W������k�^���!����*���
_�t>f�'�|�M����}>i�5����k�/����~�?��W_���<�_����E�\D_"K����Hya�o��/�����?�z~��B�����>	��<���������Ma�x���b�8e�1B�L�������:��'I>����N`_q<|9�,kH��F#"�
3�����N������p3}������&�;m6���y�^��X�����y�����G?t���M��K?�|�yz�:j H��F�2��?c��#j��(�to��p�w�0��*"��sb|�����U���������y{�>(�&�4��WM�`_+rm��pl	l?�r�bJ ����k8����H�>������S��|���*�lp��P�o�������Z����.$dI�� ?b��48V'��b���4�����[B���?�V�bi�<�V��(���s�'\\�ivF����S��<m�<i"��*k����=���y��A^_�H��[	��S�Td�GUE`0x�8g�I3@��!�^E�q���zcu��&1��:���M�/H��� ��'��2��������F%x�"��j���*f��U���#�z;����D�4G����YpZ�R���|�G��w�,b+���B���Ob�,��~����"n���0��g��g�lJ�O<jKAQ��h>M4N(�X������n��Phf`buY��o�u\�����Z�*CX�u�S��u�i������('9E=8���^U;9t�&l"�JI/L�)��9Fc������;�������y�yW7���C^^������0v���wE#��\��d0T%A
��6��Z�=h|B��(�+�
�}���/��G��;��n��`6�u-��)��q�=v��F���`���N�M?Jx��
g�/�B���<N��=)�!�p���o~R���8�"w��J��F�����������\����}��o����"+,��8��h�#ko�s�|�x{ri[�qt>S�m|����;r�5�dq�y�p�/�y��9�WsG����'���Q�>{Z������d�\�t|���1��R=A/��<�M��39Z)'S��)��hl���-�-
Z�#Z6�)���$/z1����=��������8�����{RS����:y�u�J���A� ��������Yr�.�����y��>O�J��!,�B�? f��v��l?����jO��\������eUZl���������0�����Fj��mZ�"r��������P��P���,��<~ �~��gNzY�Ek��)���]h�a��]�E���E6P��[-�@S7y�~��Wa�9����j����w6�������1j��k7[��,h����"�|/!�������������5������
j�%i���W���Aqz��A].[����"e!:���}B��Q��[���;�Zo����^��t�(����y��}��I�9z�SE[���������v���q���:i�nu�h�@��L���*�����R|5�E%�l��db�q�'Q> �~��>��_��_�`�d!>��}jK�w
!�T�<�8{s~)�����ZGM�H�!�J@���W��c��r^��
]����t��-�
<��`��[�p��M��
�����1("�&*�o�����w�
���y+��k(�C
i�$LhN=�kF�Y,�f��O����kv��OEf���zWw��-��E�T��!_�����Qi�2L5'e!aPK+��.�E���	Wed�F����+}Jr]����f�%�:o�7A�iv/�..������I<�@-��&86�w1IWp�QA$��|�d@�8>�]�1���?Lf�������^����d��ZI���Z�\������%�������m���G��*N��E��G�{C����(���pl (d\.kwM8��>����$u<}
�����PD8�1}��>�1���C����_�^Y�@Q�)l��s�#P(-4=�����g� 4��(S���^�Z?���H����i�����
UoR�e)����Ngk��"h@�I��r�0���vF�b$YR����+R|��z�:>JU7*h�T�#S �.�#U��h�9d3��y$�#E=4`�C{�^\�d������`R���W|���o\4�����R�	�%�3�O��K���;�'3*����d���U3��(t
��#\MKX�K�iX9�F�>IU���'�>���&� p�8�����	1�c��T�5����#�M�<�3�hv���_����Q#(J������(����9&��]�hcnG�v�W;'�@2���`����9f�Cw)�l��5����B1��5����^�6�J�B1x
���2���$�fR-�'H��[
���Y�0���k"Z�L�n�
B>�M�Z#�l3��I����0UC�t�`�,���<a�b>��vzJB*�{0�� ��61� ��P	6uoFq�
*��z .��C8�"�(`6SS[X����0�b�C�'"U�.����uC�v��W��K-��&�����%�ff
���mNZIp�&�]agG��G��K���!"R~��������i�?u�Q�e`�L��=;Wyu,t���M�i�k�%�CQd	��h"�;����,�s�KJI|6�B��X�w�E�@�).����2T+��E)Ba(T�?��w��T����#�yIQ���!�Y!I��Bq�;���L�e6�2h�	��4s��d��f.����og7C�{��E�~�5�C=I�|��sb�h�i����=B;�������|_����H�����M\Q7�Cq:;�q�H��,�������\l��G�&��������m�j�n�i����hge�7��(��I<����.��@8���`@,�D)=�x*��x�W��y����n G�E}5����r�l!�����yNV��m	�G��$��3�J�HLh�5rllQ��n���������"����dz�k*gZp�XN�19��~��A�^�2���{��eC�l������9����z�s��k���(�]8��S�����qwNe���6�j6Ssz�;�_q��+��X�*d>8]�8�(0�V���Lh	��i�nE6��L��
�Dhu����-�+���o�=�����+4�	&��g�������q���9R�C$1Pt�������!c��������!�I�EP1�*c^�������X��?��������l��	�Q�Y�hD,���O�[�n��
�Ha0v�w�FZ����d�Cb:��H����hl���&��G�b�Z�1�.�Q�B���������>�z������lo'|V�y�[��6�����1_#Z�����$	'q�/�#�d��p���$d"�����T�o�m���(���:b&'����o������U�m�a@V�;����f���zD�v�Q�:L#�G�|�IUpz�����r;�._��9>L��>�H�=�����n�������o�t��[[WY��o��D�}�o���k�CR�>&�^4I8�<
���Q���������	�_�az<\�����������+F��
?�kr�N�^�_v�Nf2g�
�'�k�����I����>�.����0��_m�j{��������0;���3��12����1}O�6�ix�&��(	�����w���d�W.]����Sz�������<�GG\^�����m��&P��%���#���e�������7��E�wT���<�y�������p�4?�+!�����#^<��DT���0���1������=�f�"{m>lt��{�?���M���h������w���s��qM��'�����Na}�g��s��t
�J.}�v2���O���N��]�7�2���''���|��p�����:����S��f��;I�z��a3���}��=��15��9�O�#��X�Xw���?�#�Y}�������g�H�	=N������S{IP���t�Q�L�1����
�k�Ct��-��l��*}�h���yrv�C����k)�M������ :�9������A]��r�Y�E�(��J�>�|�8=>i��M ���/����E��[ z�"���6����]�]&�A������&������$�k��~��n�D*�������WM��a��_h%��yh���LQ���5
[��]�������/c�>��
[Attachment: v16-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch.gz (application/x-gzip)]
�����l1\���~c�1����h�f���~��.��Dr��q�y��$l��G`>��6��e��5�M9�j�k|=�+����/��3g���\�z�:k���I!/E6����K���=,�����|�;����y��pEw6�4{(�����&��^�tC��?4;��a/0y��]��
�{�eg ���h�^����
����z�d��l��"L�N"��?7>�s���b��|���`�Y����?�{F;�dN&R��s<�/2�LRJ{���w�����^T�Y<c����i	t/��H����R������&v
�,�|��Z��l�a���%���V*�F���6;�ul����k��"��@sT>tx�E��{�i�tB?y'4F�A���!y�1�m�VH�����/���!JZ��[�dn�D���B�l���O"x"�	�l1�3�%�Hl1`��K�G���*�5H��D	a���/
-*v.-0~'Ry;������'�n���%�J�1$��jykk��F�&}*+|���!&�Y���W:��gFyM�����������?�mz�F]�l�:_S
,��B��R�����c�\�������r���F�Bh���:s�eN�H�����D #|2���h�
W��[]�'I��>�MZ�KE�~u
w0Wi�J����v>&Dv������1���_�7`��f������ �u[��������n.�2���;����}��i���'�����_hziU�9�a[.U~I�-�����L����~��~9������$w��N�P������.�VS�5���/�Xv������7j����J�p������u��[�2�-�G0��Fw�JlU���H���Y�,:��e�h<�v��+�H��B��y��W�W�e$W�vL�+�|u.����,�a{�R:��f;������~�<l���
�����~YI�����/���9�����z��j��Q]Z%�sg4���/D�>�<��7y��Cb���ZV�y�`A`�`�Z�5��d�s��h��rf@:�UFb�P�R=b�A{~�
�-�)�1<�-^+���%J�-�o��Nr���x���u�;{{u%	b�L��3���9�d=L�"�������|����9�z{u��wdc����������s��<Y�H�-����/��?L#���3��p���?��9�I�o�P���|v,~.n�����������l���~��R��{~��W�.n��*���y�����(����l����c������P���S��R�ZzX�k�!��U��J�Zz0�k�����Q�j���D�D�D_\^^���1���>L�-���>��#g>�i��<�?���C�V'��%���4������RC�w_^�y�Mc�4]��HF���:Z�d�2�:6�~L����9`�\V�����d�������d0���<��I&!X��]-�c�*��O�J����
=mkp�|�2Ig|��������C������#��������0r����-k����"id4-�X(�7Q�w(V������6��7��xap���W�
Y�W����U�x#.��6���l{���5$�sC5��������z��.���/�|9���z��_oE�����q�&C���/Jp������������X��+���{�l��A�o7��}i�����D���?-��y��x��mOq���(�+'X��?-�ro��E'����N�����p�r���[/�{U�b�����h�y|�|�\��������*\�o����Znm�_�5���_�4z�N��>�e����b����|��t@��=���9����U[QHsjmm�7$R.�.���X�V�XfrlK],K����(.L�Mbg`A��%Q�zS����"��,���o�-�2;gv����&����yv�K7{�K:�kM�6�vXt���%�?�N�q*��w)�H���6�����aS�f�:�Q����"�����^��<��f���n��/n��U��je1f�����P�/��#8�
ZS�����E���[��_3�B��$z��X0
;&�+g�����xr
�����7�Z�~�5�Nz�z��W��r�l5MH?�
v16-0004-Introduce-OAuth-validator-libraries.patch.gzapplication/x-gzip; name=v16-0004-Introduce-OAuth-validator-libraries.patch.gzDownload
v16-0008-XXX-temporary-patches-to-build-and-test.patch.gzapplication/x-gzip; name=v16-0008-XXX-temporary-patches-to-build-and-test.patch.gzDownload
v16-0009-REVERT-temporarily-skip-the-exit-check.patch.gzapplication/x-gzip; name=v16-0009-REVERT-temporarily-skip-the-exit-check.patch.gzDownload
v16-0007-squash-Add-pytest-suite-for-OAuth.patch.gzapplication/x-gzip; name=v16-0007-squash-Add-pytest-suite-for-OAuth.patch.gzDownload
v16-0006-Add-pytest-suite-for-OAuth.patch.gzapplication/x-gzip; name=v16-0006-Add-pytest-suite-for-OAuth.patch.gzDownload
#88Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#1)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Feb 27, 2024 at 11:20 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

This is done in v17, which is also now based on the two patches pulled
out by Daniel in [1].

It looks like my patchset has been eaten by a malware scanner:

550 Message content failed content scanning
(Sanesecurity.Foxhole.Mail_gz.UNOFFICIAL)

Was there a recent change to the lists? Is anyone able to see what the
actual error was so I don't do it again?

Thanks,
--Jacob

#89Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#1)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

[Trying again, with all patches unzipped and the CC list temporarily
removed to avoid flooding people's inboxes. Original message follows.]

On Fri, Feb 23, 2024 at 5:01 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

The
patchset is now carrying a lot of squash-cruft, and I plan to flatten
it in the next version.

This is done in v17, which is also now based on the two patches pulled
out by Daniel in [1]. Besides the squashes, which make up most of the
range-diff, I've fixed a call to strncasecmp() which is not available
on Windows.

Daniel and I discussed trying a Python version of the test server,
since the standard library there should give us more goodies to work
with. A proof of concept is in 0009. I think the big question I have
for it is, how would we communicate what we want the server to do for
the test? (We could perhaps switch on magic values of the client ID?)
In the end I'd like to be testing close to 100% of the failure modes,
and that's likely to mean a lot of back-and-forth if the server
implementation isn't in the Perl process.
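[Editor's illustration, not part of the patchset: the "magic client ID"
idea above could look something like the sketch below, where a stub
authorization server picks a canned response based on the client_id the
test sends. All names here (BEHAVIORS, DeviceAuthHandler, the client IDs)
are hypothetical.]

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

# Magic client IDs map to the behavior the test wants to provoke:
# (HTTP status, JSON payload) returned from the device authorization endpoint.
BEHAVIORS = {
    "test-happy-path": (200, {"device_code": "abc",
                              "user_code": "FPQ2-M4BG",
                              "verification_uri": "https://example.org/login",
                              "interval": 1, "expires_in": 300}),
    "test-server-error": (500, {"error": "server_error"}),
    "test-slow-down": (400, {"error": "slow_down"}),
}

class DeviceAuthHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the form-encoded request body and pull out client_id.
        length = int(self.headers.get("Content-Length", 0))
        params = parse_qs(self.rfile.read(length).decode())
        client_id = params.get("client_id", [""])[0]

        # Unknown client IDs fall through to a standard OAuth error.
        status, payload = BEHAVIORS.get(client_id,
                                        (400, {"error": "invalid_client"}))
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet
```

This keeps all of the behavior selection on the client side of the test
(the magic ID travels in the request itself), so no out-of-band
communication with the server process is needed.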

--Jacob

[1]: /messages/by-id/flat/F51F8777-FAF5-49F2-BC5E-8F9EB423ECE0@yesql.se

Attachments:

since-v16.diff.txttext/plain; charset=US-ASCII; name=since-v16.diff.txtDownload
 1:  dc523009f2 =  1:  00976d4f75 common/jsonapi: support FRONTEND clients
 -:  ---------- >  2:  d8b567dd55 Refactor SASL exchange to return tri-state status
 -:  ---------- >  3:  83d78f598c Explicitly require password for SCRAM exchange
 2:  af969e6cea !  4:  00c8073807 libpq: add OAUTHBEARER SASL mechanism
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +#endif							/* FE_AUTH_OAUTH_H */
     
      ## src/interfaces/libpq/fe-auth-sasl.h ##
    -@@
    - 
    - #include "libpq-fe.h"
    - 
    -+/* See pg_fe_sasl_mech.exchange(). */
    -+typedef enum
    -+{
    -+	SASL_COMPLETE,
    -+	SASL_FAILED,
    -+	SASL_CONTINUE,
    +@@ src/interfaces/libpq/fe-auth-sasl.h: typedef enum
    + 	SASL_COMPLETE = 0,
    + 	SASL_FAILED,
    + 	SASL_CONTINUE,
     +	SASL_ASYNC,
    -+} SASLStatus;
    -+
    + } SASLStatus;
    + 
      /*
    -  * Frontend SASL mechanism callbacks.
    -  *
     @@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
    - 	 * server response once at the start of the authentication exchange to
    - 	 * generate an initial response.
    - 	 *
    -+	 * Returns a SASLStatus:
    -+	 *
    -+	 *  SASL_CONTINUE: The output buffer is filled with a client response. An
    -+	 *				   additional server challenge is expected.
    -+	 *
    -+	 *  SASL_ASYNC:	   Some asynchronous processing external to the connection
    -+	 *				   needs to be done before a response can be generated. The
    -+	 *				   mechanism is responsible for setting up conn->async_auth
    -+	 *				   appropriately before returning.
    -+	 *
    -+	 *  SASL_COMPLETE: The SASL exchange has completed successfully.
    -+	 *
    -+	 *  SASL_FAILED:   The exchange has failed and the connection should be
    -+	 *				   dropped.
    -+	 *
    - 	 * Input parameters:
      	 *
      	 *	state:	   The opaque mechanism state returned by init()
      	 *
    @@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
      	 *			   the server expects the client to send a message to start
     @@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
      	 *
    - 	 *	output:	   A malloc'd buffer containing the client's response to
    - 	 *			   the server (can be empty), or NULL if the exchange should
    --	 *			   be aborted.  (*success should be set to false in the
    -+	 *			   be aborted.  (The callback should return SASL_FAILED in the
    - 	 *			   latter case.)
    - 	 *
    - 	 *	outputlen: The length (0 or higher) of the client response buffer,
    - 	 *			   ignored if output is NULL.
    --	 *
    --	 *	done:      Set to true if the SASL exchange should not continue,
    --	 *			   because the exchange is either complete or failed
    --	 *
    --	 *	success:   Set to true if the SASL exchange completed successfully.
    --	 *			   Ignored if *done is false.
    + 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
    + 	 *					Additional server challenge is expected
    ++	 *	SASL_ASYNC:		Some asynchronous processing external to the
    ++	 *					connection needs to be done before a response can be
    ++	 *					generated. The mechanism is responsible for setting up
    ++	 *					conn->async_auth appropriately before returning.
    + 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
    +-	 *	SASL_FAILED:	The exchance has failed and the connection should be
    ++	 *	SASL_FAILED:	The exchange has failed and the connection should be
    + 	 *					dropped.
      	 *--------
      	 */
    --	void		(*exchange) (void *state, char *input, int inputlen,
    --							 char **output, int *outputlen,
    --							 bool *done, bool *success);
    +-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
     +	SASLStatus	(*exchange) (void *state, bool final,
     +							 char *input, int inputlen,
    -+							 char **output, int *outputlen);
    + 							 char **output, int *outputlen);
      
      	/*--------
    - 	 * channel_bound()
     
      ## src/interfaces/libpq/fe-auth-scram.c ##
     @@
      /* The exported SCRAM callback mechanism. */
      static void *scram_init(PGconn *conn, const char *password,
      						const char *sasl_mechanism);
    --static void scram_exchange(void *opaq, char *input, int inputlen,
    --						   char **output, int *outputlen,
    --						   bool *done, bool *success);
    +-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
     +static SASLStatus scram_exchange(void *opaq, bool final,
     +								 char *input, int inputlen,
    -+								 char **output, int *outputlen);
    + 								 char **output, int *outputlen);
      static bool scram_channel_bound(void *opaq);
      static void scram_free(void *opaq);
    - 
     @@ src/interfaces/libpq/fe-auth-scram.c: scram_free(void *opaq)
    - /*
       * Exchange a SCRAM message with backend.
       */
    --static void
    + static SASLStatus
     -scram_exchange(void *opaq, char *input, int inputlen,
    --			   char **output, int *outputlen,
    --			   bool *done, bool *success)
    -+static SASLStatus
     +scram_exchange(void *opaq, bool final,
     +			   char *input, int inputlen,
    -+			   char **output, int *outputlen)
    + 			   char **output, int *outputlen)
      {
      	fe_scram_state *state = (fe_scram_state *) opaq;
    - 	PGconn	   *conn = state->conn;
    - 	const char *errstr = NULL;
    - 
    --	*done = false;
    --	*success = false;
    - 	*output = NULL;
    - 	*outputlen = 0;
    - 
    -@@ src/interfaces/libpq/fe-auth-scram.c: scram_exchange(void *opaq, char *input, int inputlen,
    - 		if (inputlen == 0)
    - 		{
    - 			libpq_append_conn_error(conn, "malformed SCRAM message (empty message)");
    --			goto error;
    -+			return SASL_FAILED;
    - 		}
    - 		if (inputlen != strlen(input))
    - 		{
    - 			libpq_append_conn_error(conn, "malformed SCRAM message (length mismatch)");
    --			goto error;
    -+			return SASL_FAILED;
    - 		}
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-scram.c: scram_exchange(void *opaq, char *input, int inputlen,
    - 			/* Begin the SCRAM handshake, by sending client nonce */
    - 			*output = build_client_first_message(state);
    - 			if (*output == NULL)
    --				goto error;
    -+				return SASL_FAILED;
    - 
    - 			*outputlen = strlen(*output);
    --			*done = false;
    - 			state->state = FE_SCRAM_NONCE_SENT;
    --			break;
    -+			return SASL_CONTINUE;
    - 
    - 		case FE_SCRAM_NONCE_SENT:
    - 			/* Receive salt and server nonce, send response. */
    - 			if (!read_server_first_message(state, input))
    --				goto error;
    -+				return SASL_FAILED;
    - 
    - 			*output = build_client_final_message(state);
    - 			if (*output == NULL)
    --				goto error;
    -+				return SASL_FAILED;
    - 
    - 			*outputlen = strlen(*output);
    --			*done = false;
    - 			state->state = FE_SCRAM_PROOF_SENT;
    --			break;
    -+			return SASL_CONTINUE;
    - 
    - 		case FE_SCRAM_PROOF_SENT:
    --			/* Receive server signature */
    --			if (!read_server_final_message(state, input))
    --				goto error;
    --
    --			/*
    --			 * Verify server signature, to make sure we're talking to the
    --			 * genuine server.
    --			 */
    --			if (!verify_server_signature(state, success, &errstr))
    --			{
    --				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
    --				goto error;
    --			}
    --
    --			if (!*success)
    - 			{
    --				libpq_append_conn_error(conn, "incorrect server signature");
    -+				bool		match;
    -+
    -+				/* Receive server signature */
    -+				if (!read_server_final_message(state, input))
    -+					return SASL_FAILED;
    -+
    -+				/*
    -+				 * Verify server signature, to make sure we're talking to the
    -+				 * genuine server.
    -+				 */
    -+				if (!verify_server_signature(state, &match, &errstr))
    -+				{
    -+					libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
    -+					return SASL_FAILED;
    -+				}
    -+
    -+				if (!match)
    -+					libpq_append_conn_error(conn, "incorrect server signature");
    -+
    -+				state->state = FE_SCRAM_FINISHED;
    -+				state->conn->client_finished_auth = true;
    -+				return match ? SASL_COMPLETE : SASL_FAILED;
    - 			}
    --			*done = true;
    --			state->state = FE_SCRAM_FINISHED;
    --			state->conn->client_finished_auth = true;
    --			break;
    - 
    - 		default:
    - 			/* shouldn't happen */
    - 			libpq_append_conn_error(conn, "invalid SCRAM exchange state");
    --			goto error;
    -+			break;
    - 	}
    --	return;
    - 
    --error:
    --	*done = true;
    --	*success = false;
    -+	return SASL_FAILED;
    - }
    - 
    - /*
     
      ## src/interfaces/libpq/fe-auth.c ##
     @@
    @@ src/interfaces/libpq/fe-auth.c: pg_SSPI_startup(PGconn *conn, int use_negotiate,
      {
      	char	   *initialresponse = NULL;
      	int			initialresponselen;
    --	bool		done;
    --	bool		success;
      	const char *selected_mechanism;
      	PQExpBufferData mechanism_buf;
     -	char	   *password;
     +	char	   *password = NULL;
    -+	SASLStatus	status;
    + 	SASLStatus	status;
      
      	initPQExpBuffer(&mechanism_buf);
    - 
     @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      		goto error;
      	}
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      		libpq_append_conn_error(conn, "duplicate SASL authentication request");
      		goto error;
     @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    - 	/*
    - 	 * Parse the list of SASL authentication mechanisms in the
    - 	 * AuthenticationSASL message, and select the best mechanism that we
    --	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
    --	 * supported at the moment, listed by order of decreasing importance.
    -+	 * support.  Mechanisms are listed by order of decreasing importance.
    - 	 */
    - 	selected_mechanism = NULL;
    - 	for (;;)
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    - 				{
    - 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
    - 					conn->sasl = &pg_scram_mech;
    -+					conn->password_needed = true;
    - 				}
    - #else
    - 				/*
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    - 		{
    - 			selected_mechanism = SCRAM_SHA_256_NAME;
      			conn->sasl = &pg_scram_mech;
    -+			conn->password_needed = true;
    + 			conn->password_needed = true;
      		}
     +#ifdef USE_OAUTH
     +		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      	if (!selected_mechanism)
     @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      
    - 	/*
    - 	 * First, select the password to use for the exchange, complaining if
    --	 * there isn't one.  Currently, all supported SASL mechanisms require a
    --	 * password, so we can just go ahead here without further distinction.
    -+	 * there isn't one and the SASL mechanism needs it.
    - 	 */
    --	conn->password_needed = true;
    --	password = conn->connhost[conn->whichhost].password;
    --	if (password == NULL)
    --		password = conn->pgpass;
    --	if (password == NULL || password[0] == '\0')
    -+	if (conn->password_needed)
    - 	{
    --		appendPQExpBufferStr(&conn->errorMessage,
    --							 PQnoPasswordSupplied);
    --		goto error;
    -+		password = conn->connhost[conn->whichhost].password;
    -+		if (password == NULL)
    -+			password = conn->pgpass;
    -+		if (password == NULL || password[0] == '\0')
    -+		{
    -+			appendPQExpBufferStr(&conn->errorMessage,
    -+								 PQnoPasswordSupplied);
    -+			goto error;
    -+		}
    - 	}
    - 
      	Assert(conn->sasl);
      
     -	/*
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
     +	}
      
      	/* Get the mechanism-specific Initial Client Response, if any */
    --	conn->sasl->exchange(conn->sasl_state,
    --						 NULL, -1,
    --						 &initialresponse, &initialresponselen,
    --						 &done, &success);
    +-	status = conn->sasl->exchange(conn->sasl_state,
     +	status = conn->sasl->exchange(conn->sasl_state, false,
    -+								  NULL, -1,
    -+								  &initialresponse, &initialresponselen);
    + 								  NULL, -1,
    + 								  &initialresponse, &initialresponselen);
      
    --	if (done && !success)
    -+	if (status == SASL_FAILED)
    + 	if (status == SASL_FAILED)
      		goto error;
      
     +	if (status == SASL_ASYNC)
    @@ src/interfaces/libpq/fe-auth.c: oom_error:
      {
      	char	   *output;
      	int			outputlen;
    --	bool		done;
    --	bool		success;
    - 	int			res;
    - 	char	   *challenge;
    -+	SASLStatus	status;
    - 
    - 	/* Read the SASL challenge from the AuthenticationSASLContinue message. */
    - 	challenge = malloc(payloadlen + 1);
     @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
      	/* For safety and convenience, ensure the buffer is NULL-terminated. */
      	challenge[payloadlen] = '\0';
      
    --	conn->sasl->exchange(conn->sasl_state,
    --						 challenge, payloadlen,
    --						 &output, &outputlen,
    --						 &done, &success);
    +-	status = conn->sasl->exchange(conn->sasl_state,
     +	status = conn->sasl->exchange(conn->sasl_state, final,
    -+								  challenge, payloadlen,
    -+								  &output, &outputlen);
    + 								  challenge, payloadlen,
    + 								  &output, &outputlen);
      	free(challenge);			/* don't need the input anymore */
      
    --	if (final && !done)
     +	if (status == SASL_ASYNC)
     +	{
     +		/*
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, b
     +		return STATUS_OK;
     +	}
     +
    -+	if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))
    + 	if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))
      	{
      		if (outputlen != 0)
    - 			free(output);
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
    - 	 * If the exchange is not completed yet, we need to make sure that the
    - 	 * SASL mechanism has generated a message to send back.
    - 	 */
    --	if (output == NULL && !done)
    -+	if (output == NULL && status == SASL_CONTINUE)
    - 	{
    - 		libpq_append_conn_error(conn, "no client response found after SASL exchange success");
    - 		return STATUS_ERROR;
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
    - 			return STATUS_ERROR;
    - 	}
    - 
    --	if (done && !success)
    -+	if (status == SASL_FAILED)
    - 		return STATUS_ERROR;
    - 
    - 	return STATUS_OK;
     @@ src/interfaces/libpq/fe-auth.c: check_expected_areq(AuthRequest areq, PGconn *conn)
       * it. We are responsible for reading any remaining extra data, specific
       * to the authentication method. 'payloadlen' is the remaining length in
    @@ src/tools/pgindent/typedefs.list: PQArgBlock
      PQsslKeyPassHook_OpenSSL_type
      PREDICATELOCK
      PREDICATELOCKTAG
    -@@ src/tools/pgindent/typedefs.list: RuleLock
    - RuleStmt
    - RunningTransactions
    - RunningTransactionsData
    -+SASLStatus
    - SC_HANDLE
    - SECURITY_ATTRIBUTES
    - SECURITY_STATUS
     @@ src/tools/pgindent/typedefs.list: explain_get_index_name_hook_type
      f_smgr
      fasthash_state
 3:  8906c9d445 !  5:  29d7e3cbed backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/libpq/auth-oauth.c (new)
     +	 *
     +	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
     +	 */
    -+	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
    ++	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
     +		return false;
     +
     +	/* Pull the bearer token out of the auth value. */
 4:  e2566ab594 !  6:  0661817808 Introduce OAuth validator libraries
    @@ Commit message
         for loading in extensions for validating bearer tokens. A lot of
         code is left to be written.
     
    +    Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
    +
      ## src/backend/libpq/auth-oauth.c ##
     @@
       * See the following RFC for more details:
    @@ src/backend/libpq/auth-oauth.c: generate_error_response(struct oauth_ctx *ctx, c
      	*outputlen = buf.len;
      }
      
    +-static bool
    +-validate(Port *port, const char *auth, const char **logdetail)
     +/*-----
    ++ * Validates the provided Authorization header and returns the token from
    ++ * within it. NULL is returned on validation failure.
    ++ *
     + * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
     + * 2.1:
     + *
    @@ src/backend/libpq/auth-oauth.c: generate_error_response(struct oauth_ctx *ctx, c
     + *
     + * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
     + */
    - static bool
    --validate(Port *port, const char *auth, const char **logdetail)
    ++static const char *
     +validate_token_format(const char *header)
      {
     -	static const char *const b64_set =
    @@ src/backend/libpq/auth-oauth.c: generate_error_response(struct oauth_ctx *ctx, c
     -	const char *token;
     -	size_t		span;
     -	int			ret;
    --
    --	/* TODO: handle logdetail when the test framework can check it */
     +	/* If the token is empty or simply too short to be correct */
     +	if (!header || strlen(header) <= 7)
     +	{
     +		ereport(COMMERROR,
     +				(errmsg("malformed OAuth bearer token 1")));
    -+		return false;
    ++		return NULL;
     +	}
      
    +-	/* TODO: handle logdetail when the test framework can check it */
    +-
     -	/*-----
     -	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
     -	 * 2.1:
    @@ src/backend/libpq/auth-oauth.c: generate_error_response(struct oauth_ctx *ctx, c
     -	 *
     -	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
     -	 */
    --	if (strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
    -+	if (strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
    - 		return false;
    +-	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
    +-		return false;
    ++	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
    ++		return NULL;
      
      	/* Pull the bearer token out of the auth value. */
     -	token = auth + strlen(BEARER_SCHEME);
    @@ src/backend/libpq/auth-oauth.c: generate_error_response(struct oauth_ctx *ctx, c
     -				 errmsg("malformed OAUTHBEARER message"),
     +				 errmsg("malformed OAuth bearer token 2"),
      				 errdetail("Bearer token is empty.")));
    - 		return false;
    +-		return false;
    ++		return NULL;
      	}
    -@@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const char **logdetail)
    + 
    + 	/*
      	 * Make sure the token contains only allowed characters. Tokens may end
      	 * with any number of '=' characters.
      	 */
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -				 errmsg("malformed OAUTHBEARER message"),
     +				 errmsg("malformed OAuth bearer token 3"),
      				 errdetail("Bearer token is not in the correct format.")));
    - 		return false;
    +-		return false;
    ++		return NULL;
      	}
      
     -	/* Have the validator check the token. */
     -	if (!run_validator_command(port, token))
    -+	return true;
    ++	return token;
     +}
     +
     +static bool
     +validate(Port *port, const char *auth)
     +{
    -+	int map_status;
    -+	ValidatorModuleResult	*ret;
    ++	int			map_status;
    ++	ValidatorModuleResult *ret;
    ++	const char *token;
     +
     +	/* Ensure that we have a correct token to validate */
    -+	if (!validate_token_format(auth))
    -+		return false;
    -+
    ++	if (!(token = validate_token_format(auth)))
    + 		return false;
    + 
     +	/* Call the validation function from the validator module */
     +	ret = ValidatorCallbacks->validate_cb(validator_module_state,
    -+										  auth + strlen(BEARER_SCHEME),
    -+										  port->user_name);
    ++										  token, port->user_name);
     +
     +	if (!ret->authenticated)
    - 		return false;
    - 
    ++		return false;
    ++
    ++	if (ret->authn_id)
    ++		set_authn_id(port, ret->authn_id);
    ++
      	if (port->hba->oauth_skip_usermap)
    + 	{
    + 		/*
     @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const char **logdetail)
      	}
      
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
      		/* TODO: use logdetail; reduce message duplication */
      		ereport(LOG,
     @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const char **logdetail)
    - 		return false;
      	}
      
    -+	/* Set the authenticated identity from the validator module */
    -+	set_authn_id(port, ret->authn_id);
    -+
      	/* Finally, check the user map. */
     -	ret = check_usermap(port->hba->usermap, port->user_name,
    -+	map_status = check_usermap(port->hba->usermap, port->user_name,
    - 						MyClientConnectionInfo.authn_id, false);
    +-						MyClientConnectionInfo.authn_id, false);
     -	return (ret == STATUS_OK);
    ++	map_status = check_usermap(port->hba->usermap, port->user_name,
    ++							   MyClientConnectionInfo.authn_id, false);
     +	return (map_status == STATUS_OK);
      }
      
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     +load_validator_library(void)
      {
     -	int			rc;
    --
    ++	OAuthValidatorModuleInit validator_init;
    + 
     -	rc = ClosePipeStream(*fh);
     -	*fh = NULL;
    -+	OAuthValidatorModuleInit	validator_init;
    - 
    +-
     -	if (rc == -1)
     -	{
     -		/* pclose() itself failed. */
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -	else if (rc != 0)
     -	{
     -		char	   *reason = wait_result_to_str(rc);
    +-
    +-		ereport(COMMERROR,
    +-				(errmsg("failed to execute command \"%s\": %s",
    +-						command, reason)));
    +-
    +-		pfree(reason);
    +-	}
     +	if (OAuthValidatorLibrary[0] == '\0')
     +		ereport(ERROR,
     +				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
     +				 errmsg("oauth_validator_library is not set")));
      
    --		ereport(COMMERROR,
    --				(errmsg("failed to execute command \"%s\": %s",
    --						command, reason)));
    +-	return (rc == 0);
    +-}
     +	validator_init = (OAuthValidatorModuleInit)
     +		load_external_function(OAuthValidatorLibrary,
     +							   "_PG_oauth_validator_module_init", false, NULL);
      
    --		pfree(reason);
    --	}
    --
    --	return (rc == 0);
    --}
    -+	if (validator_init == NULL)
    -+		ereport(ERROR,
    -+				(errmsg("%s module \"%s\" have to define the symbol %s",
    -+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
    - 
     -static bool
     -set_cloexec(int fd)
     -{
     -	int			flags;
     -	int			rc;
    -+	ValidatorCallbacks = (*validator_init) ();
    ++	if (validator_init == NULL)
    ++		ereport(ERROR,
    ++				(errmsg("%s module \"%s\" have to define the symbol %s",
    ++						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
      
     -	flags = fcntl(fd, F_GETFD);
     -	if (flags == -1)
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -				 errmsg("could not get fd flags for child pipe: %m")));
     -		return false;
     -	}
    -+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
    -+	if (ValidatorCallbacks->startup_cb != NULL)
    -+		ValidatorCallbacks->startup_cb(validator_module_state);
    ++	ValidatorCallbacks = (*validator_init) ();
      
     -	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
     -	if (rc < 0)
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
     -		return false;
     -	}
    --
    ++	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
    ++	if (ValidatorCallbacks->startup_cb != NULL)
    ++		ValidatorCallbacks->startup_cb(validator_module_state);
    + 
     -	return true;
     +	before_shmem_exit(shutdown_validator_library, 0);
      }
    @@ src/include/libpq/oauth.h
      /* Implementation */
      extern const pg_be_sasl_mech pg_be_oauth_mech;
     
    + ## src/test/modules/meson.build ##
    +@@ src/test/modules/meson.build: subdir('gin')
    + subdir('injection_points')
    + subdir('ldap_password_func')
    + subdir('libpq_pipeline')
    ++subdir('oauth_validator')
    + subdir('plsample')
    + subdir('spgist_name_ops')
    + subdir('ssl_passphrase_callback')
    +
      ## src/test/modules/oauth_validator/.gitignore (new) ##
     @@
     +# Generated subdirectories
    @@ src/test/modules/oauth_validator/expected/validator.out (new)
     +(1 row)
     +
     
    + ## src/test/modules/oauth_validator/meson.build (new) ##
    +@@
    ++# Copyright (c) 2024, PostgreSQL Global Development Group
    ++
    ++validator_sources = files(
    ++  'validator.c',
    ++)
    ++
    ++if host_system == 'windows'
    ++  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
    ++    '--NAME', 'validator',
    ++    '--FILEDESC', 'validator - test OAuth validator module',])
    ++endif
    ++
    ++validator = shared_module('validator',
    ++  validator_sources,
    ++  kwargs: pg_test_mod_args,
    ++)
    ++test_install_libs += validator
    ++
    ++tests += {
    ++  'name': 'oauth_validator',
    ++  'sd': meson.current_source_dir(),
    ++  'bd': meson.current_build_dir(),
    ++  'regress': {
    ++    'sql': [
    ++      'validator',
    ++    ],
    ++  },
    ++  'tap': {
    ++    'tests': [
    ++      't/001_server.pl',
    ++    ],
    ++  },
    ++}
    +
      ## src/test/modules/oauth_validator/sql/validator.sql (new) ##
     @@
     +SELECT 1;
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
     +$node->start;
     +
    -+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1" scope="openid postgres"');
    ++reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
     +
    -+my $webserver = PostgreSQL::Test::OAuthServer->new(80);
    ++my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
     +
     +my $port = $webserver->port();
     +
    -+is($port, 80, "Port is 80");
    ++is($port, 18080, "Port is 18080");
     +
     +$webserver->setup();
     +$webserver->run();
     +
    -+$node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect");
    ++$node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
    ++				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
     +$node->stop;
     +
    @@ src/test/modules/oauth_validator/validator.c (new)
     +
     +static void validator_startup(ValidatorModuleState *state);
     +static void validator_shutdown(ValidatorModuleState *state);
    -+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
    -+											 const char *token,
    -+											 const char *role);
    ++static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
    ++											  const char *token,
    ++											  const char *role);
     +
     +static const OAuthValidatorCallbacks validator_callbacks = {
     +	.startup_cb = validator_startup,
    @@ src/test/modules/oauth_validator/validator.c (new)
     +	return res;
     +}
     
    + ## src/test/perl/PostgreSQL/Test/Cluster.pm ##
    +@@ src/test/perl/PostgreSQL/Test/Cluster.pm: instead of the default.
    + 
    + If this regular expression is set, matches it with the output generated.
    + 
    ++=item expected_stderr => B<value>
    ++
    ++If this regular expression is set, matches it against the standard error
    ++stream; otherwise the stderr must be empty.
    ++
    + =item log_like => [ qr/required message/ ]
    + 
    + =item log_unlike => [ qr/prohibited message/ ]
    +@@ src/test/perl/PostgreSQL/Test/Cluster.pm: sub connect_ok
    + 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
    + 	}
    + 
    +-	is($stderr, "", "$test_name: no stderr");
    ++	if (defined($params{expected_stderr}))
    ++	{
    ++		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
    ++	}
    ++	else
    ++	{
    ++		is($stderr, "", "$test_name: no stderr");
    ++	}
    + 
    + 	$self->log_check($test_name, $log_location, %params);
    + }
    +
      ## src/test/perl/PostgreSQL/Test/OAuthServer.pm (new) ##
     @@
     +#!/usr/bin/perl
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +			print $fh "\r\n";
     +			print $fh <<EOR;
     +			{
    -+				"issuer": "http://localhost:80",
    -+				"token_endpoint": "http://localhost:80/token",
    -+				"device_authorization_endpoint": "http://localhost:80/authorize",
    ++				"issuer": "http://localhost:$self->{'port'}",
    ++				"token_endpoint": "http://localhost:$self->{'port'}/token",
    ++				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
     +				"response_types_supported": ["token"],
     +				"subject_types_supported": ["public"],
     +				"id_token_signing_alg_values_supported": ["RS256"],
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +}
     +
     +1;
    +
    + ## src/tools/pgindent/typedefs.list ##
    +@@ src/tools/pgindent/typedefs.list: NumericSortSupport
    + NumericSumAccum
    + NumericVar
    + OAuthStep
    ++OAuthValidatorCallbacks
    + OM_uint32
    + OP
    + OSAPerGroupState
    +@@ src/tools/pgindent/typedefs.list: VacuumRelation
    + VacuumStmt
    + ValidIOData
    + ValidateIndexState
    ++ValidatorModuleState
    + ValuesScan
    + ValuesScanState
    + Var
 5:  26781a7f15 <  -:  ---------- squash! Introduce OAuth validator libraries
 6:  295de92a5a !  7:  45755e8461 Add pytest suite for OAuth
    @@ Metadata
      ## Commit message ##
         Add pytest suite for OAuth
     
    -    Requires Python 3; on the first run of `make installcheck` the
    -    dependencies will be installed into ./venv for you. See the README for
    -    more details.
    +    Requires Python 3. On the first run of `make installcheck` or `meson
    +    test` the dependencies will be installed into a local virtualenv for
    +    you. See the README for more details.
    +
    +    Cirrus has been updated to build OAuth support on Debian and FreeBSD.
    +
    +    The suite contains a --temp-instance option, analogous to pg_regress's
    +    option of the same name, which allows an ephemeral server to be spun up
    +    during a test run.
     
         For iddawc, asynchronous tests still hang, as expected. Bad-interval
         tests fail because iddawc apparently doesn't care that the interval is
         bad.
     
    +    TODOs:
    +    - The --tap-stream option to pytest-tap is slightly broken during test
    +      failures (it suppresses error information), which impedes debugging.
    +    - Unsurprisingly, Windows builds fail on the Linux-/BSD-specific backend
    +      changes. 32-bit builds on Ubuntu fail during testing as well.
    +    - pyca/cryptography is pinned at an old version. Since we use it for
    +      testing and not security, this isn't a critical problem yet, but it's
    +      not ideal. Newer versions require a Rust compiler to build, and while
    +      many platforms have precompiled wheels, some (FreeBSD) do not. Even
    +      with the Rust pieces bypassed, compilation on FreeBSD takes a while.
    +    - The with_oauth test skip logic should probably be integrated into the
    +      Makefile side as well...
    +
    + ## .cirrus.tasks.yml ##
    +@@ .cirrus.tasks.yml: env:
    +   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
    +   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
    +   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
    +-  PG_TEST_EXTRA: kerberos ldap ssl load_balance
    ++  PG_TEST_EXTRA: kerberos ldap ssl load_balance python
    + 
    + 
    + # What files to preserve in case tests fail
    +@@ .cirrus.tasks.yml: task:
    +     chown root:postgres /tmp/cores
    +     sysctl kern.corefile='/tmp/cores/%N.%P.core'
    +   setup_additional_packages_script: |
    +-    #pkg install -y ...
    ++    pkg install -y curl
    + 
    +   # NB: Intentionally build without -Dllvm. The freebsd image size is already
    +   # large enough to make VM startup slow, and even without llvm freebsd
    +@@ .cirrus.tasks.yml: task:
    +         -Dcassert=true -Dinjection_points=true \
    +         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
    +         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    ++        -Doauth=curl \
    +         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
    +         build
    +     EOF
    +@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    +   --with-libxslt
    +   --with-llvm
    +   --with-lz4
    ++  --with-oauth=curl
    +   --with-pam
    +   --with-perl
    +   --with-python
    +@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    + 
    + LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
    +   -Dllvm=enabled
    ++  -Doauth=curl
    +   -Duuid=e2fs
    + 
    + 
    +@@ .cirrus.tasks.yml: task:
    +     EOF
    + 
    +   setup_additional_packages_script: |
    +-    #apt-get update
    +-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
    ++    apt-get update
    ++    DEBIAN_FRONTEND=noninteractive apt-get -y install \
    ++      libcurl4-openssl-dev \
    ++      libcurl4-openssl-dev:i386 \
    ++      python3-venv \
    + 
    +   matrix:
    +     - name: Linux - Debian Bullseye - Autoconf
    +@@ .cirrus.tasks.yml: task:
    +     folder: $CCACHE_DIR
    + 
    +   setup_additional_packages_script: |
    +-    #apt-get update
    +-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
    ++    apt-get update
    ++    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
    + 
    +   ###
    +   # Test that code can be built with gcc/clang without warnings
    +
    + ## meson.build ##
    +@@ meson.build: else
    + endif
    + 
    + testwrap = files('src/tools/testwrap')
    ++make_venv = files('src/tools/make_venv')
    ++
    ++checked_working_venv = false
    + 
    + foreach test_dir : tests
    +   testwrap_base = [
    +@@ meson.build: foreach test_dir : tests
    +         )
    +       endforeach
    +       install_suites += test_group
    ++    elif kind == 'pytest'
    ++      venv_name = test_dir['name'] + '_venv'
    ++      venv_path = meson.build_root() / venv_name
    ++
    ++      # The Python tests require a working venv module. This is part of the
    ++      # standard library, but some platforms disable it until a separate package
    ++      # is installed. Those same platforms don't provide an easy way to check
    ++      # whether the venv command will work until the first time you try it, so
    ++      # we decide whether or not to enable these tests on the fly.
    ++      if not checked_working_venv
    ++        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
    ++
    ++        have_working_venv = (cmd.returncode() == 0)
    ++        if not have_working_venv
    ++          warning('A working Python venv module is required to run Python tests.')
    ++        endif
    ++
    ++        checked_working_venv = true
    ++      endif
    ++
    ++      if not have_working_venv
    ++        continue
    ++      endif
    ++
    ++      # Make sure the temporary installation is in PATH (necessary both for
    ++      # --temp-instance and for any pip modules compiling against libpq, like
    ++      # psycopg2).
    ++      env = test_env
    ++      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
    ++
    ++      foreach name, value : t.get('env', {})
    ++        env.set(name, value)
    ++      endforeach
    ++
    ++      reqs = files(t['requirements'])
    ++      test('install_' + venv_name,
    ++        python,
    ++        args: [ make_venv, '--requirements', reqs, venv_path ],
    ++        env: env,
    ++        priority: setup_tests_priority - 1,  # must run after tmp_install
    ++        is_parallel: false,
    ++        suite: ['setup'],
    ++        timeout: 60,  # 30s is too short for the cryptography package compile
    ++      )
    ++
    ++      test_group = test_dir['name']
    ++      test_output = test_result_dir / test_group / kind
    ++      test_kwargs = {
    ++        #'protocol': 'tap',
    ++        'suite': test_group,
    ++        'timeout': 1000,
    ++        'depends': test_deps,
    ++        'env': env,
    ++      }
    ++
    ++      pytest = venv_path / 'bin' / 'py.test'
    ++      test_command = [
    ++        pytest,
    ++        # Avoid running these tests against an existing database.
    ++        '--temp-instance', test_output / 'data',
    ++
    ++        # FIXME pytest-tap's stream feature accidentally suppresses errors that
    ++        # are critical for debugging:
    ++        #     https://github.com/python-tap/pytest-tap/issues/30
    ++        # Don't use the meson TAP protocol for now...
    ++        #'--tap-stream',
    ++      ]
    ++
    ++      foreach pyt : t['tests']
    ++        # Similarly to TAP, strip ./ and .py to make the names prettier
    ++        pyt_p = pyt
    ++        if pyt_p.startswith('./')
    ++          pyt_p = pyt_p.split('./')[1]
    ++        endif
    ++        if pyt_p.endswith('.py')
    ++          pyt_p = fs.stem(pyt_p)
    ++        endif
    ++
    ++        test(test_group / pyt_p,
    ++          python,
    ++          kwargs: test_kwargs,
    ++          args: testwrap_base + [
    ++            '--testgroup', test_group,
    ++            '--testname', pyt_p,
    ++            '--', test_command,
    ++            test_dir['sd'] / pyt,
    ++          ],
    ++        )
    ++      endforeach
    ++      install_suites += test_group
    +     else
    +       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
    +     endif
    +
    + ## src/test/meson.build ##
    +@@ src/test/meson.build: subdir('authentication')
    + subdir('recovery')
    + subdir('subscription')
    + subdir('modules')
    ++subdir('python')
    + 
    + if ssl.found()
    +   subdir('ssl')
    +
      ## src/test/python/.gitignore (new) ##
     @@
     +__pycache__/
    @@ src/test/python/README (new)
     +
     +    export PGDATABASE=postgres
     +
    -+but you can adjust as needed for your setup.
    ++but you can adjust as needed for your setup. See also 'Advanced Usage' below.
     +
     +## Requirements
     +
    @@ src/test/python/README (new)
     +can skip them by saying e.g.
     +
     +    $ py.test -m 'not slow'
    ++
    ++If you'd rather not test against an existing server, you can have the suite spin
    ++up a temporary one using whatever pg_ctl it finds in PATH:
    ++
    ++    $ py.test --temp-instance=./tmp_check
     
      ## src/test/python/client/__init__.py (new) ##
     
    @@ src/test/python/client/test_oauth.py (new)
     +
     +from .conftest import BLOCKING_TIMEOUT
     +
    ++# The client tests need libpq to have been compiled with OAuth support; skip
    ++# them otherwise.
    ++pytestmark = pytest.mark.skipif(
    ++    os.getenv("with_oauth") == "none",
    ++    reason="OAuth client tests require --with-oauth support",
    ++)
    ++
     +if platform.system() == "Darwin":
     +    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
     +else:
    @@ src/test/python/conftest.py (new)
     +import pytest
     +
     +
    ++def pytest_addoption(parser):
    ++    """
    ++    Adds custom command line options to py.test. We add one to signal temporary
    ++    Postgres instance creation for the server tests.
    ++
    ++    Per pytest documentation, this must live in the top level test directory.
    ++    """
    ++    parser.addoption(
    ++        "--temp-instance",
    ++        metavar="DIR",
    ++        help="create a temporary Postgres instance in DIR",
    ++    )
    ++
    ++
     +@pytest.fixture(scope="session", autouse=True)
     +def _check_PG_TEST_EXTRA(request):
     +    """
    @@ src/test/python/conftest.py (new)
     +    if "python" not in extra_tests:
     +        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
     
    + ## src/test/python/meson.build (new) ##
    +@@
    ++# Copyright (c) 2023, PostgreSQL Global Development Group
    ++
    ++subdir('server')
    ++
    ++pytest_env = {
    ++  'with_oauth': oauth_library,
    ++
    ++  # Point to the default database; the tests will create their own databases as
    ++  # needed.
    ++  'PGDATABASE': 'postgres',
    ++
    ++  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
    ++  # pyca/cryptography.
    ++  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
    ++}
    ++
    ++# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
    ++# might have multiple implementations installed (macOS+brew), try to use the
    ++# same one that libpq is using.
    ++if ssl.found()
    ++  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
    ++  if pytest_incdir != ''
    ++    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
    ++  endif
    ++
    ++  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
    ++  if pytest_libdir != ''
    ++    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
    ++  endif
    ++endif
    ++
    ++tests += {
    ++  'name': 'python',
    ++  'sd': meson.current_source_dir(),
    ++  'bd': meson.current_build_dir(),
    ++  'pytest': {
     ++    'requirements': meson.current_source_dir() / 'requirements.txt',
    ++    'tests': [
    ++      './client',
    ++      './server',
    ++      './test_internals.py',
    ++      './test_pq3.py',
    ++    ],
    ++    'env': pytest_env,
    ++  },
    ++}
    +
      ## src/test/python/pq3.py (new) ##
     @@
     +#
    @@ src/test/python/pytest.ini (new)
      ## src/test/python/requirements.txt (new) ##
     @@
     +black
    -+# cryptography 39.x removes a lot of platform support, beware
    -+cryptography~=38.0.4
    ++# cryptography 35.x and later add many platform/toolchain restrictions, beware
    ++cryptography~=3.4.8
     +construct~=2.10.61
     +isort~=5.6
     +# TODO: update to psycopg[c] 3.1
    -+psycopg2~=2.9.6
    ++psycopg2~=2.9.7
     +pytest~=7.3
     +pytest-asyncio~=0.21.0
     
    @@ src/test/python/server/__init__.py (new)
      ## src/test/python/server/conftest.py (new) ##
     @@
     +#
    -+# Copyright 2021 VMware, Inc.
    ++# Portions Copyright 2021 VMware, Inc.
    ++# Portions Copyright 2023 Timescale, Inc.
     +# SPDX-License-Identifier: PostgreSQL
     +#
     +
    ++import collections
     +import contextlib
    ++import os
    ++import shutil
     +import socket
    ++import subprocess
     +import sys
     +
     +import pytest
    @@ src/test/python/server/conftest.py (new)
     +BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
     +
     +
    ++def cleanup_prior_instance(datadir):
    ++    """
    ++    Clean up an existing data directory, but make sure it actually looks like a
    ++    data directory first. (Empty folders will remain untouched, since initdb can
    ++    populate them.)
    ++    """
    ++    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
    ++    empty = True
    ++
    ++    try:
    ++        with os.scandir(datadir) as entries:
    ++            for e in entries:
    ++                empty = False
    ++                required_entries.discard(e.name)
    ++
    ++    except FileNotFoundError:
    ++        return  # nothing to clean up
    ++
    ++    if empty:
    ++        return  # initdb can handle an empty datadir
    ++
    ++    if required_entries:
    ++        pytest.fail(
    ++            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
    ++        )
    ++
    ++    # Okay, seems safe enough now.
    ++    shutil.rmtree(datadir)
    ++
    ++
    ++@pytest.fixture(scope="session")
    ++def postgres_instance(pytestconfig, unused_tcp_port_factory):
    ++    """
    ++    If --temp-instance has been passed to pytest, this fixture runs a temporary
    ++    Postgres instance on an available port. Otherwise, the fixture will attempt
    ++    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
    ++    will be skipped if the connection fails.
    ++
    ++    Yields a (host, port) tuple for connecting to the server.
    ++    """
    ++    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
    ++
    ++    datadir = pytestconfig.getoption("temp_instance")
    ++    if datadir:
    ++        # We were told to create a temporary instance. Use pg_ctl to set it up
    ++        # on an unused port.
    ++        cleanup_prior_instance(datadir)
    ++        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
    ++
    ++        # The CI looks for *.log files to upload, so the file name here isn't
    ++        # completely arbitrary.
    ++        log = os.path.join(datadir, "postmaster.log")
    ++        port = unused_tcp_port_factory()
    ++
    ++        subprocess.run(
    ++            [
    ++                "pg_ctl",
    ++                "-D",
    ++                datadir,
    ++                "-l",
    ++                log,
    ++                "-o",
    ++                " ".join(
    ++                    [
    ++                        f"-c port={port}",
    ++                        "-c listen_addresses=localhost",
    ++                        "-c log_connections=on",
    ++                        "-c shared_preload_libraries=oauthtest",
    ++                        "-c oauth_validator_library=oauthtest",
    ++                    ]
    ++                ),
    ++                "start",
    ++            ],
    ++            check=True,
    ++        )
    ++
    ++        yield ("localhost", port)
    ++
    ++        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
    ++
    ++    else:
    ++        # Try to contact an already running server; skip the suite if we can't
    ++        # find one.
    ++        addr = (pq3.pghost(), pq3.pgport())
    ++
    ++        try:
    ++            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
    ++                pass
    ++        except ConnectionError as e:
    ++            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
    ++
    ++        yield addr
    ++
    ++
     +@pytest.fixture
    -+def connect():
    ++def connect(postgres_instance):
     +    """
     +    A factory fixture that, when called, returns a socket connected to a
    -+    Postgres server, wrapped in a pq3 connection. The calling test will be
    -+    skipped automatically if a server is not running at PGHOST:PGPORT, so it's
    -+    best to connect as soon as possible after the test case begins, to avoid
    -+    doing unnecessary work.
    ++    Postgres server, wrapped in a pq3 connection. Dependent tests will be
    ++    skipped if no server is available.
     +    """
    ++    addr = postgres_instance
    ++
     +    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
     +    with contextlib.ExitStack() as stack:
     +
     +        def conn_factory():
    -+            addr = (pq3.pghost(), pq3.pgport())
    -+
    -+            try:
    -+                sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
    -+            except ConnectionError as e:
    -+                pytest.skip(f"unable to connect to {addr}: {e}")
    ++            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
     +
     +            # Have ExitStack close our socket.
     +            stack.enter_context(sock)
    @@ src/test/python/server/conftest.py (new)
     +
     +        yield conn_factory
     
    + ## src/test/python/server/meson.build (new) ##
    +@@
    ++# Copyright (c) 2024, PostgreSQL Global Development Group
    ++
    ++if not oauth.found()
    ++  subdir_done()
    ++endif
    ++
    ++oauthtest_sources = files(
    ++  'oauthtest.c',
    ++)
    ++
    ++if host_system == 'windows'
    ++  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
    ++    '--NAME', 'oauthtest',
    ++    '--FILEDESC', 'passthrough module to validate OAuth tests',
    ++  ])
    ++endif
    ++
    ++oauthtest = shared_module('oauthtest',
    ++  oauthtest_sources,
    ++  kwargs: pg_test_mod_args,
    ++)
    ++test_install_libs += oauthtest
    +
      ## src/test/python/server/oauthtest.c (new) ##
     @@
     +/*-------------------------------------------------------------------------
    @@ src/test/python/server/test_oauth.py (new)
     +MAX_UINT16 = 2**16 - 1
     +
     +
    -+def skip_if_no_postgres():
    -+    """
    -+    Used by the oauth_ctx fixture to skip this test module if no Postgres server
    -+    is running.
    -+
    -+    This logic is nearly duplicated with the conn fixture. Ideally oauth_ctx
    -+    would depend on that, but a module-scope fixture can't depend on a
    -+    test-scope fixture, and we haven't reached the rule of three yet.
    -+    """
    -+    addr = (pq3.pghost(), pq3.pgport())
    -+
    -+    try:
    -+        with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
    -+            pass
    -+    except ConnectionError as e:
    -+        pytest.skip(f"unable to connect to {addr}: {e}")
    -+
    -+
     +@contextlib.contextmanager
     +def prepend_file(path, lines):
     +    """
    @@ src/test/python/server/test_oauth.py (new)
     +
     +
     +@pytest.fixture(scope="module")
    -+def oauth_ctx():
    ++def oauth_ctx(postgres_instance):
     +    """
     +    Creates a database and user that use the oauth auth method. The context
     +    object contains the dbname and user attributes as strings to be used during
    @@ src/test/python/server/test_oauth.py (new)
     +    server running on a local machine, and that the PGUSER has rights to create
     +    databases and roles.
     +    """
    -+    skip_if_no_postgres()  # don't bother running these tests without a server
    -+
     +    id = secrets.token_hex(4)
     +
     +    class Context:
    @@ src/test/python/server/test_oauth.py (new)
     +    )
     +    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
     +
    -+    conn = psycopg2.connect("")
    ++    host, port = postgres_instance
    ++    conn = psycopg2.connect(host=host, port=port)
     +    conn.autocommit = True
     +
     +    with contextlib.closing(conn):
    @@ src/test/python/server/test_oauth.py (new)
     +
     +
     +@pytest.fixture()
    -+def setup_validator():
    ++def setup_validator(postgres_instance):
     +    """
     +    A per-test fixture that sets up the test validator with expected behavior.
     +    The setting will be reverted during teardown.
     +    """
    -+    conn = psycopg2.connect("")
    ++    host, port = postgres_instance
    ++    conn = psycopg2.connect(host=host, port=port)
     +    conn.autocommit = True
     +
     +    with contextlib.closing(conn):
    @@ src/test/python/tls.py (new)
     +    "length" / Int16ub,
     +    "fragment" / FixedSized(this.length, GreedyBytes),
     +)
    +
    + ## src/tools/make_venv (new) ##
    +@@
    ++#!/usr/bin/env python3
    ++
    ++import argparse
    ++import subprocess
    ++import os
    ++import sys
    ++
    ++parser = argparse.ArgumentParser()
    ++
    ++parser.add_argument('--requirements', help='path to pip requirements file', type=str)
    ++parser.add_argument('--privatedir', help='private directory for target', type=str)
    ++parser.add_argument('venv_path', help='desired venv location')
    ++
    ++args = parser.parse_args()
    ++
    ++# Decide whether or not to capture stdout into a log file. We only do this if
    ++# we've been given our own private directory.
    ++#
    ++# FIXME Unfortunately this interferes with debugging on Cirrus, because the
    ++# private directory isn't uploaded in the sanity check's artifacts. When we
    ++# don't capture the log file, it gets spammed to stdout during build... Is there
    ++# a way to push this into the meson-log somehow? For now, the capture
    ++# implementation is commented out.
    ++logfile = None
    ++
    ++if args.privatedir:
    ++    if not os.path.isdir(args.privatedir):
    ++        os.mkdir(args.privatedir)
    ++
    ++    # FIXME see above comment
    ++    # logpath = os.path.join(args.privatedir, 'stdout.txt')
    ++    # logfile = open(logpath, 'w')
    ++
    ++def run(*args):
    ++    kwargs = dict(check=True)
    ++    if logfile:
    ++        kwargs.update(stdout=logfile)
    ++
    ++    subprocess.run(args, **kwargs)
    ++
    ++# Create the virtualenv first.
    ++run(sys.executable, '-m', 'venv', args.venv_path)
    ++
    ++# Update pip next. This helps avoid old pip bugs; the version inside system
    ++# Pythons tends to be pretty out of date.
    ++pip = os.path.join(args.venv_path, 'bin', 'pip')
    ++run(pip, 'install', '-U', 'pip')
    ++
    ++# Finally, install the test's requirements. We need pytest and pytest-tap, no
    ++# matter what the test needs.
    ++run(pip, 'install', 'pytest', 'pytest-tap')
    ++if args.requirements:
    ++    run(pip, 'install', '-r', args.requirements)
 7:  7d21be13c0 <  -:  ---------- squash! Add pytest suite for OAuth
 8:  26dcd5f828 !  8:  0f9f884856 XXX temporary patches to build and test
    @@ Commit message
         - the new pg_combinebackup utility uses JSON in the frontend without
           0001; has something changed?
         - construct 2.10.70 has some incompatibilities with the current tests
    +    - temporarily skip the exit check (from Daniel Gustafsson); this needs
    +      to be turned into an exception for curl rather than a plain exit call
     
      ## src/bin/pg_combinebackup/Makefile ##
     @@ src/bin/pg_combinebackup/Makefile: include $(top_builddir)/src/Makefile.global
    @@ src/bin/pg_verifybackup/Makefile: top_builddir = ../../..
      OBJS = \
      	$(WIN32RES) \
     
    + ## src/interfaces/libpq/Makefile ##
    +@@ src/interfaces/libpq/Makefile: libpq-refs-stamp: $(shlib)
    + ifneq ($(enable_coverage), yes)
    + ifeq (,$(filter aix solaris,$(PORTNAME)))
    + 	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
    +-		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
    ++		echo 'libpq must not be calling any function which invokes exit'; \
    + 	fi
    + endif
    + endif
    +
      ## src/test/python/requirements.txt ##
     @@
      black
 9:  0ff8e3786a <  -:  ---------- REVERT: temporarily skip the exit check
 -:  ---------- >  9:  de8f81bd7d WIP: Python OAuth provider implementation
Attachment: v17-0003-Explicitly-require-password-for-SCRAM-exchange.patch (application/octet-stream)
From 83d78f598c65bd942e770eefbec0084d040414c3 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:19:55 +0100
Subject: [PATCH v17 3/9] Explicitly require password for SCRAM exchange

This refactors the SASL init flow to set password_needed on the two
SCRAM exchanges currently supported. The code already required a
password for these, but it was structured so that every SASL exchange
required one, a restriction which may not hold for all mechanisms
(the case at hand being the proposed OAUTHBEARER exchange).

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 71dd096605..9f57976b4f 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -446,8 +446,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support. Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -487,6 +486,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,6 +522,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
 	}
 
@@ -545,18 +546,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the selected SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
-- 
2.34.1

Attachment: v17-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 00c807380737e91e3f85ebe0301c54faad2565b3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v17 4/9] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires either libcurl or libiddawc and their
development headers. Pass `curl` or `iddawc` to --with-oauth/-Doauth
during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
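
The delegation contract above (handle your own authdata type, pass
everything else to the previous hook, return > 0 on success and < 0 on
error) is a chain-of-responsibility pattern. A hypothetical Python model
of the dispatch, just to illustrate the control flow a C hook should
follow (the constants and the default hook's return value are stand-ins,
not the real libpq API):

```python
PROMPT_OAUTH_DEVICE, OAUTH_BEARER_TOKEN = 1, 2  # stand-ins for the C enum


def default_hook(authdata_type, data):
    return 0  # stand-in for libpq's builtin behavior: nothing overridden


def make_hook(handled_type, handler, previous_hook):
    """Build a hook that handles one authdata type and delegates the rest."""
    def hook(authdata_type, data):
        if authdata_type != handled_type:
            # Not ours: delegate to the previous hook in the chain.
            return previous_hook(authdata_type, data)
        try:
            handler(data)
        except Exception:
            return -1  # error: abandon the connection attempt
        return 1  # handled successfully
    return hook


# Install a device-prompt handler on top of the existing chain:
prompts = []
hook = make_hook(PROMPT_OAUTH_DEVICE, prompts.append, default_hook)
```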

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- ...and more.
---
 configure                                   |  186 ++
 configure.ac                                |   37 +
 meson.build                                 |   45 +
 meson_options.txt                           |    4 +
 src/Makefile.global.in                      |    1 +
 src/include/common/oauth-common.h           |   19 +
 src/include/pg_config.h.in                  |   18 +
 src/interfaces/libpq/Makefile               |   12 +-
 src/interfaces/libpq/exports.txt            |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c   | 1981 +++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth-iddawc.c |  319 +++
 src/interfaces/libpq/fe-auth-oauth.c        |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h        |   42 +
 src/interfaces/libpq/fe-auth-sasl.h         |   12 +-
 src/interfaces/libpq/fe-auth-scram.c        |    6 +-
 src/interfaces/libpq/fe-auth.c              |  107 +-
 src/interfaces/libpq/fe-auth.h              |    9 +-
 src/interfaces/libpq/fe-connect.c           |   85 +-
 src/interfaces/libpq/fe-misc.c              |    7 +-
 src/interfaces/libpq/libpq-fe.h             |   77 +-
 src/interfaces/libpq/libpq-int.h            |   14 +
 src/interfaces/libpq/meson.build            |    9 +
 src/makefiles/meson.build                   |    1 +
 src/tools/pgindent/typedefs.list            |   10 +
 24 files changed, 3634 insertions(+), 29 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-iddawc.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 6b87e5c9a8..1527d5d599 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -859,6 +860,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1570,6 +1572,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl, iddawc)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8794,6 +8797,59 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" = x"iddawc"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_IDDAWC 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl or iddawc" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13347,6 +13403,116 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+elif test "$with_oauth" = iddawc ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-liddawc  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char i_init_session ();
+int
+main ()
+{
+return i_init_session ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_iddawc_i_init_session=yes
+else
+  ac_cv_lib_iddawc_i_init_session=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBIDDAWC 1
+_ACEOF
+
+  LIBS="-liddawc $LIBS"
+
+else
+  as_fn_error $? "library 'iddawc' is required for --with-oauth=iddawc" "$LINENO" 5
+fi
+
+  # Check for an older spelling of i_get_openid_config
+  for ac_func in i_load_openid_config
+do :
+  ac_fn_c_check_func "$LINENO" "i_load_openid_config" "ac_cv_func_i_load_openid_config"
+if test "x$ac_cv_func_i_load_openid_config" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_I_LOAD_OPENID_CONFIG 1
+_ACEOF
+
+fi
+done
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14372,6 +14538,26 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
+elif test "$with_oauth" = iddawc; then
+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
+if test "x$ac_cv_header_iddawc_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 6e64ece11d..b7982221a7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -949,6 +949,29 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl, iddawc)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" = x"iddawc"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_IDDAWC], 1, [Define to 1 to use libiddawc for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl or iddawc])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1446,6 +1469,14 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+elif test "$with_oauth" = iddawc ; then
+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for --with-oauth=iddawc])])
+  # Check for an older spelling of i_get_openid_config
+  AC_CHECK_FUNCS([i_load_openid_config])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1637,6 +1668,12 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+elif test "$with_oauth" = iddawc; then
+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index 8ed51b6aae..8c83eac03d 100644
--- a/meson.build
+++ b/meson.build
@@ -849,6 +849,49 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if not oauth.found() and oauthopt in ['auto', 'iddawc']
+  oauth = dependency('libiddawc', required: (oauthopt == 'iddawc'))
+
+  if oauth.found()
+    oauth_library = 'iddawc'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_IDDAWC', 1)
+
+    # Check for an older spelling of i_get_openid_config
+    if cc.has_function('i_load_openid_config',
+                       dependencies: oauth, args: test_c_args)
+      cdata.set('HAVE_I_LOAD_OPENID_CONFIG', 1)
+    endif
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2838,6 +2881,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3439,6 +3483,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 249ecc5ffd..f54f7fd717 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl', 'iddawc'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl, iddawc)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b3f8c24e0..79b3647834 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07e73567dc..f470c77669 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -231,6 +231,9 @@
 /* Define to 1 if __builtin_constant_p(x) implies "i"(x) acceptance. */
 #undef HAVE_I_CONSTRAINT__BUILTIN_CONSTANT_P
 
+/* Define to 1 if you have the `i_load_openid_config' function. */
+#undef HAVE_I_LOAD_OPENID_CONFIG
+
 /* Define to 1 if you have the `kqueue' function. */
 #undef HAVE_KQUEUE
 
@@ -243,6 +246,12 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
+/* Define to 1 if you have the `iddawc' library (-liddawc). */
+#undef HAVE_LIBIDDAWC
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -711,6 +720,15 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
+/* Define to 1 to use libiddawc for OAuth support. */
+#undef USE_OAUTH_IDDAWC
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 083ca6f4cc..5dab88f095 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,16 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),iddawc)
+OBJS += fe-auth-oauth-iddawc.o
+else
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb1..0f8f5e3125 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,6 @@ PQsendClosePrepared       190
 PQsendClosePortal         191
 PQchangePassword          192
 PQsendPipelineSync        193
+PQsetAuthDataHook         194
+PQgetAuthDataHook         195
+PQdefaultAuthDataHook     196
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..3e20ba5818
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1981 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time. They assume
+ * that they're embedded in a function returning bool, however.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			return false; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			return false; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			return false; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (!strcmp(name, field->name))
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START || ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	float		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%f", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceilf(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* TODO: handle default interval of 5 seconds */
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not
+ * needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: This disables DNS resolution timeouts unless libcurl has been
+	 * compiled against alternative resolution support. We should check that.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L);
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err);
+
+	/* TODO */
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+		bool		oom = false;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (!temp)
+			oom = true;
+
+		temp = curl_slist_append(temp, "implicit");
+		if (!temp)
+			oom = true;
+
+		if (oom)
+		{
+			curl_slist_free_all(temp);
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides a device authorization
+ * endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+			/* FALLTHROUGH */
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					/* TODO handle !err->error */
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (!strcmp(err->error, "slow_down"))
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth-iddawc.c b/src/interfaces/libpq/fe-auth-oauth-iddawc.c
new file mode 100644
index 0000000000..e78d4304d3
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-iddawc.c
@@ -0,0 +1,319 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-iddawc.c
+ *	   The libiddawc implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-iddawc.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <iddawc.h>
+
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+
+#ifdef HAVE_I_LOAD_OPENID_CONFIG
+/* Older versions of iddawc used 'load' instead of 'get' for some APIs. */
+#define i_get_openid_config i_load_openid_config
+#endif
+
+static const char *
+iddawc_error_string(int errcode)
+{
+	switch (errcode)
+	{
+		case I_OK:
+			return "I_OK";
+
+		case I_ERROR:
+			return "I_ERROR";
+
+		case I_ERROR_PARAM:
+			return "I_ERROR_PARAM";
+
+		case I_ERROR_MEMORY:
+			return "I_ERROR_MEMORY";
+
+		case I_ERROR_UNAUTHORIZED:
+			return "I_ERROR_UNAUTHORIZED";
+
+		case I_ERROR_SERVER:
+			return "I_ERROR_SERVER";
+	}
+
+	return "<unknown>";
+}
+
+static void
+iddawc_error(PGconn *conn, int errcode, const char *msg)
+{
+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
+	appendPQExpBuffer(&conn->errorMessage,
+					  libpq_gettext(" (iddawc error %s)\n"),
+					  iddawc_error_string(errcode));
+}
+
+static void
+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
+{
+	const char *error_code;
+	const char *desc;
+
+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
+
+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
+	if (!error_code)
+	{
+		/*
+		 * The server didn't give us any useful information, so just print the
+		 * error code.
+		 */
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("(iddawc error %s)\n"),
+						  iddawc_error_string(err));
+		return;
+	}
+
+	/* If the server gave a string description, print that too. */
+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
+	if (desc)
+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
+
+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
+}
+
+/*
+ * Runs the device authorization flow using libiddawc. If successful, a malloc'd
+ * token string in "Bearer xxxx..." format, suitable for sending to an
+ * OAUTHBEARER server, is returned. NULL is returned on error.
+ */
+static char *
+run_iddawc_auth_flow(PGconn *conn, const char *discovery_uri)
+{
+	struct _i_session session;
+	PQExpBuffer token_buf = NULL;
+	int			err;
+	int			auth_method;
+	bool		user_prompted = false;
+	const char *verification_uri;
+	const char *user_code;
+	const char *access_token;
+	const char *token_type;
+	char	   *token = NULL;
+
+	i_init_session(&session);
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		goto cleanup;
+	}
+
+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, discovery_uri);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
+		goto cleanup;
+	}
+
+	err = i_get_openid_config(&session);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer has no token endpoint\n"));
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer does not support device authorization\n"));
+		goto cleanup;
+	}
+
+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set device code response type");
+		goto cleanup;
+	}
+
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+
+	err = i_set_parameter_list(&session,
+							   I_OPT_CLIENT_ID, conn->oauth_client_id,
+							   I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+							   I_OPT_TOKEN_METHOD, auth_method,
+							   I_OPT_SCOPE, conn->oauth_scope,
+							   I_OPT_NONE
+		);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set client identifier");
+		goto cleanup;
+	}
+
+	err = i_run_device_auth_request(&session);
+	if (err)
+	{
+		iddawc_request_error(conn, &session, err,
+							 "failed to obtain device authorization");
+		goto cleanup;
+	}
+
+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
+	if (!verification_uri)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a verification URI\n"));
+		goto cleanup;
+	}
+
+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
+	if (!user_code)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a user code\n"));
+		goto cleanup;
+	}
+
+	/*
+	 * Poll the token endpoint until either the user logs in and authorizes
+	 * the use of a token, or a hard failure occurs. We perform one ping
+	 * _before_ prompting the user, so that we don't make them do the work of
+	 * logging in only to find that the token endpoint is completely
+	 * unreachable.
+	 */
+	err = i_run_token_request(&session);
+	while (err)
+	{
+		const char *error_code;
+		unsigned int interval;
+
+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
+
+		/*
+		 * authorization_pending and slow_down are the only acceptable errors;
+		 * anything else and we bail.
+		 */
+		if (!error_code || (strcmp(error_code, "authorization_pending")
+							&& strcmp(error_code, "slow_down")))
+		{
+			iddawc_request_error(conn, &session, err,
+								 "failed to obtain access token");
+			goto cleanup;
+		}
+
+		if (!user_prompted)
+		{
+			int			res;
+			PQpromptOAuthDevice prompt = {
+				.verification_uri = verification_uri,
+				.user_code = user_code,
+				/* TODO: optional fields */
+			};
+
+			/*
+			 * Now that we know the token endpoint isn't broken, give the user
+			 * the login instructions.
+			 */
+			res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+								 &prompt);
+
+			if (!res)
+			{
+				fprintf(stderr, "Visit %s and enter the code: %s\n",
+						prompt.verification_uri, prompt.user_code);
+			}
+			else if (res < 0)
+			{
+				appendPQExpBufferStr(&conn->errorMessage,
+									 libpq_gettext("device prompt failed\n"));
+				goto cleanup;
+			}
+
+			user_prompted = true;
+		}
+
+		/*---
+		 * We are required to wait between polls; the server tells us how
+		 * long. If it doesn't specify an interval, RFC 8628, Sec. 3.2
+		 * prescribes a default of five seconds.
+		 * TODO: sanity check the upper bound of the interval
+		 */
+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
+		if (interval < 1)
+			interval = 5;
+
+		/*
+		 * A slow_down error requires us to permanently increase our retry
+		 * interval by five seconds. RFC 8628, Sec. 3.5.
+		 */
+		if (!strcmp(error_code, "slow_down"))
+		{
+			interval += 5;
+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
+		}
+
+		sleep(interval);
+
+		/*
+		 * XXX Reset the error code before every call, because iddawc won't do
+		 * that for us. This matters if the server first sends a "pending"
+		 * error code, then later hard-fails without sending an error code to
+		 * overwrite the first one.
+		 *
+		 * That we have to do this at all seems like a bug in iddawc.
+		 */
+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
+
+		err = i_run_token_request(&session);
+	}
+
+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+
+	if (!access_token || !token_type || pg_strcasecmp(token_type, "Bearer"))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a bearer token\n"));
+		goto cleanup;
+	}
+
+	appendPQExpBufferStr(token_buf, "Bearer ");
+	appendPQExpBufferStr(token_buf, access_token);
+
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	token = strdup(token_buf->data);
+
+cleanup:
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+	i_clean_session(&session);
+
+	return token;
+}
+
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	/* TODO: actually make this asynchronous */
+	state->token = run_iddawc_auth_flow(conn, conn->oauth_discovery_uri);
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..2a35c7438c
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (!strcmp(name, ERROR_STATUS_FIELD))
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (!strcmp(name, ERROR_SCOPE_FIELD))
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (!strcmp(ctx.status, "invalid_token"))
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index dce16b7b8b..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
-	 *	SASL_FAILED:	The exchance has failed and the connection should be
+	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9f57976b4f..9e084dd1c7 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,13 +420,13 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))
 	{
 		if (outputlen != 0)
@@ -955,12 +997,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1118,7 +1166,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1135,7 +1183,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1451,3 +1500,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f..15ceb73d01 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -359,6 +359,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -618,6 +635,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -2536,6 +2554,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3517,6 +3536,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3672,6 +3692,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -3753,7 +3783,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3786,6 +3826,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4285,6 +4360,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4400,6 +4476,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6868,6 +6949,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f2fc78a481..663b1c1acf 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1039,10 +1039,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1059,7 +1062,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3..d095351c66 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -78,7 +80,9 @@ typedef enum
 	CONNECTION_CONSUME,			/* Consuming any extra messages. */
 	CONNECTION_GSS_STARTUP,		/* Negotiating GSSAPI. */
 	CONNECTION_CHECK_TARGET,	/* Checking target server properties. */
-	CONNECTION_CHECK_STANDBY	/* Checking if server is in standby mode. */
+	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -160,6 +164,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -658,10 +669,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d..cf26c693e3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -351,6 +351,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -409,6 +411,15 @@ struct pg_conn
 	char	   *require_auth;	/* name of the expected auth method */
 	char	   *load_balance_hosts; /* load balance over hosts */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -477,6 +488,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index a47b6f425d..753a7137d6 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,15 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'iddawc'
+    libpq_sources += files('fe-auth-oauth-iddawc.c')
+  else
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index b0f4178b3d..f803c1200b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -231,6 +231,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2461567026..6234fe66f1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -354,6 +355,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1653,6 +1656,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1718,6 +1722,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1877,11 +1882,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3343,6 +3351,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

v17-0001-common-jsonapi-support-FRONTEND-clients.patch (application/octet-stream)
From 00976d4f750ba5d6ea5973f9edd8fe9c521ee9aa Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v17 1/9] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

We can now partially revert b44669b2ca, since json_errdetail() works
correctly.
---
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   3 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 270 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   2 +-
 src/include/common/jsonapi.h                  |  18 +-
 6 files changed, 222 insertions(+), 81 deletions(-)

diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index e278ccea5a..e2a297930e 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -13,7 +13,8 @@ use Test::More;
 my $tempdir = PostgreSQL::Test::Utils::tempdir;
 
 test_bad_manifest('input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/, <<EOM);
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
+	<<EOM);
 {
 EOM
 
diff --git a/src/common/Makefile b/src/common/Makefile
index 2ba5069dca..bbb5c3ab11 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 32931ded82..479310f598 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,41 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendBinaryStrVal  appendBinaryPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendBinaryStrVal  appendBinaryStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -167,9 +198,16 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+	lex->errormsg = NULL;
 
 	return lex;
 }
@@ -182,13 +220,30 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
 	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->strval);
+#else
 		pfree(lex->strval->data);
 		pfree(lex->strval);
+#endif
+	}
+	if (lex->errormsg)
+	{
+#ifdef FRONTEND
+		destroyPQExpBuffer(lex->errormsg);
+#else
+		pfree(lex->errormsg->data);
+		pfree(lex->errormsg);
+#endif
 	}
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -254,7 +309,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -316,14 +371,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -357,8 +419,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -414,6 +480,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -762,8 +833,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -800,7 +878,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -857,19 +935,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -879,22 +957,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -929,7 +1007,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -953,8 +1031,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -970,6 +1048,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1145,72 +1228,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct an (already translated) detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safely pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int			toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1219,9 +1323,19 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			/* note: this case is only reachable in frontend not backend */
 			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
 		case JSON_UNICODE_UNTRANSLATABLE:
-			/* note: this case is only reachable in backend not frontend */
+
+			/*
+			 * note: this case is only reachable in backend not frontend.
+			 * #ifdef it away so the frontend doesn't try to link against
+			 * backend functionality.
+			 */
+#ifndef FRONTEND
 			return psprintf(_("Unicode escape value could not be translated to the server's encoding %s."),
 							GetDatabaseEncodingName());
+#else
+			Assert(false);
+			break;
+#endif
 		case JSON_UNICODE_HIGH_SURROGATE:
 			return _("Unicode high surrogate must not follow a high surrogate.");
 		case JSON_UNICODE_LOW_SURROGATE:
@@ -1231,12 +1345,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			break;
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/common/meson.build b/src/common/meson.build
index 4eb16024cb..5d2c7abaa6 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -124,13 +124,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -139,6 +144,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -156,7 +162,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -169,7 +174,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 92a97714f3..62d93989be 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -147,7 +147,7 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	/* Run the actual JSON parser. */
 	json_error = pg_parse_json(lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 02943cdad8..75d444c17a 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -57,6 +56,17 @@ typedef enum JsonParseErrorType
 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -88,7 +98,9 @@ typedef struct JsonLexContext
 	bits32		flags;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
-- 
2.34.1

v17-0002-Refactor-SASL-exchange-to-return-tri-state-statu.patch (application/octet-stream)
From d8b567dd550aff4dda20174d64cd769c697861db Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:09:54 +0100
Subject: [PATCH v17 2/9] Refactor SASL exchange to return tri-state status

The SASL exchange callback returned its state in two output variables:
done and success.  This refactors that logic by introducing a new
return variable of type SASLStatus which makes the code easier to
read and understand, and prepares for future SASL exchanges which
operate asynchronously.

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth-sasl.h  | 31 +++++++----
 src/interfaces/libpq/fe-auth-scram.c | 78 +++++++++++++---------------
 src/interfaces/libpq/fe-auth.c       | 28 +++++-----
 src/tools/pgindent/typedefs.list     |  1 +
 4 files changed, 71 insertions(+), 67 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index ee5d1525b5..dce16b7b8b 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -21,6 +21,17 @@
 
 #include "libpq-fe.h"
 
+/*
+ * Possible states for the SASL exchange; see the comment on exchange() for an
+ * explanation of these.
+ */
+typedef enum
+{
+	SASL_COMPLETE = 0,
+	SASL_FAILED,
+	SASL_CONTINUE,
+} SASLStatus;
+
 /*
  * Frontend SASL mechanism callbacks.
  *
@@ -59,7 +70,8 @@ typedef struct pg_fe_sasl_mech
 	 * Produces a client response to a server challenge.  As a special case
 	 * for client-first SASL mechanisms, exchange() is called with a NULL
 	 * server response once at the start of the authentication exchange to
-	 * generate an initial response.
+	 * generate an initial response. Returns a SASLStatus indicating the
+	 * state and status of the exchange.
 	 *
 	 * Input parameters:
 	 *
@@ -79,22 +91,23 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	output:	   A malloc'd buffer containing the client's response to
 	 *			   the server (can be empty), or NULL if the exchange should
-	 *			   be aborted.  (*success should be set to false in the
+	 *			   be aborted.  (The callback should return SASL_FAILED in the
 	 *			   latter case.)
 	 *
 	 *	outputlen: The length (0 or higher) of the client response buffer,
 	 *			   ignored if output is NULL.
 	 *
-	 *	done:      Set to true if the SASL exchange should not continue,
-	 *			   because the exchange is either complete or failed
+	 * Return value:
 	 *
-	 *	success:   Set to true if the SASL exchange completed successfully.
-	 *			   Ignored if *done is false.
+	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
+	 *					An additional server challenge is expected.
+	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
+	 *	SASL_FAILED:	The exchange has failed and the connection should be
+	 *					dropped.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
-							 char **output, int *outputlen,
-							 bool *done, bool *success);
+	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+							 char **output, int *outputlen);
 
 	/*--------
 	 * channel_bound()
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 04f0e5713d..0bb820e0d9 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,9 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
-						   char **output, int *outputlen,
-						   bool *done, bool *success);
+static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
 
@@ -202,17 +201,14 @@ scram_free(void *opaq)
 /*
  * Exchange a SCRAM message with backend.
  */
-static void
+static SASLStatus
 scram_exchange(void *opaq, char *input, int inputlen,
-			   char **output, int *outputlen,
-			   bool *done, bool *success)
+			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 	PGconn	   *conn = state->conn;
 	const char *errstr = NULL;
 
-	*done = false;
-	*success = false;
 	*output = NULL;
 	*outputlen = 0;
 
@@ -225,12 +221,12 @@ scram_exchange(void *opaq, char *input, int inputlen,
 		if (inputlen == 0)
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (empty message)");
-			goto error;
+			return SASL_FAILED;
 		}
 		if (inputlen != strlen(input))
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (length mismatch)");
-			goto error;
+			return SASL_FAILED;
 		}
 	}
 
@@ -240,61 +236,59 @@ scram_exchange(void *opaq, char *input, int inputlen,
 			/* Begin the SCRAM handshake, by sending client nonce */
 			*output = build_client_first_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_NONCE_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_NONCE_SENT:
 			/* Receive salt and server nonce, send response. */
 			if (!read_server_first_message(state, input))
-				goto error;
+				return SASL_FAILED;
 
 			*output = build_client_final_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_PROOF_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_PROOF_SENT:
-			/* Receive server signature */
-			if (!read_server_final_message(state, input))
-				goto error;
-
-			/*
-			 * Verify server signature, to make sure we're talking to the
-			 * genuine server.
-			 */
-			if (!verify_server_signature(state, success, &errstr))
-			{
-				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
-				goto error;
-			}
-
-			if (!*success)
 			{
-				libpq_append_conn_error(conn, "incorrect server signature");
+				bool		match;
+
+				/* Receive server signature */
+				if (!read_server_final_message(state, input))
+					return SASL_FAILED;
+
+				/*
+				 * Verify server signature, to make sure we're talking to the
+				 * genuine server.
+				 */
+				if (!verify_server_signature(state, &match, &errstr))
+				{
+					libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
+					return SASL_FAILED;
+				}
+
+				if (!match)
+				{
+					libpq_append_conn_error(conn, "incorrect server signature");
+				}
+				state->state = FE_SCRAM_FINISHED;
+				state->conn->client_finished_auth = true;
+				return match ? SASL_COMPLETE : SASL_FAILED;
 			}
-			*done = true;
-			state->state = FE_SCRAM_FINISHED;
-			state->conn->client_finished_auth = true;
-			break;
 
 		default:
 			/* shouldn't happen */
 			libpq_append_conn_error(conn, "invalid SCRAM exchange state");
-			goto error;
+			break;
 	}
-	return;
 
-error:
-	*done = true;
-	*success = false;
+	return SASL_FAILED;
 }
 
 /*
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 1a8e4f6fbf..71dd096605 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -423,11 +423,10 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
-	bool		done;
-	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
 	char	   *password;
+	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -575,12 +574,11 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
-						 NULL, -1,
-						 &initialresponse, &initialresponselen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  NULL, -1,
+								  &initialresponse, &initialresponselen);
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		goto error;
 
 	/*
@@ -629,10 +627,9 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 {
 	char	   *output;
 	int			outputlen;
-	bool		done;
-	bool		success;
 	int			res;
 	char	   *challenge;
+	SASLStatus	status;
 
 	/* Read the SASL challenge from the AuthenticationSASLContinue message. */
 	challenge = malloc(payloadlen + 1);
@@ -651,13 +648,12 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
-						 challenge, payloadlen,
-						 &output, &outputlen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  challenge, payloadlen,
+								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
-	if (final && !done)
+	if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))
 	{
 		if (outputlen != 0)
 			free(output);
@@ -670,7 +666,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	 * If the exchange is not completed yet, we need to make sure that the
 	 * SASL mechanism has generated a message to send back.
 	 */
-	if (output == NULL && !done)
+	if (output == NULL && status == SASL_CONTINUE)
 	{
 		libpq_append_conn_error(conn, "no client response found after SASL exchange success");
 		return STATUS_ERROR;
@@ -692,7 +688,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 			return STATUS_ERROR;
 	}
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		return STATUS_ERROR;
 
 	return STATUS_OK;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fc8b15d0cf..2461567026 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2423,6 +2423,7 @@ RuleLock
 RuleStmt
 RunningTransactions
 RunningTransactionsData
+SASLStatus
 SC_HANDLE
 SECURITY_ATTRIBUTES
 SECURITY_STATUS
-- 
2.34.1

Attachment: v17-0006-Introduce-OAuth-validator-libraries.patch (application/octet-stream)
From 0661817808b3b440c5fd2cbc13e47595514924ee Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 21 Feb 2024 17:04:26 +0100
Subject: [PATCH v17 6/9] Introduce OAuth validator libraries

This replaces the server-side validation code with a module API
for loading extensions that validate bearer tokens. A lot of
code is left to be written.

Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
---
 src/backend/libpq/auth-oauth.c                | 431 +++++-------------
 src/backend/utils/misc/guc_tables.c           |   6 +-
 src/bin/pg_combinebackup/Makefile             |   2 +-
 src/common/Makefile                           |   2 +-
 src/include/libpq/oauth.h                     |  29 +-
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  19 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  33 ++
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  53 +++
 src/test/modules/oauth_validator/validator.c  |  71 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 158 +++++++
 src/tools/pgindent/typedefs.list              |   2 +
 16 files changed, 500 insertions(+), 332 deletions(-)
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index e74c008161..765a18b9b2 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -6,7 +6,7 @@
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/libpq/auth-oauth.c
@@ -19,21 +19,29 @@
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "storage/ipc.h"
 
 /* GUC */
-char	   *oauth_validator_command;
+char	   *OAuthValidatorLibrary = "";
 
 static void oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
 static int	oauth_exchange(void *opaq, const char *input, int inputlen,
 						   char **output, int *outputlen, const char **logdetail);
 
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
 	oauth_get_mechanisms,
@@ -62,11 +70,7 @@ struct oauth_ctx
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, const char **logdetail);
-static bool run_validator_command(Port *port, const char *token);
-static bool check_exit(FILE **fh, const char *command);
-static bool set_cloexec(int fd);
-static bool username_ok_for_shell(const char *username);
+static bool validate(Port *port, const char *auth);
 
 #define KVSEP 0x01
 #define AUTH_KEY "auth"
@@ -99,6 +103,8 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	ctx->issuer = port->hba->oauth_issuer;
 	ctx->scope = port->hba->oauth_scope;
 
+	load_validator_library();
+
 	return ctx;
 }
 
@@ -249,7 +255,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
 
-	if (!validate(ctx->port, auth, logdetail))
+	if (!validate(ctx->port, auth))
 	{
 		generate_error_response(ctx, output, outputlen);
 
@@ -416,70 +422,73 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	*outputlen = buf.len;
 }
 
-static bool
-validate(Port *port, const char *auth, const char **logdetail)
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
 {
-	static const char *const b64_set =
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
 		"abcdefghijklmnopqrstuvwxyz"
 		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 		"0123456789-._~+/";
 
-	const char *token;
-	size_t		span;
-	int			ret;
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token 1")));
+		return NULL;
+	}
 
-	/* TODO: handle logdetail when the test framework can check it */
-
-	/*-----
-	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
-	 * 2.1:
-	 *
-	 *      b64token    = 1*( ALPHA / DIGIT /
-	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
-	 *      credentials = "Bearer" 1*SP b64token
-	 *
-	 * The "credentials" construction is what we receive in our auth value.
-	 *
-	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
-	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
-	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
-	 * it's pointed out in RFC 7628 Sec. 4.)
-	 *
-	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
-	 */
-	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
-		return false;
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
 
 	/* Pull the bearer token out of the auth value. */
-	token = auth + strlen(BEARER_SCHEME);
+	token = header + strlen(BEARER_SCHEME);
 
 	/* Swallow any additional spaces. */
 	while (*token == ' ')
 		token++;
 
-	/*
-	 * Before invoking the validator command, sanity-check the token format to
-	 * avoid any injection attacks later in the chain. Invalid formats are
-	 * technically a protocol violation, but don't reflect any information
-	 * about the sensitive Bearer token back to the client; log at COMMERROR
-	 * instead.
-	 */
-
 	/* Tokens must not be empty. */
 	if (!*token)
 	{
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token 2"),
 				 errdetail("Bearer token is empty.")));
-		return false;
+		return NULL;
 	}
 
 	/*
 	 * Make sure the token contains only allowed characters. Tokens may end
 	 * with any number of '=' characters.
 	 */
-	span = strspn(token, b64_set);
+	span = strspn(token, b64token_allowed_set);
 	while (token[span] == '=')
 		span++;
 
@@ -492,15 +501,35 @@ validate(Port *port, const char *auth, const char **logdetail)
 		 */
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token 3"),
 				 errdetail("Bearer token is not in the correct format.")));
-		return false;
+		return NULL;
 	}
 
-	/* Have the validator check the token. */
-	if (!run_validator_command(port, token))
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
 		return false;
 
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authenticated)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
 	if (port->hba->oauth_skip_usermap)
 	{
 		/*
@@ -513,7 +542,7 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Make sure the validator authenticated the user. */
-	if (!MyClientConnectionInfo.authn_id)
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
 		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
@@ -523,288 +552,42 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Finally, check the user map. */
-	ret = check_usermap(port->hba->usermap, port->user_name,
-						MyClientConnectionInfo.authn_id, false);
-	return (ret == STATUS_OK);
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
 }
 
-static bool
-run_validator_command(Port *port, const char *token)
-{
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = {0};
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*------
-	 * Since popen() is unidirectional, open up a pipe for the other
-	 * direction. Use CLOEXEC to ensure that our write end doesn't
-	 * accidentally get copied into child processes, which would prevent us
-	 * from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe(pipefd);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
-		return false;
-	}
-
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	if (!set_cloexec(wfd))
-	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*----------
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-
-					/*
-					 * TODO: decide how this string should be escaped. The
-					 * role is controlled by the client, so if we don't escape
-					 * it, command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some
-					 * other way. For this proof of concept, just be
-					 * incredibly strict about the characters that are allowed
-					 * in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "r");
-	if (!fh)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("opening pipe to OAuth validator: %m")));
-		goto cleanup;
-	}
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*-----
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
-	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
-	}
-
-	if (command.data)
-		pfree(command.data);
-
-	return success;
-}
-
-static bool
-check_exit(FILE **fh, const char *command)
+static void
+load_validator_library(void)
 {
-	int			rc;
+	OAuthValidatorModuleInit validator_init;
 
-	rc = ClosePipeStream(*fh);
-	*fh = NULL;
-
-	if (rc == -1)
-	{
-		/* pclose() itself failed. */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not close pipe to command \"%s\": %m",
-						command)));
-	}
-	else if (rc != 0)
-	{
-		char	   *reason = wait_result_to_str(rc);
-
-		ereport(COMMERROR,
-				(errmsg("failed to execute command \"%s\": %s",
-						command, reason)));
-
-		pfree(reason);
-	}
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
 
-	return (rc == 0);
-}
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
 
-static bool
-set_cloexec(int fd)
-{
-	int			flags;
-	int			rc;
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
 
-	flags = fcntl(fd, F_GETFD);
-	if (flags == -1)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not get fd flags for child pipe: %m")));
-		return false;
-	}
+	ValidatorCallbacks = (*validator_init) ();
 
-	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
-		return false;
-	}
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
 
-	return true;
+	before_shmem_exit(shutdown_validator_library, 0);
 }
 
-/*
- * XXX This should go away eventually and be replaced with either a proper
- * escape or a different strategy for communication with the validator command.
- */
-static bool
-username_ok_for_shell(const char *username)
+static void
+shutdown_validator_library(int code, Datum arg)
 {
-	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
-	static const char *const allowed =
-		"abcdefghijklmnopqrstuvwxyz"
-		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-		"0123456789-_./:";
-	size_t		span;
-
-	Assert(username && username[0]);	/* should have already been checked */
-
-	span = strspn(username, allowed);
-	if (username[span] != '\0')
-	{
-		ereport(COMMERROR,
-				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
-		return false;
-	}
-
-	return true;
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
 }
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8c1b90e310..2928047a70 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -4641,12 +4641,12 @@ struct config_string ConfigureNamesString[] =
 	},
 
 	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
 			NULL,
 			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
 		},
-		&oauth_validator_command,
+		&OAuthValidatorLibrary,
 		"",
 		NULL, NULL, NULL
 	},
diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..4f24b1aff6 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -31,7 +31,7 @@ OBJS = \
 all: pg_combinebackup
 
 pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/common/Makefile b/src/common/Makefile
index bbb5c3ab11..00e30e6bfe 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 5edab3b25a..5c081abfae 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -3,7 +3,7 @@
  * oauth.h
  *	  Interface to libpq/auth-oauth.c
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/oauth.h
@@ -16,7 +16,32 @@
 #include "libpq/libpq-be.h"
 #include "libpq/sasl.h"
 
-extern char *oauth_validator_command;
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authenticated;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
 /* Implementation */
 extern const pg_be_sasl_mech pg_be_oauth_mech;
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 8fbe742d38..dc54ce7189 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..1f874cd7f2
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,19 @@
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..d9c1d1d577
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..49e04b0afe
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,53 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+# Delete pg_hba.conf from the given node, add a new entry to it
+# and then execute a reload to refresh it.
+# XXX: this is copied from authentication/t/001_password and should be made
+# generic functionality if we end up using it.
+sub reset_pg_hba
+{
+	my $node = shift;
+	my $database = shift;
+	my $role = shift;
+	my $hba_method = shift;
+
+	unlink($node->data_dir . '/pg_hba.conf');
+	# just for testing purposes, use a continuation line
+	$node->append_conf('pg_hba.conf',
+		"local $database $role\\\n $hba_method");
+	$node->reload;
+	return;
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+
+my $port = $webserver->port();
+
+is($port, 18080, "Port is 18080");
+
+$webserver->setup();
+$webserver->run();
+
+$node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->stop;
+
+done_testing();
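The TAP test above drives libpq through RFC 8628's device authorization grant against the stub server: the client posts to the device authorization endpoint, shows the user a verification URI and code, then polls the token endpoint. A minimal Python model of that loop (the stub endpoint functions and canned responses below are hypothetical, mirroring the test server's fixed replies):

```python
import itertools

def device_flow(authorize, token):
    """Sketch of the RFC 8628 device authorization grant, as the test exercises it."""
    grant = authorize()  # POST to the device_authorization_endpoint
    print(f"Visit {grant['verification_uri']} and enter the code: {grant['user_code']}")

    # Poll the token_endpoint until the user approves (grant['interval'] seconds apart).
    while True:
        resp = token(grant["device_code"])
        if "access_token" in resp:
            return resp["access_token"]
        # Any other well-behaved response while waiting is authorization_pending.
        assert resp.get("error") == "authorization_pending"

# Stub endpoints returning canned responses like the Perl test server's.
polls = itertools.chain(
    [{"error": "authorization_pending"}],
    itertools.repeat({"access_token": "9243959234", "token_type": "bearer"}),
)
tok = device_flow(
    authorize=lambda: {"device_code": "postgres", "user_code": "postgresuser",
                       "verification_uri": "https://example.com/", "interval": 0},
    token=lambda code: next(polls),
)
assert tok == "9243959234"
```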
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..c76d0599c5
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,71 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "XXX: validating %s for %s", token, role);
+
+	res->authenticated = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
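The test validator above accepts any token and reports the requested role as the authenticated identity. The result contract it implements can be sketched in Python (field names mirror the C `ValidatorModuleResult`; the logic here is the test module's trust-everything behavior, not what a production validator should do):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidatorResult:
    # Mirrors the C struct: did the bearer token check out, and as whom?
    authenticated: bool
    authn_id: Optional[str] = None

def validate_token(token: str, role: str) -> ValidatorResult:
    # The test module trusts any token and echoes the requested role as the
    # authn_id; a real validator would verify the token with the issuer and
    # derive the identity (and/or allowed roles) from it.
    return ValidatorResult(authenticated=True, authn_id=role)

res = validate_token("9243959234", "alice")
assert res.authenticated and res.authn_id == "alice"
```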
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 44c1bb5afd..b758ad01cc 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2302,6 +2302,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2345,7 +2350,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..3ac90c3d0f
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,158 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+	my $port = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	$self->{'port'} = $port;
+
+	return $self;
+}
+
+sub setup
+{
+	my $self = shift;
+	my $tcp = getprotobyname('tcp');
+
+	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
+		or die "no socket";
+	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
+	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+
+	my $server_thread = threads->create(\&_listen, $self);
+	$server_thread->detach();
+}
+
+sub _listen
+{
+	my $self = shift;
+
	listen($self->{'socket'}, SOMAXCONN) or die "failed to listen: $!";
+
+	while (1)
+	{
+		my $fh;
+		my %request;
+		my $remote = accept($fh, $self->{'socket'});
+		binmode $fh;
+
+		my ($method, $object, $prot) = split(/ /, <$fh>);
+		$request{'method'} = $method;
+		$request{'object'} = $object;
+		chomp($request{'object'});
+
+		local $/ = Socket::CRLF;
+		my $c = 0;
+		while(<$fh>)
+		{
+			chomp;
+			# Headers
+			if (/:/)
+			{
+				my ($field, $value) = split(/:/, $_, 2);
+				$value =~ s/^\s+//;
+				$request{'headers'}{lc $field} = $value;
+			}
+			# POST data
+			elsif (/^$/)
+			{
+				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
+					if defined $request{'headers'}{'content-length'};
+				last;
+			}
+		}
+
+		# Debug printing
+		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
+		# foreach my $h (keys(%{$request{'headers'}}))
+		# {
+		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
+		# }
+		# printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
+
+		if ($request{'object'} eq '/.well-known/openid-configuration')
+		{
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"issuer": "http://localhost:$self->{'port'}",
+				"token_endpoint": "http://localhost:$self->{'port'}/token",
+				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
+				"response_types_supported": ["token"],
+				"subject_types_supported": ["public"],
+				"id_token_signing_alg_values_supported": ["RS256"],
+				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/authorize')
+		{
+			print ": returning device_code\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"device_code": "postgres",
+				"user_code" : "postgresuser",
+				"interval" : 0,
+				"verification_uri" : "https://example.com/",
+				"expires_in": 5
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/token')
+		{
+			print ": returning token\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"access_token": "9243959234",
+				"token_type": "bearer"
+			}
+EOR
+		}
+		else
+		{
+			print ": returning default\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: text/html\r\n";
+			print $fh "\r\n";
+			print $fh "Ok\n";
+		}
+
+		close($fh);
+	}
+}
+
+1;
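The stub above serves a minimal OIDC discovery document from `/.well-known/openid-configuration`; a device-flow client's first step is to pull its endpoints out of that JSON and confirm the provider advertises the device_code grant. A sketch using the same canned document:

```python
import json

# The discovery document the stub server returns (port hardcoded for illustration).
discovery = json.loads("""
{
    "issuer": "http://localhost:18080",
    "token_endpoint": "http://localhost:18080/token",
    "device_authorization_endpoint": "http://localhost:18080/authorize",
    "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
}
""")

# A device-flow client needs both endpoints, and should check that the
# provider actually supports the device_code grant before proceeding.
assert "urn:ietf:params:oauth:grant-type:device_code" in discovery["grant_types_supported"]
authz_ep = discovery["device_authorization_endpoint"]
token_ep = discovery["token_endpoint"]
assert authz_ep == "http://localhost:18080/authorize"
assert token_ep == "http://localhost:18080/token"
```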
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fe65a4222b..2bca506e16 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1657,6 +1657,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -2980,6 +2981,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
-- 
2.34.1

Attachment: v17-0007-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 45755e8461774f49092f79e9e0bd614f68af58ed Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v17 7/9] Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

For iddawc, asynchronous tests still hang, as expected. Bad-interval
tests fail because iddawc apparently doesn't care that the interval is
bad.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- Unsurprisingly, Windows builds fail on the Linux-/BSD-specific backend
  changes. 32-bit builds on Ubuntu fail during testing as well.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
---
 .cirrus.tasks.yml                     |   18 +-
 meson.build                           |   93 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  137 ++
 src/test/python/client/test_client.py |  180 +++
 src/test/python/client/test_oauth.py  | 1766 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   37 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  728 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |    9 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   22 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  |  939 +++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  558 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   53 +
 25 files changed, 5304 insertions(+), 6 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3b5b54df58..f02da9a2b0 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl load_balance python
 
 
 # What files to preserve in case tests fail
@@ -165,7 +165,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -177,6 +177,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -225,6 +226,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -237,6 +239,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -312,8 +315,11 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -676,8 +682,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/meson.build b/meson.build
index 8c83eac03d..0cf45f1764 100644
--- a/meson.build
+++ b/meson.build
@@ -3194,6 +3194,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3355,6 +3358,96 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      pytest = venv_path / 'bin' / 'py.test'
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_base + [
+            '--testgroup', test_group,
+            '--testname', pyt_p,
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..94f3620af3
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,137 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            self._pump_async(conn)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..c4b7f91ff4
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1766 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the value itself may contain "="
+    assert key == b"auth"
+
+    return value
+
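[For reference, the wire format unpacked by get_auth_value() (RFC 7628, Sec. 3.1) is a GS2 header followed by ^A-separated key/value pairs, with an empty kvpair terminating the message. A hypothetical round-trip sketch of that framing, matching the assertions above:]

```python
SEP = b"\x01"


def make_initial_response(token):
    # RFC 7628 Sec. 3.1: gs2 header "n,," (no channel binding, no authzid),
    # one auth=Bearer kvpair, and a terminating empty kvpair.
    return b"n,," + SEP + b"auth=Bearer " + token + SEP + SEP


def parse_auth_value(msg):
    kvpairs = msg.split(SEP)
    assert kvpairs[0] == b"n,,"
    assert kvpairs[-2:] == [b"", b""]  # ends with the empty kvpair
    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value


assert parse_auth_value(make_initial_response(b"t0ken")) == b"Bearer t0ken"
```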
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": "application/json"}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
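[The mock provider's discovery handler above advertises its endpoints under the standard metadata names from OpenID Connect Discovery / RFC 8414. A hypothetical sketch of the client-side half (`endpoints_from_discovery` is illustrative, not part of the patch), pulling out the two endpoints the device flow needs:]

```python
def endpoints_from_discovery(doc):
    # Both keys are standard metadata names (RFC 8414 / RFC 8628); the mock
    # provider builds them as issuer + registered path.
    return doc["device_authorization_endpoint"], doc["token_endpoint"]


# Shape of the document the default discovery handler serves once the
# device and token endpoints have been registered (port is illustrative):
doc = {
    "issuer": "http://localhost:8080",
    "device_authorization_endpoint": "http://localhost:8080/device",
    "token_endpoint": "http://localhost:8080/token",
}
assert endpoints_from_discovery(doc) == (
    "http://localhost:8080/device",
    "http://localhost:8080/token",
)
```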
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept, openid_provider, asynchronous, retries, scope, secret, auth_data_cb
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
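[The interval arithmetic this test enforces comes from RFC 8628, Sec. 3.5, and can be stated as a tiny pure function; a sketch of the rule the client is expected to follow:]

```python
def next_poll_interval(interval, error):
    # RFC 8628 Sec. 3.5: on slow_down the client MUST increase its polling
    # interval by 5 seconds; on authorization_pending it keeps polling at
    # the current interval. Any other error code is fatal.
    if error == "slow_down":
        return interval + 5
    if error == "authorization_pending":
        return interval
    raise RuntimeError(f"device flow failed: {error}")


assert next_poll_interval(1, "authorization_pending") == 1
assert next_poll_interval(1, "slow_down") == 6
```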
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, _ = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we cleaned up after ourselves.
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            alt_patterns(
+                r'failed to parse token error response: field "error" is missing',
+                r"failed to obtain device authorization: \(iddawc error I_ERROR_PARAM\)",
+            ),
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            alt_patterns(
+                r"failed to parse device authorization: Token .* is invalid",
+                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
+            ),
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            alt_patterns(
+                r"failed to parse device authorization: Token .* is invalid",
+                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
+            ),
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the bad-JSON-schema tests below
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    # XXX iddawc doesn't really check for problems in the device authorization
+    # response, leading to this patchwork:
+    if field_name == "verification_uri":
+        error_pattern = alt_patterns(
+            error_pattern,
+            "issuer did not provide a verification URI",
+        )
+    elif field_name == "user_code":
+        error_pattern = alt_patterns(
+            error_pattern,
+            "issuer did not provide a user code",
+        )
+    else:
+        error_pattern = alt_patterns(
+            error_pattern,
+            r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
+        )
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            alt_patterns(
+                r'failed to parse token error response: field "error" is missing',
+                r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
+            ),
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            alt_patterns(
+                r"failed to parse access token response: no content type was provided",
+                r"failed to obtain access token: \(iddawc error I_ERROR\)",
+            ),
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            alt_patterns(
+                r"failed to parse access token response: unexpected content type",
+                r"failed to obtain access token: \(iddawc error I_ERROR\)",
+            ),
+            id="wrong content type",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    # XXX iddawc doesn't provide detailed parse errors here.
+    error_pattern = alt_patterns(
+        error_pattern,
+        r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
+    )
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # XXX iddawc doesn't differentiate...
+    expected_error = alt_patterns(
+        expected_error,
+        r"failed to fetch OpenID discovery document \(iddawc error I_ERROR(_PARAM)?\)",
+    )
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..71e167d7e7
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,37 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+
+    TODO: there are tests here that are probably safe, but until I do a full
+    analysis on which are and which are not, I've made the entire thing opt-in.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..b51ac96c71
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,728 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
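[Editor's note: as a sanity check, the packing above reproduces the well-known wire constants. This standalone sketch mirrors `protocol()` and evaluates the v3.0 version number along with the SSLRequest magic code used later by `tls_handshake()`.]

```python
def protocol(major, minor):
    # Same packing as pq3.protocol(): major version in the high 16 bits,
    # minor version in the low 16 bits.
    return (major << 16) | minor

print(protocol(3, 0))        # 196608, the v3.0 startup protocol
print(protocol(1234, 5679))  # 80877103, the SSLRequest magic code
```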
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
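[Editor's note: the `Startup` struct above describes a layout that can also be hand-rolled with the standard `struct` module. A minimal sketch, not using construct, of a v3 startup packet; the user name `alice` and the helper name `build_startup` are illustrative only.]

```python
import struct

def build_startup(params, proto=(3 << 16)):
    # Key/value pairs as NUL-terminated strings, closed by an empty string.
    payload = b""
    for k, v in params.items():
        payload += k.encode() + b"\x00" + v.encode() + b"\x00"
    payload += b"\x00"
    # The length field covers the whole packet, including the two int32
    # header fields (length and protocol) themselves.
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice"})
```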
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
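[Editor's note: the wire layout enforced by `SASLInitialResponse` above can be hand-built for comparison: mechanism name NUL-terminated, then a signed int32 length (-1 means "no initial response"), then the data. The OAUTHBEARER payload below is purely illustrative.]

```python
import struct

def sasl_initial_response(mechanism, data=None):
    # Mechanism name is NUL-terminated; a length of -1 signals that no
    # initial client response follows.
    if data is None:
        return mechanism + b"\x00" + struct.pack("!i", -1)
    return mechanism + b"\x00" + struct.pack("!i", len(data)) + data

msg = sasl_initial_response(b"OAUTHBEARER", b"n,,\x01auth=Bearer x\x01\x01")
```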
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
+
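[Editor's note: regular-protocol framing, as encoded by the `Pq3` struct above, can be illustrated without construct. Note that the length field counts itself but not the leading type byte; the helper name `frame` is illustrative.]

```python
import struct

def frame(msg_type, payload):
    # Regular v3 messages: 1-byte type, then an int32 length that includes
    # itself (but not the type byte), then the payload.
    return msg_type + struct.pack("!i", len(payload) + 4) + payload

# A simple-query packet: type 'Q', NUL-terminated query string.
pkt = frame(b"Q", b"SELECT 1;\x00")
```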
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
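[Editor's note: the translation map built above can be checked in isolation. This sketch repeats the same construction and shows its effect on mixed printable and binary bytes.]

```python
def hexdump_map():
    # Collect every unprintable ASCII byte plus all bytes >= 128, and map
    # them all to '.'; printable ASCII passes through untouched.
    unprintable = b"".join(
        bytes([i]) for i in range(128) if not chr(i).isprintable()
    )
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(unprintable, b"." * len(unprintable))

table = hexdump_map()
print(b"SELECT 1\x00\xc3\xa9".translate(table).decode("ascii"))  # SELECT 1...
```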
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream. All attributes
+        not provided by the _DebugStream are delegated to the wrapped stream.
+        out is the text stream to which hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request a switch to TLS via the special SSLRequest protocol version.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
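[Editor's note: the SSLRequest negotiation performed at the top of `tls_handshake()` can be sketched independently of the TLS machinery. The socketpair peer below fabricates the server's one-byte answer; `request_ssl` is a hypothetical helper name.]

```python
import socket
import struct

# The 8-byte SSLRequest packet: length (8) plus the magic protocol code.
SSL_REQUEST = struct.pack("!ii", 8, (1234 << 16) | 5679)

def request_ssl(sock):
    # Ask the server to switch to TLS; it answers with a single byte.
    sock.sendall(SSL_REQUEST)
    resp = sock.recv(1)
    if resp == b"S":
        return True   # proceed with the TLS handshake on this socket
    if resp == b"N":
        return False  # server refuses SSL; continue in plaintext or bail
    raise RuntimeError(f"unexpected response {resp!r} during TLS startup")

# Demo with a fabricated peer: one end plays a server that accepts SSL.
client, server = socket.socketpair()
server.sendall(b"S")
accepted = request_ssl(client)
```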
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..57ba1ced94
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,9 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+construct~=2.10.61
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..7946c971cb
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,22 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+if not oauth.found()
+  subdir_done()
+endif
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..f8e6c1651b
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authenticated = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authenticated = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..96f5e1e1b5
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,939 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
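[Editor's note: the `prepend_file()` pattern above can be exercised on a scratch file. The context manager is repeated verbatim here so the snippet is self-contained; the file contents are illustrative.]

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def prepend_file(path, lines):
    # Back up the original, write the new lines followed by the original
    # content, and restore the backup when the context exits.
    bak = path + ".bak"
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)

with tempfile.TemporaryDirectory() as d:
    conf = os.path.join(d, "pg_hba.conf")
    with open(conf, "w") as f:
        f.write("local all all trust\n")

    with prepend_file(conf, ["# injected line\n"]):
        inside = open(conf).read()

    restored = open(conf).read()
```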
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = (
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    )
+    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The size of the
+    generated token, in characters, may be specified; if unset, a small
+    16-character token will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends a startup packet for the given user (oauth_ctx.authz_user by
+    default) and checks that the server responds by advertising exactly one
+    SASL mechanism: OAUTHBEARER.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Instead of a bearer token, the initial response's auth field
+    may be specified explicitly, to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+
+            # Reload once, after all of the GUCs have been set.
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values, then reload once.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+
+        c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
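As an aside, the expected strings in these tests pin down the `_DebugStream` hexdump format: a direction prefix, a four-digit hex offset, a tab, hex bytes padded to 16 columns, another tab, then the printable-ASCII rendering. A standalone sketch of that line formatting (my reconstruction from the assertions above, not the actual pq3 implementation):

```python
def hexdump_line(prefix, offset, chunk, width=16):
    """Format one hexdump line in the style asserted by the tests above:
    '<prefix> 0000:\\t<hex bytes padded to width columns>\\t<ascii>\\n'."""
    hex_part = " ".join(f"{b:02x}" for b in chunk).ljust(width * 3 - 1)
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{prefix} {offset:04x}:\t{hex_part}\t{text}\n"

print(hexdump_line("<", 0, b"abcde"), end="")
```

This reproduces the padded five-byte lines seen in `test_DebugStream_read` and friends.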
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..e0c0e0568d
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,558 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
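For readers without the pq3 module at hand, the v3 startup framing these cases exercise can be reproduced with nothing but the standard `struct` module. This is a sketch cross-checked against the "auto-serialization of dict parameters" vector above, not the real `pq3.Startup` encoder:

```python
import struct

def build_startup_v3(params):
    """Build a protocol-3.0 startup packet: int32 length (self-inclusive),
    int32 version (major in the high 16 bits), NUL-terminated key/value
    pairs, and a trailing NUL terminator."""
    payload = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in params.items()
    ) + b"\x00"
    proto = (3 << 16) | 0  # 0x00030000
    return struct.pack("!ii", 8 + len(payload), proto) + payload
```

For example, `build_startup_v3({"user": "jsmith", "database": "postgres"})` matches the 0x27-byte packet in the last test case.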
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
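The DataRow payload layout implied by these vectors is an int16 column count followed by, for each column, an int32 length (-1 signaling NULL) and that many bytes. A minimal decoder sketch, independent of pq3, for checking the cases above:

```python
import struct

def parse_datarow_payload(raw):
    """Decode a DataRow payload: int16 column count, then per column an
    int32 length (-1 means NULL) followed by that many bytes of data."""
    (count,) = struct.unpack_from("!H", raw, 0)
    offset, columns = 2, []
    for _ in range(count):
        (length,) = struct.unpack_from("!i", raw, offset)
        offset += 4
        if length < 0:
            columns.append(None)  # NULL column
        else:
            columns.append(raw[offset : offset + length])
            offset += length
    return columns
```

Running it on the "null columns" vector yields `[None, None]`, matching the test's expectation.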
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
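The SASLInitialResponse wire format under test is the NUL-terminated mechanism name followed by an int32 response length (-1 when no initial response is sent) and the response bytes. A standalone builder sketch mirroring the well-formed cases above (it intentionally omits the over/underflow cases, which require an explicit `len` override):

```python
import struct

def build_sasl_initial_response(mech, data=None):
    """NUL-terminated mechanism name, then an int32 initial-response length
    (-1 if absent) followed by the response bytes themselves."""
    if data is None:
        return mech + b"\x00" + struct.pack("!i", -1)
    return mech + b"\x00" + struct.pack("!i", len(data)) + data
```

For example, `build_sasl_initial_response(b"EXTERNAL")` produces the `-1`-length form used when the client sends no initial response.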
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        ("PGUSER", pq3.pguser, getpass.getuser()),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
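The `Plaintext` record definition above has a fixed 5-byte header (one content-type byte, a two-byte legacy version, a two-byte length) followed by `length` bytes of fragment, so it can be decoded with plain `struct` as a cross-check of the construct declaration. A hedged sketch, using an illustrative handshake record rather than real traffic:

```python
import struct

def parse_tls_record(raw):
    """Split one TLSPlaintext record off the front of `raw`: type (1 byte),
    legacy_record_version (2 bytes), length (2 bytes), then the fragment.
    Returns (type, version, fragment, remaining_bytes)."""
    ctype, version, length = struct.unpack("!BHH", raw[:5])
    fragment = raw[5 : 5 + length]
    return ctype, version, fragment, raw[5 + length :]

# Content type 22 is 'handshake' in the ContentType enum above.
record = b"\x16\x03\x01\x00\x05hello"
```

Successive calls on a concatenated byte stream would peel off one record at a time via the returned remainder.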
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..a8fd530f46
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,53 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+pip = os.path.join(args.venv_path, 'bin', 'pip')
+run(pip, 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We always need pytest and
+# pytest-tap, regardless of what else the test needs.
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

Attachment: v17-0008-XXX-temporary-patches-to-build-and-test.patch (application/octet-stream)
From 0f9f8848568d650a1bc6519d16724bfa57075f02 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 20 Feb 2024 11:35:29 -0800
Subject: [PATCH v17 8/9] XXX temporary patches to build and test

- the new pg_combinebackup utility uses JSON in the frontend without
  0001; has something changed?
- construct 2.10.70 has some incompatibilities with the current tests
- temporarily skip the exit check (from Daniel Gustafsson); this needs
  to be turned into an exception for curl rather than a plain exit call
---
 src/bin/pg_combinebackup/Makefile    | 6 ++++--
 src/bin/pg_combinebackup/meson.build | 3 ++-
 src/bin/pg_verifybackup/Makefile     | 2 +-
 src/interfaces/libpq/Makefile        | 2 +-
 src/test/python/requirements.txt     | 4 +++-
 5 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index 4f24b1aff6..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,8 +32,8 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 30dbbaa6cf..926f63f365 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,8 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  # XXX linking against libpq isn't good, but how was JSON working?
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 5dab88f095..7603c54340 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -126,7 +126,7 @@ libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter aix solaris,$(PORTNAME)))
 	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
-		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
+		echo 'libpq must not be calling any function which invokes exit'; \
 	fi
 endif
 endif
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
index 57ba1ced94..0dfcffb83e 100644
--- a/src/test/python/requirements.txt
+++ b/src/test/python/requirements.txt
@@ -1,7 +1,9 @@
 black
 # cryptography 35.x and later add many platform/toolchain restrictions, beware
 cryptography~=3.4.8
-construct~=2.10.61
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
 isort~=5.6
 # TODO: update to psycopg[c] 3.1
 psycopg2~=2.9.7
-- 
2.34.1

Attachment: v17-0005-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 29d7e3cbeda15cb618c4dd7c364c95924b208a23 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v17 5/9] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On success, the command may then exit with a zero status code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise for the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
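The contract above can be sketched as a small external validator. This is a
hypothetical illustration, not part of the patch: the token value, identity
string, and validation logic are placeholders, and a real validator must
verify the token cryptographically or by presenting it to the issuer.

```python
#!/usr/bin/env python3
# Hypothetical oauth_validator_command sketch, invoked as "validator.py %f %r".
# The accepted token and the identity printed below are placeholders only.
import os
import sys

def read_token(fd):
    """Step 1: drain the bearer token from the server's pipe before any output."""
    chunks = []
    while True:
        chunk = os.read(fd, 4096)
        if not chunk:          # server closed its end: token is complete
            break
        chunks.append(chunk)
    return b"".join(chunks).decode("ascii")

def validate_token(token):
    """Step 2: issuer-specific validation, stubbed with a fixed value here.
    A real implementation would check the signature/claims or call the
    issuer's introspection endpoint (and could inspect authz claims too)."""
    return token == "test-token"

def main(fd, role):
    token = read_token(fd)
    if not validate_token(token):
        return 1               # untrusted token: exit non-zero, print nothing
    print("alice@example.org")  # step 3a: authenticated identity on stdout
    return 0

if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(main(int(sys.argv[1]), sys.argv[2]))
```

Stderr output from such a script would land verbatim in the server log, per
step 4 above.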

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
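Putting the options together, a hypothetical configuration might look like
the following (the issuer, scope, and validator path are examples only, not
defaults shipped by the patch):

```
# pg_hba.conf
host  all  all  samenet  oauth  issuer="https://accounts.google.com" scope="openid email"

# postgresql.conf
oauth_validator_command = '/usr/local/bin/validate_token %f %r'
```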

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.
---
 src/backend/libpq/Makefile          |   1 +
 src/backend/libpq/auth-oauth.c      | 810 ++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c       |  10 +-
 src/backend/libpq/auth-scram.c      |   4 +-
 src/backend/libpq/auth.c            |  26 +-
 src/backend/libpq/hba.c             |  31 +-
 src/backend/libpq/meson.build       |   1 +
 src/backend/utils/misc/guc_tables.c |  12 +
 src/include/libpq/auth.h            |  17 +
 src/include/libpq/hba.h             |   6 +-
 src/include/libpq/oauth.h           |  24 +
 src/include/libpq/sasl.h            |  11 +
 src/tools/pgindent/typedefs.list    |   1 +
 13 files changed, 922 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..e74c008161
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,810 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char	   *oauth_validator_command;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool set_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character %s.",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad
+		 * chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+					 "{ "
+					 "\"status\": \"invalid_token\", "
+					 "\"openid-configuration\": \"%s/.well-known/openid-configuration\", "
+					 "\"scope\": \"%s\" "
+					 "}",
+					 ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char *const b64_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*-----
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
+	 * it's pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information
+	 * about the sensitive Bearer token back to the client; log at COMMERROR
+	 * instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!MyClientConnectionInfo.authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name,
+						MyClientConnectionInfo.authn_id, false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = {0};
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*------
+	 * Since popen() is unidirectional, open up a pipe for the other
+	 * direction. Use CLOEXEC to ensure that our write end doesn't
+	 * accidentally get copied into child processes, which would prevent us
+	 * from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open the potential of process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe(pipefd);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	if (!set_cloexec(wfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*----------
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+
+					/*
+					 * TODO: decide how this string should be escaped. The
+					 * role is controlled by the client, so if we don't escape
+					 * it, command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some
+					 * other way. For this proof of concept, just be
+					 * incredibly strict about the characters that are allowed
+					 * in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "r");
+	if (!fh)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not open pipe to OAuth validator: %m")));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*-----
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int			rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char	   *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+set_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char *const allowed =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-_./:";
+	size_t		span;
+
+	Assert(username && username[0]);	/* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 2abb1a9b3a..aa6b5020dc 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -118,7 +118,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 9bbdc4beb0..db7c77da86 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -47,7 +48,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -203,22 +203,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -307,6 +291,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -342,7 +329,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -629,6 +616,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 8004d102ad..03c3f038c7 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -119,7 +119,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1748,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2067,8 +2070,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2451,6 +2455,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 527a2b2734..8c1b90e310 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -47,6 +47,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4639,6 +4640,17 @@ struct config_string ConfigureNamesString[] =
 		check_debug_io_direct, assign_debug_io_direct, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..5edab3b25a
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6234fe66f1..fe65a4222b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3543,6 +3543,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

Attachment: v17-0009-WIP-Python-OAuth-provider-implementation.patch (application/octet-stream)
From de8f81bd7d792e7739381d15bf2afa000729476f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 26 Feb 2024 16:24:32 -0800
Subject: [PATCH v17 9/9] WIP: Python OAuth provider implementation

---
 src/test/modules/oauth_validator/Makefile     |   2 +
 src/test/modules/oauth_validator/meson.build  |   3 +
 .../modules/oauth_validator/t/001_server.pl   |  12 +-
 .../modules/oauth_validator/t/oauth_server.py |  92 ++++++++++++
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 141 +++---------------
 5 files changed, 125 insertions(+), 125 deletions(-)
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py

diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 1f874cd7f2..e93e01455a 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -1,3 +1,5 @@
+export PYTHON
+
 MODULES = validator
 PGFILEDESC = "validator - test OAuth validator module"
 
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index d9c1d1d577..3feba6f826 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -29,5 +29,8 @@ tests += {
     'tests': [
       't/001_server.pl',
     ],
+    'env': {
+      'PYTHON': python.path(),
+    },
   },
 }
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 49e04b0afe..bbfa69e442 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -34,20 +34,16 @@ $node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n"
 $node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
 $node->start;
 
-reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
-
-my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
 
 my $port = $webserver->port();
-
-is($port, 18080, "Port is 18080");
-
-$webserver->setup();
-$webserver->run();
+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:' . $port . '" scope="openid postgres"');
 
 $node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
 				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
 
+$webserver->stop();
 $node->stop;
 
 done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..60d2f68f29
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,92 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+from typing import TypeAlias
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject: TypeAlias = dict[str, object]
+
+    def do_GET(self):
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        return {
+            "issuer": f"http://localhost:{port}",
+            "token_endpoint": f"http://localhost:{port}/token",
+            "device_authorization_endpoint": f"http://localhost:{port}/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": "https://example.com/",
+            "expires-in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        return {
+            "access_token": "9243959234",
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index 3ac90c3d0f..d96733f531 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -5,6 +5,7 @@ package PostgreSQL::Test::OAuthServer;
 use warnings;
 use strict;
 use threads;
+use Scalar::Util;
 use Socket;
 use IO::Select;
 
@@ -13,27 +14,13 @@ local *server_socket;
 sub new
 {
 	my $class = shift;
-	my $port = shift;
 
 	my $self = {};
 	bless($self, $class);
 
-	$self->{'port'} = $port;
-
 	return $self;
 }
 
-sub setup
-{
-	my $self = shift;
-	my $tcp = getprotobyname('tcp');
-
-	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
-		or die "no socket";
-	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
-	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
-}
-
 sub port
 {
 	my $self = shift;
@@ -44,115 +31,35 @@ sub port
 sub run
 {
 	my $self = shift;
+	my $port;
 
-	my $server_thread = threads->create(\&_listen, $self);
-	$server_thread->detach();
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
 }
 
-sub _listen
+sub stop
 {
 	my $self = shift;
 
-	listen($self->{'socket'}, SOMAXCONN) or die "fail to listen: $!";
-
-	while (1)
-	{
-		my $fh;
-		my %request;
-		my $remote = accept($fh, $self->{'socket'});
-		binmode $fh;
-
-		my ($method, $object, $prot) = split(/ /, <$fh>);
-		$request{'method'} = $method;
-		$request{'object'} = $object;
-		chomp($request{'object'});
-
-		local $/ = Socket::CRLF;
-		my $c = 0;
-		while(<$fh>)
-		{
-			chomp;
-			# Headers
-			if (/:/)
-			{
-				my ($field, $value) = split(/:/, $_, 2);
-				$value =~ s/^\s+//;
-				$request{'headers'}{lc $field} = $value;
-			}
-			# POST data
-			elsif (/^$/)
-			{
-				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
-					if defined $request{'headers'}{'content-length'};
-				last;
-			}
-		}
-
-		# Debug printing
-		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
-		# foreach my $h (keys(%{$request{'headers'}}))
-		#{
-		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
-		#}
-		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
-
-		if ($request{'object'} eq '/.well-known/openid-configuration')
-		{
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"issuer": "http://localhost:$self->{'port'}",
-				"token_endpoint": "http://localhost:$self->{'port'}/token",
-				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
-				"response_types_supported": ["token"],
-				"subject_types_supported": ["public"],
-				"id_token_signing_alg_values_supported": ["RS256"],
-				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/authorize')
-		{
-			print ": returning device_code\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"device_code": "postgres",
-				"user_code" : "postgresuser",
-				"interval" : 0,
-				"verification_uri" : "https://example.com/",
-				"expires-in": 5
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/token')
-		{
-			print ": returning token\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"access_token": "9243959234",
-				"token_type": "bearer"
-			}
-EOR
-		}
-		else
-		{
-			print ": returning default\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: text/html\r\n";
-			print $fh "\r\n";
-			print $fh "Ok\n";
-		}
-
-		close($fh);
-	}
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
 }
 
 1;
-- 
2.34.1

#90 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#89)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 28 Feb 2024, at 15:05, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

[Trying again, with all patches unzipped and the CC list temporarily
removed to avoid flooding people's inboxes. Original message follows.]

On Fri, Feb 23, 2024 at 5:01 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

The
patchset is now carrying a lot of squash-cruft, and I plan to flatten
it in the next version.

This is done in v17, which is also now based on the two patches pulled
out by Daniel in [1]. Besides the squashes, which make up most of the
range-diff, I've fixed a call to strncasecmp() which is not available
on Windows.

Daniel and I discussed trying a Python version of the test server,
since the standard library there should give us more goodies to work
with. A proof of concept is in 0009. I think the big question I have
for it is, how would we communicate what we want the server to do for
the test? (We could perhaps switch on magic values of the client ID?)
In the end I'd like to be testing close to 100% of the failure modes,
and that's likely to mean a lot of back-and-forth if the server
implementation isn't in the Perl process.
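As a strawman for the magic-client-ID idea mentioned above, the test server's /token handler could dispatch on the client ID it receives. This is only an illustrative sketch; the client IDs and response shapes below are invented, not part of the posted patches:

```python
# Hypothetical sketch of switching test-server behavior on "magic" values of
# the OAuth client_id.  All IDs and failure responses here are invented.

# Map of magic client IDs to canned failure responses.
FAILURE_MODES = {
    "test-expired-token": {"error": "expired_token"},
    "test-slow-down": {"error": "slow_down", "interval": 5},
}


def token_response(client_id):
    """Return the JSON body the /token endpoint would send for this client."""
    if client_id in FAILURE_MODES:
        return FAILURE_MODES[client_id]

    # Default: the happy-path token used elsewhere in the tests.
    return {"access_token": "9243959234", "token_type": "bearer"}
```

The Perl side would then only need to pass the right oauth_client_id per test case, instead of talking to the server process directly.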

Thanks for the new version. I'm digesting the test patches, but for now I have
a few smaller comments:

+#define ALLOC(size) malloc(size)
I wonder if we should use pg_malloc_extended(size, MCXT_ALLOC_NO_OOM) instead
to self document the code. We clearly don't want feature-parity with server-
side palloc here. I know we use malloc in similar ALLOC macros so it's not
unique in that regard, but maybe?

+#ifdef FRONTEND
+               destroyPQExpBuffer(lex->errormsg);
+#else
+               pfree(lex->errormsg->data);
+               pfree(lex->errormsg);
+#endif
Wouldn't it be nicer if we abstracted this into a destroyStrVal function to a)
avoid the ifdefs and b) make it more like the rest of the new API?  While it's
only used in two places (close to each other) it's a shame to let the
underlying API bleed through the abstraction.

+ CURLM *curlm; /* top-level multi handle for cURL operations */
Nitpick, but curl is not capitalized cURL anymore (for some value of "anymore"
since it changed in 2016 [0]). I do wonder if we should consistently write
"libcurl" as well since we don't use curl but libcurl.

+   PQExpBufferData     work_data;  /* scratch buffer for general use (remember
+                                      to clear out prior contents first!) */
This seems like asking for subtle bugs due to uncleared buffers bleeding into
another operation (especially since we are writing this data across the wire).
How about having an array the size of OAuthStep of unallocated buffers where
each step uses its own?  Storing the content of each step could also be useful
for debugging.  Looking at the state machine here it's not an obvious change
but also not impossible.

+ * TODO: This disables DNS resolution timeouts unless libcurl has been
+ * compiled against alternative resolution support. We should check that.
curl_version_info() can be used to check for c-ares support.

+ * so you don't have to write out the error handling every time. They assume
+ * that they're embedded in a function returning bool, however.
It feels a bit iffy to encode the return type in the macro; we can use the same
trick that DISABLE_SIGPIPE employs where a failaction is passed in.

+ if (!strcmp(name, field->name))
Project style is to test for (strcmp(x,y) == 0) rather than (!strcmp()) to
improve readability.

+ libpq_append_conn_error(conn, "out of memory");
While not introduced in this patch, it's not an ideal pattern to report "out of
memory" errors via a function which may allocate memory.

+  appendPQExpBufferStr(&conn->errorMessage,
+           libpq_gettext("server's error message contained an embedded NULL"));
We should maybe add ", discarding" or something similar after this string to
indicate that there was an actual error which has been thrown away, the error
wasn't that the server passed an embedded NULL.

+#ifdef USE_OAUTH
+       else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+               !selected_mechanism)
I wonder if we instead should move the guards inside the statement and error
out with "not built with OAuth support" or something similar like how we do
with TLS and other optional components?

+   errdetail("Comma expected, but found character %s.",
+             sanitize_char(*p))));
The %s formatter should be wrapped like '%s' to indicate that the message part
is the character in question (and we can then reuse the translation since the
error message already exists for SCRAM).

+       temp = curl_slist_append(temp, "authorization_code");
+       if (!temp)
+           oom = true;
+
+       temp = curl_slist_append(temp, "implicit");
While not a bug per se, it reads a bit odd to call another operation that can
allocate memory when the oom flag has been set.  I think we can move some
things around a little to make it clearer.

The attached diff contains some (most?) of the above as a patch on top of your
v17, but as a .txt to keep the CFBot from munging on it.

--
Daniel Gustafsson

Attachments:

v17_review_suggestions.txt (text/plain)
commit 62b6bda852d09458ffcd5854b7810b3b316e2b1f
Author: Daniel Gustafsson <daniel@yesql.se>
Date:   Wed Feb 28 18:26:03 2024 +0100

    Suggested changes

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 765a18b9b2..f5cf271566 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -204,7 +204,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				ereport(ERROR,
 						(errcode(ERRCODE_PROTOCOL_VIOLATION),
 						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Comma expected, but found character %s.",
+						 errdetail("Comma expected, but found character \"%s\".",
 								   sanitize_char(*p))));
 			p++;
 			break;
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 479310f598..2d1f30353a 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -42,6 +42,7 @@
 #define appendStrValChar	appendPQExpBufferChar
 #define createStrVal		createPQExpBuffer
 #define resetStrVal			resetPQExpBuffer
+#define destroyStrVal		destroyPQExpBuffer
 
 #else							/* !FRONTEND */
 
@@ -53,6 +54,7 @@
 #define appendStrValChar	appendStringInfoChar
 #define createStrVal		makeStringInfo
 #define resetStrVal			resetStringInfo
+#define destroyStrVal		destroyStringInfo
 
 #endif
 
@@ -223,23 +225,11 @@ freeJsonLexContext(JsonLexContext *lex)
 	static const JsonLexContext empty = {0};
 
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-	{
-#ifdef FRONTEND
-		destroyPQExpBuffer(lex->strval);
-#else
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-#endif
-	}
+		destroyStrVal(lex->strval);
+
 	if (lex->errormsg)
-	{
-#ifdef FRONTEND
-		destroyPQExpBuffer(lex->errormsg);
-#else
-		pfree(lex->errormsg->data);
-		pfree(lex->errormsg);
-#endif
-	}
+		destroyStrVal(lex->errormsg);
+
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
 	else
diff --git a/src/common/stringinfo.c b/src/common/stringinfo.c
index c61d5c58f3..09419f6042 100644
--- a/src/common/stringinfo.c
+++ b/src/common/stringinfo.c
@@ -350,3 +350,10 @@ enlargeStringInfo(StringInfo str, int needed)
 
 	str->maxlen = newlen;
 }
+
+void
+destroyStringInfo(StringInfo str)
+{
+	pfree(str->data);
+	pfree(str);
+}
diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h
index 2cd636b01c..64ec6419af 100644
--- a/src/include/lib/stringinfo.h
+++ b/src/include/lib/stringinfo.h
@@ -233,4 +233,6 @@ extern void appendBinaryStringInfoNT(StringInfo str,
  */
 extern void enlargeStringInfo(StringInfo str, int needed);
 
+
+extern void destroyStringInfo(StringInfo str);
 #endif							/* STRINGINFO_H */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 3e20ba5818..7a27198d66 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -293,40 +293,39 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 
 /*
  * Macros for getting and setting state for the connection's two cURL handles,
- * so you don't have to write out the error handling every time. They assume
- * that they're embedded in a function returning bool, however.
+ * so you don't have to write out the error handling every time.
  */
 
-#define CHECK_MSETOPT(ACTX, OPT, VAL) \
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
 	do { \
 		struct async_ctx *_actx = (ACTX); \
 		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
 		if (_setopterr) { \
 			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
 					   #OPT, curl_multi_strerror(_setopterr)); \
-			return false; \
+			FAILACTION; \
 		} \
 	} while (0)
 
-#define CHECK_SETOPT(ACTX, OPT, VAL) \
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
 	do { \
 		struct async_ctx *_actx = (ACTX); \
 		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
 		if (_setopterr) { \
 			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
 					   #OPT, curl_easy_strerror(_setopterr)); \
-			return false; \
+			FAILACTION; \
 		} \
 	} while (0)
 
-#define CHECK_GETINFO(ACTX, INFO, OUT) \
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
 	do { \
 		struct async_ctx *_actx = (ACTX); \
 		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
 		if (_getinfoerr) { \
 			actx_error(_actx, "failed to get %s from OAuth response: %s",\
 					   #INFO, curl_easy_strerror(_getinfoerr)); \
-			return false; \
+			FAILACTION; \
 		} \
 	} while (0)
 
@@ -450,7 +449,7 @@ oauth_json_object_field_start(void *state, char *name, bool isnull)
 
 		while (field->name)
 		{
-			if (!strcmp(name, field->name))
+			if (strcmp(name, field->name) == 0)
 			{
 				ctx->active = field;
 				break;
@@ -616,7 +615,7 @@ parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
 	bool		success = false;
 
 	/* Make sure the server thinks it's given us JSON. */
-	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type);
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
 
 	if (!content_type)
 	{
@@ -1109,6 +1108,8 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 static bool
 setup_curl_handles(struct async_ctx *actx)
 {
+	curl_version_info_data	*curl_info;
+
 	/*
 	 * Create our multi handle. This encapsulates the entire conversation with
 	 * cURL for this connection.
@@ -1121,14 +1122,19 @@ setup_curl_handles(struct async_ctx *actx)
 		return false;
 	}
 
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
 	/*
 	 * The multi handle tells us what to wait on using two callbacks. These
 	 * will manipulate actx->mux as needed.
 	 */
-	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket);
-	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx);
-	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer);
-	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
 
 	/*
 	 * Set up an easy handle. All of our requests are made serially, so we
@@ -1145,25 +1151,26 @@ setup_curl_handles(struct async_ctx *actx)
 	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
 	 * to handle the possibility of SIGPIPE ourselves.
 	 *
-	 * TODO: This disables DNS resolution timeouts unless libcurl has been
-	 * compiled against alternative resolution support. We should check that.
-	 *
 	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
 	 * CURLOPT_SOCKOPTFUNCTION maybe...
 	 */
-	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L);
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
 
 	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
-	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L);
-	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err);
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
 
 	/* TODO */
-	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr, return false);
 
 	/*
 	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
 	 */
-	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
 
 	/*
 	 * Suppress the Accept header to make our request as minimal as possible.
@@ -1172,7 +1179,7 @@ setup_curl_handles(struct async_ctx *actx)
 	 * what comes back anyway.)
 	 */
 	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
-	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers);
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
 }
@@ -1213,8 +1220,8 @@ start_request(struct async_ctx *actx)
 	int			running;
 
 	resetPQExpBuffer(&actx->work_data);
-	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data);
-	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
 
 	err = curl_multi_add_handle(actx->curlm, actx->curl);
 	if (err)
@@ -1339,8 +1346,8 @@ drive_request(struct async_ctx *actx)
 static bool
 start_discovery(struct async_ctx *actx, const char *discovery_uri)
 {
-	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L);
-	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri);
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
 
 	return start_request(actx);
 }
@@ -1363,7 +1370,7 @@ finish_discovery(struct async_ctx *actx)
 	 * validation into question), or non-authoritative responses, or any other
 	 * complications.
 	 */
-	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	if (response_code != 200)
 	{
@@ -1387,20 +1394,17 @@ finish_discovery(struct async_ctx *actx)
 		 * Per Section 3, the default is ["authorization_code", "implicit"].
 		 */
 		struct curl_slist *temp = actx->provider.grant_types_supported;
-		bool		oom = false;
 
 		temp = curl_slist_append(temp, "authorization_code");
-		if (!temp)
-			oom = true;
-
-		temp = curl_slist_append(temp, "implicit");
-		if (!temp)
-			oom = true;
-
-		if (oom)
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
 		{
 			actx_error(actx, "out of memory");
 			return false;
 		}
 
 		actx->provider.grant_types_supported = temp;
@@ -1491,8 +1495,8 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	/* TODO check for broken buffer */
 
 	/* Make our request. */
-	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri);
-	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data);
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
 
 	if (conn->oauth_client_secret)
 	{
@@ -1506,12 +1510,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 		 *
 		 * TODO: should we omit client_id from the body in this case?
 		 */
-		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
-		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id);
-		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret);
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
 	}
 	else
-		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE);
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
 
 	return start_request(actx);
 }
@@ -1521,7 +1525,7 @@ finish_device_authz(struct async_ctx *actx)
 {
 	long		response_code;
 
-	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
 	 * The device authorization endpoint uses the same error response as the
@@ -1593,8 +1597,8 @@ start_token_request(struct async_ctx *actx, PGconn *conn)
 	/* TODO check for broken buffer */
 
 	/* Make our request. */
-	CHECK_SETOPT(actx, CURLOPT_URL, token_uri);
-	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data);
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
 
 	if (conn->oauth_client_secret)
 	{
@@ -1608,16 +1612,16 @@ start_token_request(struct async_ctx *actx, PGconn *conn)
 		 *
 		 * TODO: should we omit client_id from the body in this case?
 		 */
-		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
-		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id);
-		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret);
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
 	}
 	else
-		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE);
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
 
 	resetPQExpBuffer(work_buffer);
-	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data);
-	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
 
 	return start_request(actx);
 }
@@ -1627,7 +1631,7 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 {
 	long		response_code;
 
-	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
 	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
@@ -1889,7 +1893,7 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				 * A slow_down error requires us to permanently increase our
 				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
 				 */
-				if (!strcmp(err->error, "slow_down"))
+				if (strcmp(err->error, "slow_down") == 0)
 				{
 					actx->authz.interval += 5;	/* TODO check for overflow? */
 				}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 2a35c7438c..66ee8ff076 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -49,7 +49,7 @@ oauth_init(PGconn *conn, const char *password,
 	 * error.
 	 */
 	Assert(sasl_mechanism != NULL);
-	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
 
 	state = calloc(1, sizeof(*state));
 	if (!state)
@@ -156,17 +156,17 @@ oauth_json_object_field_start(void *state, char *name, bool isnull)
 
 	if (ctx->nested == 1)
 	{
-		if (!strcmp(name, ERROR_STATUS_FIELD))
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
 		{
 			ctx->target_field_name = ERROR_STATUS_FIELD;
 			ctx->target_field = &ctx->status;
 		}
-		else if (!strcmp(name, ERROR_SCOPE_FIELD))
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
 		{
 			ctx->target_field_name = ERROR_SCOPE_FIELD;
 			ctx->target_field = &ctx->scope;
 		}
-		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
 		{
 			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
 			ctx->target_field = &ctx->discovery_uri;
@@ -243,7 +243,7 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 	if (strlen(msg) != msglen)
 	{
 		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext("server's error message contained an embedded NULL"));
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
 		return false;
 	}
 
@@ -316,7 +316,7 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 		return false;
 	}
 
-	if (!strcmp(ctx.status, "invalid_token"))
+	if (strcmp(ctx.status, "invalid_token") == 0)
 	{
 		/*
 		 * invalid_token is the only error code we'll automatically retry for,
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 9e084dd1c7..6e3538c9fd 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -525,15 +525,18 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
-#ifdef USE_OAUTH
 		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
 				 !selected_mechanism)
 		{
+#ifdef USE_OAUTH
 			selected_mechanism = OAUTHBEARER_NAME;
 			conn->sasl = &pg_oauth_mech;
 			conn->password_needed = false;
-		}
+#else
+			libpq_append_conn_error(conn, "OAuth is required, but client does not support it");
+			goto error;
 #endif
+		}
 	}
 
 	if (!selected_mechanism)
#91Andrew Dunstan
andrew@dunslane.net
In reply to: Jacob Champion (#89)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 2024-02-28 We 09:05, Jacob Champion wrote:

Daniel and I discussed trying a Python version of the test server,
since the standard library there should give us more goodies to work
with. A proof of concept is in 0009. I think the big question I have
for it is, how would we communicate what we want the server to do for
the test? (We could perhaps switch on magic values of the client ID?)
In the end I'd like to be testing close to 100% of the failure modes,
and that's likely to mean a lot of back-and-forth if the server
implementation isn't in the Perl process.

Can you give some more details about what this python gadget would buy
us? I note that there are a couple of CPAN modules that provide OAuth2
servers, not sure if they would be of any use.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com

#92Daniel Gustafsson
daniel@yesql.se
In reply to: Andrew Dunstan (#91)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 28 Feb 2024, at 22:50, Andrew Dunstan <andrew@dunslane.net> wrote:

On 2024-02-28 We 09:05, Jacob Champion wrote:

Daniel and I discussed trying a Python version of the test server,
since the standard library there should give us more goodies to work
with. A proof of concept is in 0009. I think the big question I have
for it is, how would we communicate what we want the server to do for
the test? (We could perhaps switch on magic values of the client ID?)
In the end I'd like to be testing close to 100% of the failure modes,
and that's likely to mean a lot of back-and-forth if the server
implementation isn't in the Perl process.

Can you give some more details about what this python gadget would buy us? I note that there are a couple of CPAN modules that provide OAuth2 servers, not sure if they would be of any use.

The main benefit would be the ability to provide a full test harness without
adding any additional dependencies over what we already have (Python being
required by meson). That should ideally make it easy to get good coverage from
BF animals as no installation is needed.

--
Daniel Gustafsson

#93Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#92)
Re: [PoC] Federated Authn/z with OAUTHBEARER

[re-adding the CC list I dropped earlier]

On Wed, Feb 28, 2024 at 1:52 PM Daniel Gustafsson <daniel@yesql.se> wrote:

On 28 Feb 2024, at 22:50, Andrew Dunstan <andrew@dunslane.net> wrote:
Can you give some more details about what this python gadget would buy us? I note that there are a couple of CPAN modules that provide OAuth2 servers, not sure if they would be of any use.

The main benefit would be the ability to provide a full test harness without
adding any additional dependencies over what we already have (Python being
required by meson). That should ideally make it easy to get good coverage from
BF animals as no installation is needed.

As an additional note, the test suite ideally needs to be able to
exercise failure modes where the provider itself is malfunctioning. So
we hand-roll responses rather than deferring to an external
OAuth/OpenID implementation, which adds HTTP and JSON dependencies at
minimum, and Python includes both. See also the discussion with
Stephen upthread [1].

(I do think it'd be nice to eventually include a prepackaged OAuth
server in the test suite, to stack coverage for the happy path and
further test interoperability.)

Thanks,
--Jacob

[1]: /messages/by-id/CAAWbhmh+6q4t3P+wDmS=JuHBpcgF-VM2cXNft8XV02yk-cHCpQ@mail.gmail.com

#94Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#1)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 27 Feb 2024, at 20:20, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Fri, Feb 23, 2024 at 5:01 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

The
patchset is now carrying a lot of squash-cruft, and I plan to flatten
it in the next version.

This is done in v17, which is also now based on the two patches pulled
out by Daniel in [1]. Besides the squashes, which make up most of the
range-diff, I've fixed a call to strncasecmp() which is not available
on Windows.

Two quick questions:

+   /* TODO */
+   CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);
I might be missing something, but what is this intended for in
setup_curl_handles()?
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-iddawc.c
As discussed off-list I think we should leave iddawc support for later and
focus on getting one library properly supported to start with.  If you agree,
let's drop this from the patchset to make it easier to digest.  We should make
sure we keep pluggability such that another library can be supported though,
much like the libpq TLS support.

--
Daniel Gustafsson

#95Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#90)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Feb 28, 2024 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:

+#define ALLOC(size) malloc(size)
I wonder if we should use pg_malloc_extended(size, MCXT_ALLOC_NO_OOM) instead
to self document the code. We clearly don't want feature-parity with server-
side palloc here. I know we use malloc in similar ALLOC macros so it's not
unique in that regard, but maybe?

I have a vague recollection that linking fe_memutils into libpq
tripped the exit() checks, but I can try again and see.

+#ifdef FRONTEND
+               destroyPQExpBuffer(lex->errormsg);
+#else
+               pfree(lex->errormsg->data);
+               pfree(lex->errormsg);
+#endif
Wouldn't it be nicer if we abstracted this into a destroyStrVal function to a)
avoid the ifdefs and b) make it more like the rest of the new API?  While it's
only used in two places (close to each other) it's a shame to let the
underlying API bleed through the abstraction.

Good idea. I'll fold this from your patch into the next set (and do
the same for the ones I've marked +1 below).

+ CURLM *curlm; /* top-level multi handle for cURL operations */
Nitpick, but curl is not capitalized cURL anymore (for some value of "anymore"
since it changed in 2016 [0]). I do wonder if we should consistently write
"libcurl" as well since we don't use curl but libcurl.

Huh, I missed that memo. Thanks -- that makes it much easier to type!

+   PQExpBufferData     work_data;  /* scratch buffer for general use (remember
+                                      to clear out prior contents first!) */
This seems like asking for subtle bugs due to uncleared buffers bleeding into
another operation (especially since we are writing this data across the wire).
How about having an array the size of OAuthStep of unallocated buffers where
each step use it's own?  Storing the content of each step could also be useful
for debugging.  Looking at the statemachine here it's not an obvious change but
also not impossible.

I like that idea; I'll give it a look.

+ * TODO: This disables DNS resolution timeouts unless libcurl has been
+ * compiled against alternative resolution support. We should check that.
curl_version_info() can be used to check for c-ares support.

+1

+ * so you don't have to write out the error handling every time. They assume
+ * that they're embedded in a function returning bool, however.
It feels a bit iffy to encode the returntype in the macro, we can use the same
trick that DISABLE_SIGPIPE employs where a failaction is passed in.

+1

+ if (!strcmp(name, field->name))
Project style is to test for (strcmp(x,y) == 0) rather than (!strcmp()) to
improve readability.

+1

+ libpq_append_conn_error(conn, "out of memory");
While not introduced in this patch, it's not an ideal pattern to report "out of
memory" errors via a function which may allocate memory.

Does trying (and failing) to allocate more memory cause any harm? Best
case, we still have enough room in the errorMessage to fit the whole
error; worst case, we mark the errorMessage broken and then
PQerrorMessage() can handle it correctly.

+  appendPQExpBufferStr(&conn->errorMessage,
+           libpq_gettext("server's error message contained an embedded NULL"));
We should maybe add ", discarding" or something similar after this string to
indicate that there was an actual error which has been thrown away; the error
wasn't that the server passed an embedded NULL.

+1

+#ifdef USE_OAUTH
+       else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+               !selected_mechanism)
I wonder if we instead should move the guards inside the statement and error
out with "not built with OAuth support" or something similar like how we do
with TLS and other optional components?

This one seems like a step backwards. IIUC, the client can currently
handle a situation where the server returns multiple mechanisms
(though the server doesn't support that yet), and I'd really like to
make use of that property without making users upgrade libpq.

That said, it'd be good to have a more specific error message in the
case where we don't have a match...

+   errdetail("Comma expected, but found character %s.",
+             sanitize_char(*p))));
The %s formatter should be wrapped like '%s' to indicate that the message part
is the character in question (and we can then reuse the translation since the
error message already exist for SCRAM).

+1

+       temp = curl_slist_append(temp, "authorization_code");
+       if (!temp)
+           oom = true;
+
+       temp = curl_slist_append(temp, "implicit");
While not a bug per se, it reads a bit odd to call another operation that can
allocate memory when the oom flag has been set.  I think we can move some
things around a little to make it clearer.

I'm not a huge fan of nested happy paths/pyramids of doom, but in this
case it's small enough that I'm not opposed. :D

The attached diff contains some (most?) of the above as a patch on top of your
v17, but as a .txt to keep the CFBot from munging on it.

Thanks very much! I plan to apply all but the USE_OAUTH guard change
(but let me know if you feel strongly about it).

--Jacob

#96Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#94)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 29, 2024 at 1:08 PM Daniel Gustafsson <daniel@yesql.se> wrote:

+   /* TODO */
+   CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);
I might be missing something, but what is this intended for in
setup_curl_handles()?

Ah, that's cruft left over from early debugging, just so that I could
see what was going on. I'll remove it.

--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-iddawc.c
As discussed off-list I think we should leave iddawc support for later and
focus on getting one library properly supported to start with.  If you agree,
let's drop this from the patchset to make it easier to digest.  We should make
sure we keep pluggability such that another library can be supported though,
much like the libpq TLS support.

Agreed. The number of changes being folded into the next set is
already pretty big so I think this will wait until next+1.

--Jacob

#97Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#95)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 29, 2024 at 4:04 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

On Wed, Feb 28, 2024 at 9:40 AM Daniel Gustafsson <daniel@yesql.se> wrote:

+       temp = curl_slist_append(temp, "authorization_code");
+       if (!temp)
+           oom = true;
+
+       temp = curl_slist_append(temp, "implicit");
While not a bug per se, it reads a bit odd to call another operation that can
allocate memory when the oom flag has been set.  I think we can move some
things around a little to make it clearer.

I'm not a huge fan of nested happy paths/pyramids of doom, but in this
case it's small enough that I'm not opposed. :D

I ended up rewriting this patch hunk a bit to handle earlier OOM
failures; let me know what you think.

--

v18 is the result of plenty of yak shaving now that the Windows build
is working. In addition to Daniel's changes as discussed upthread,
- I have rebased over v2 of the SASL-refactoring patches
- the last CompilerWarnings failure has been fixed
- the py.test suite now runs on Windows (but does not yet completely pass)
- py.test has been completely disabled for the 32-bit Debian test in
Cirrus; I don't know if there's a way to install 32-bit Python
side-by-side with 64-bit

We are now very, very close to green.

The new oauth_validator tests can't work on Windows, since the client
doesn't support OAuth there. The python/server tests can handle this
case, since they emulate the client behavior; do we want to try
something similar in Perl?

--Jacob

Attachments:

since-v17.diff.txt (text/plain, charset=US-ASCII)
 1:  00976d4f75 !  1:  e2a0b48561 common/jsonapi: support FRONTEND clients
    @@ Commit message
         We can now partially revert b44669b2ca, now that json_errdetail() works
         correctly.
     
    +    Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    +
      ## src/bin/pg_verifybackup/t/005_bad_manifest.pl ##
     @@ src/bin/pg_verifybackup/t/005_bad_manifest.pl: use Test::More;
      my $tempdir = PostgreSQL::Test::Utils::tempdir;
    @@ src/common/jsonapi.c
     +#define appendStrValChar	appendPQExpBufferChar
     +#define createStrVal		createPQExpBuffer
     +#define resetStrVal			resetPQExpBuffer
    ++#define destroyStrVal		destroyPQExpBuffer
     +
     +#else							/* !FRONTEND */
     +
    @@ src/common/jsonapi.c
     +#define appendStrValChar	appendStringInfoChar
     +#define createStrVal		makeStringInfo
     +#define resetStrVal			resetStringInfo
    ++#define destroyStrVal		destroyStringInfo
     +
     +#endif
     +
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *js
     +	static const JsonLexContext empty = {0};
     +
      	if (lex->flags & JSONLEX_FREE_STRVAL)
    - 	{
    -+#ifdef FRONTEND
    -+		destroyPQExpBuffer(lex->strval);
    -+#else
    - 		pfree(lex->strval->data);
    - 		pfree(lex->strval);
    -+#endif
    -+	}
    +-	{
    +-		pfree(lex->strval->data);
    +-		pfree(lex->strval);
    +-	}
    ++		destroyStrVal(lex->strval);
    ++
     +	if (lex->errormsg)
    -+	{
    -+#ifdef FRONTEND
    -+		destroyPQExpBuffer(lex->errormsg);
    -+#else
    -+		pfree(lex->errormsg->data);
    -+		pfree(lex->errormsg);
    -+#endif
    - 	}
    ++		destroyStrVal(lex->errormsg);
    ++
      	if (lex->flags & JSONLEX_FREE_STRUCT)
      		pfree(lex);
     +	else
    @@ src/common/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *conte
      		json_manifest_parse_failure(context, "manifest ended unexpectedly");
      
     
    + ## src/common/stringinfo.c ##
    +@@ src/common/stringinfo.c: enlargeStringInfo(StringInfo str, int needed)
    + 
    + 	str->maxlen = newlen;
    + }
    ++
    ++void
    ++destroyStringInfo(StringInfo str)
    ++{
    ++	pfree(str->data);
    ++	pfree(str);
    ++}
    +
      ## src/include/common/jsonapi.h ##
     @@
      #ifndef JSONAPI_H
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
      } JsonLexContext;
      
      typedef JsonParseErrorType (*json_struct_action) (void *state);
    +
    + ## src/include/lib/stringinfo.h ##
    +@@ src/include/lib/stringinfo.h: extern void appendBinaryStringInfoNT(StringInfo str,
    +  */
    + extern void enlargeStringInfo(StringInfo str, int needed);
    + 
    ++
    ++extern void destroyStringInfo(StringInfo str);
    + #endif							/* STRINGINFO_H */
 2:  d8b567dd55 !  2:  db625e1d01 Refactor SASL exchange to return tri-state status
    @@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
     +	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
     +	 *					Additional server challenge is expected
     +	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
    -+	 *	SASL_FAILED:	The exchance has failed and the connection should be
    ++	 *	SASL_FAILED:	The exchange has failed and the connection should be
     +	 *					dropped.
      	 *--------
      	 */
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, b
      	free(challenge);			/* don't need the input anymore */
      
     -	if (final && !done)
    -+	if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))
    ++	if (final && status == SASL_CONTINUE)
      	{
      		if (outputlen != 0)
      			free(output);
 3:  83d78f598c !  3:  e4ad0260d5 Explicitly require password for SCRAM exchange
    @@ Commit message
         Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
     
      ## src/interfaces/libpq/fe-auth.c ##
    +@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    + 	int			initialresponselen;
    + 	const char *selected_mechanism;
    + 	PQExpBufferData mechanism_buf;
    +-	char	   *password;
    ++	char	   *password = NULL;
    + 	SASLStatus	status;
    + 
    + 	initPQExpBuffer(&mechanism_buf);
     @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      	/*
      	 * Parse the list of SASL authentication mechanisms in the
 4:  00c8073807 !  4:  229f602d5c libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
         - figure out pgsocket/int difference on Windows
         - fix intermittent failure in the cleanup callback tests (race
           condition?)
    +    - support require_auth
         - ...and more.
     
    +    Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    +
      ## configure ##
     @@ configure: with_uuid
      with_readline
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +/*
     + * Macros for getting and setting state for the connection's two cURL handles,
    -+ * so you don't have to write out the error handling every time. They assume
    -+ * that they're embedded in a function returning bool, however.
    ++ * so you don't have to write out the error handling every time.
     + */
     +
    -+#define CHECK_MSETOPT(ACTX, OPT, VAL) \
    ++#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
     +	do { \
     +		struct async_ctx *_actx = (ACTX); \
     +		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
     +		if (_setopterr) { \
     +			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
     +					   #OPT, curl_multi_strerror(_setopterr)); \
    -+			return false; \
    ++			FAILACTION; \
     +		} \
     +	} while (0)
     +
    -+#define CHECK_SETOPT(ACTX, OPT, VAL) \
    ++#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
     +	do { \
     +		struct async_ctx *_actx = (ACTX); \
     +		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
     +		if (_setopterr) { \
     +			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
     +					   #OPT, curl_easy_strerror(_setopterr)); \
    -+			return false; \
    ++			FAILACTION; \
     +		} \
     +	} while (0)
     +
    -+#define CHECK_GETINFO(ACTX, INFO, OUT) \
    ++#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
     +	do { \
     +		struct async_ctx *_actx = (ACTX); \
     +		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
     +		if (_getinfoerr) { \
     +			actx_error(_actx, "failed to get %s from OAuth response: %s",\
     +					   #INFO, curl_easy_strerror(_getinfoerr)); \
    -+			return false; \
    ++			FAILACTION; \
     +		} \
     +	} while (0)
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +		while (field->name)
     +		{
    -+			if (!strcmp(name, field->name))
    ++			if (strcmp(name, field->name) == 0)
     +			{
     +				ctx->active = field;
     +				break;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	bool		success = false;
     +
     +	/* Make sure the server thinks it's given us JSON. */
    -+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type);
    ++	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
     +
     +	if (!content_type)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +static int
     +parse_interval(const char *interval_str)
     +{
    -+	float		parsed;
    ++	double		parsed;
     +	int			cnt;
     +
     +	/*
     +	 * The JSON lexer has already validated the number, which is stricter than
     +	 * the %f format, so we should be good to use sscanf().
     +	 */
    -+	cnt = sscanf(interval_str, "%f", &parsed);
    ++	cnt = sscanf(interval_str, "%lf", &parsed);
     +
     +	if (cnt != 1)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		return 1;				/* don't fall through in release builds */
     +	}
     +
    -+	parsed = ceilf(parsed);
    ++	parsed = ceil(parsed);
     +
     +	if (parsed < 1)
     +		return 1;				/* TODO this slows down the tests
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +static bool
     +setup_curl_handles(struct async_ctx *actx)
     +{
    ++	curl_version_info_data	*curl_info;
    ++
     +	/*
     +	 * Create our multi handle. This encapsulates the entire conversation with
     +	 * cURL for this connection.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	}
     +
     +	/*
    ++	 * Extract information about the libcurl we are linked against.
    ++	 */
    ++	curl_info = curl_version_info(CURLVERSION_NOW);
    ++
    ++	/*
     +	 * The multi handle tells us what to wait on using two callbacks. These
     +	 * will manipulate actx->mux as needed.
     +	 */
    -+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket);
    -+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx);
    -+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer);
    -+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx);
    ++	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
    ++	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
    ++	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
    ++	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
     +
     +	/*
     +	 * Set up an easy handle. All of our requests are made serially, so we
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
     +	 * to handle the possibility of SIGPIPE ourselves.
     +	 *
    -+	 * TODO: This disables DNS resolution timeouts unless libcurl has been
    -+	 * compiled against alternative resolution support. We should check that.
    -+	 *
     +	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
     +	 * CURLOPT_SOCKOPTFUNCTION maybe...
     +	 */
    -+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L);
    ++	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
    ++	if (!curl_info->ares_num)
    ++	{
    ++		/* No alternative resolver, TODO: warn about timeouts */
    ++	}
     +
     +	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
    -+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L);
    -+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err);
    -+
    -+	/* TODO */
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, stderr);
    ++	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
     +
     +	/*
     +	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
     +	 */
    -+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
    ++	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
     +
     +	/*
     +	 * Suppress the Accept header to make our request as minimal as possible.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * what comes back anyway.)
     +	 */
     +	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
    -+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers);
    ++	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
     +
     +	return true;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	int			running;
     +
     +	resetPQExpBuffer(&actx->work_data);
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data);
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data);
    ++	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
     +
     +	err = curl_multi_add_handle(actx->curlm, actx->curl);
     +	if (err)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +static bool
     +start_discovery(struct async_ctx *actx, const char *discovery_uri)
     +{
    -+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L);
    -+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri);
    ++	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
     +
     +	return start_request(actx);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * validation into question), or non-authoritative responses, or any other
     +	 * complications.
     +	 */
    -+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
    ++	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
     +
     +	if (response_code != 200)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 * Per Section 3, the default is ["authorization_code", "implicit"].
     +		 */
     +		struct curl_slist *temp = actx->provider.grant_types_supported;
    -+		bool		oom = false;
     +
     +		temp = curl_slist_append(temp, "authorization_code");
    -+		if (!temp)
    -+			oom = true;
    ++		if (temp)
    ++		{
    ++			temp = curl_slist_append(temp, "implicit");
    ++		}
     +
    -+		temp = curl_slist_append(temp, "implicit");
     +		if (!temp)
    -+			oom = true;
    -+
    -+		if (oom)
     +		{
     +			actx_error(actx, "out of memory");
     +			return false;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	/* TODO check for broken buffer */
     +
     +	/* Make our request. */
    -+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri);
    -+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data);
    ++	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
     +
     +	if (conn->oauth_client_secret)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 *
     +		 * TODO: should we omit client_id from the body in this case?
     +		 */
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    -+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id);
    -+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret);
    ++		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
    ++		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
    ++		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
     +	}
     +	else
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE);
    ++		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
     +
     +	return start_request(actx);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	long		response_code;
     +
    -+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
    ++	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
     +
     +	/*
     +	 * The device authorization endpoint uses the same error response as the
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	/* TODO check for broken buffer */
     +
     +	/* Make our request. */
    -+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri);
    -+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data);
    ++	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
     +
     +	if (conn->oauth_client_secret)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 *
     +		 * TODO: should we omit client_id from the body in this case?
     +		 */
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    -+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id);
    -+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret);
    ++		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
    ++		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
    ++		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
     +	}
     +	else
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE);
    ++		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
     +
     +	resetPQExpBuffer(work_buffer);
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data);
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer);
    ++	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
     +
     +	return start_request(actx);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	long		response_code;
     +
    -+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code);
    ++	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
     +
     +	/*
     +	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				 * A slow_down error requires us to permanently increase our
     +				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
     +				 */
    -+				if (!strcmp(err->error, "slow_down"))
    ++				if (strcmp(err->error, "slow_down") == 0)
     +				{
     +					actx->authz.interval += 5;	/* TODO check for overflow? */
     +				}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	 * error.
     +	 */
     +	Assert(sasl_mechanism != NULL);
    -+	Assert(!strcmp(sasl_mechanism, OAUTHBEARER_NAME));
    ++	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
     +
     +	state = calloc(1, sizeof(*state));
     +	if (!state)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	if (ctx->nested == 1)
     +	{
    -+		if (!strcmp(name, ERROR_STATUS_FIELD))
    ++		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
     +		{
     +			ctx->target_field_name = ERROR_STATUS_FIELD;
     +			ctx->target_field = &ctx->status;
     +		}
    -+		else if (!strcmp(name, ERROR_SCOPE_FIELD))
    ++		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
     +		{
     +			ctx->target_field_name = ERROR_SCOPE_FIELD;
     +			ctx->target_field = &ctx->scope;
     +		}
    -+		else if (!strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD))
    ++		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
     +		{
     +			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
     +			ctx->target_field = &ctx->discovery_uri;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	if (strlen(msg) != msglen)
     +	{
     +		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("server's error message contained an embedded NULL"));
    ++							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
     +		return false;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		return false;
     +	}
     +
    -+	if (!strcmp(ctx.status, "invalid_token"))
    ++	if (strcmp(ctx.status, "invalid_token") == 0)
     +	{
     +		/*
     +		 * invalid_token is the only error code we'll automatically retry for,
    @@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
     +	 *					generated. The mechanism is responsible for setting up
     +	 *					conn->async_auth appropriately before returning.
      	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
    --	 *	SASL_FAILED:	The exchance has failed and the connection should be
    -+	 *	SASL_FAILED:	The exchange has failed and the connection should be
    + 	 *	SASL_FAILED:	The exchange has failed and the connection should be
      	 *					dropped.
      	 *--------
      	 */
    @@ src/interfaces/libpq/fe-auth.c: pg_SSPI_startup(PGconn *conn, int use_negotiate,
      {
      	char	   *initialresponse = NULL;
      	int			initialresponselen;
    - 	const char *selected_mechanism;
    - 	PQExpBufferData mechanism_buf;
    --	char	   *password;
    -+	char	   *password = NULL;
    - 	SASLStatus	status;
    - 
    - 	initPQExpBuffer(&mechanism_buf);
     @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      		goto error;
      	}
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, b
     +		return STATUS_OK;
     +	}
     +
    - 	if (final && !(status == SASL_FAILED || status == SASL_COMPLETE))
    + 	if (final && status == SASL_CONTINUE)
      	{
      		if (outputlen != 0)
     @@ src/interfaces/libpq/fe-auth.c: check_expected_areq(AuthRequest areq, PGconn *conn)
 5:  29d7e3cbed !  5:  5c5a83e44e backend: add OAUTHBEARER SASL mechanism
    @@ Commit message
           deal with multi-issuer setups
         - ...and more.
     
    +    Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    +
      ## src/backend/libpq/Makefile ##
     @@ src/backend/libpq/Makefile: include $(top_builddir)/src/Makefile.global
      # be-fsstubs is here for historical reasons, probably belongs elsewhere
    @@ src/backend/libpq/auth-oauth.c (new)
     +				ereport(ERROR,
     +						(errcode(ERRCODE_PROTOCOL_VIOLATION),
     +						 errmsg("malformed OAUTHBEARER message"),
    -+						 errdetail("Comma expected, but found character %s.",
    ++						 errdetail("Comma expected, but found character \"%s\".",
     +								   sanitize_char(*p))));
     +			p++;
     +			break;
 6:  0661817808 =  6:  7a42365d62 Introduce OAuth validator libraries
 7:  45755e8461 !  7:  9c46ea6cf9 Add pytest suite for OAuth
    @@ Commit message
         TODOs:
         - The --tap-stream option to pytest-tap is slightly broken during test
           failures (it suppresses error information), which impedes debugging.
    -    - Unsurprisingly, Windows builds fail on the Linux-/BSD-specific backend
    -      changes. 32-bit builds on Ubuntu fail during testing as well.
         - pyca/cryptography is pinned at an old version. Since we use it for
           testing and not security, this isn't a critical problem yet, but it's
           not ideal. Newer versions require a Rust compiler to build, and while
    @@ Commit message
           with the Rust pieces bypassed, compilation on FreeBSD takes a while.
         - The with_oauth test skip logic should probably be integrated into the
           Makefile side as well...
    +    - See if 32-bit tests can be enabled with a 32-bit Python.
     
      ## .cirrus.tasks.yml ##
     @@ .cirrus.tasks.yml: env:
    @@ .cirrus.tasks.yml: task:
      
        matrix:
          - name: Linux - Debian Bullseye - Autoconf
    +@@ .cirrus.tasks.yml: task:
    + 
    +       # Also build & test in a 32bit build - it's gotten rare to test that
    +       # locally.
    ++      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
    ++      # Python modules can't link against libpq.
    +       configure_32_script: |
    +         su postgres <<-EOF
    +           export CC='ccache gcc -m32'
    +@@ .cirrus.tasks.yml: task:
    +             -Dllvm=disabled \
    +             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
    +             -DPERL=perl5.32-i386-linux-gnu \
    +-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    ++            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
    +             build-32
    +         EOF
    + 
     @@ .cirrus.tasks.yml: task:
          folder: $CCACHE_DIR
      
    @@ meson.build: foreach test_dir : tests
     +        env.set(name, value)
     +      endforeach
     +
    -+      reqs = files(t['requirements'])
    -+      test('install_' + venv_name,
    -+        python,
    -+        args: [ make_venv, '--requirements', reqs, venv_path ],
    -+        env: env,
    -+        priority: setup_tests_priority - 1,  # must run after tmp_install
    -+        is_parallel: false,
    -+        suite: ['setup'],
    -+        timeout: 60,  # 30s is too short for the cryptography package compile
    -+      )
    ++      if get_option('PG_TEST_EXTRA').contains('python')
    ++        reqs = files(t['requirements'])
    ++        test('install_' + venv_name,
    ++          python,
    ++          args: [ make_venv, '--requirements', reqs, venv_path ],
    ++          env: env,
    ++          priority: setup_tests_priority - 1,  # must run after tmp_install
    ++          is_parallel: false,
    ++          suite: ['setup'],
    ++          timeout: 60,  # 30s is too short for the cryptography package compile
    ++        )
    ++      endif
     +
     +      test_group = test_dir['name']
     +      test_output = test_result_dir / test_group / kind
    @@ meson.build: foreach test_dir : tests
     +        'env': env,
     +      }
     +
    -+      pytest = venv_path / 'bin' / 'py.test'
    ++      if fs.is_dir(venv_path / 'Scripts')
    ++        # Windows virtualenv layout
    ++        pytest = venv_path / 'Scripts' / 'py.test'
    ++      else
    ++        pytest = venv_path / 'bin' / 'py.test'
    ++      endif
    ++
     +      test_command = [
     +        pytest,
     +        # Avoid running these tests against an existing database.
    @@ meson.build: foreach test_dir : tests
     +          pyt_p = fs.stem(pyt_p)
     +        endif
     +
    ++        testwrap_pytest = testwrap_base + [
    ++          '--testgroup', test_group,
    ++          '--testname', pyt_p,
    ++        ]
    ++        if not get_option('PG_TEST_EXTRA').contains('python')
    ++          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
    ++        endif
    ++
     +        test(test_group / pyt_p,
     +          python,
     +          kwargs: test_kwargs,
    -+          args: testwrap_base + [
    -+            '--testgroup', test_group,
    -+            '--testname', pyt_p,
    ++          args: testwrap_pytest + [
     +            '--', test_command,
     +            test_dir['sd'] / pyt,
     +          ],
    @@ src/test/python/client/test_oauth.py (new)
     +
     +if platform.system() == "Darwin":
     +    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
    ++elif platform.system() == "Windows":
    ++    pass  # TODO
     +else:
     +    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
     +
    @@ src/test/python/conftest.py (new)
     +    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
     +    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
     +    I've made this an autoused fixture instead.
    -+
    -+    TODO: there are tests here that are probably safe, but until I do a full
    -+    analysis on which are and which are not, I've made the entire thing opt-in.
     +    """
     +    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
     +    if "python" not in extra_tests:
    @@ src/test/python/pq3.py (new)
     +import getpass
     +import io
     +import os
    ++import platform
     +import ssl
     +import sys
     +import textwrap
    @@ src/test/python/pq3.py (new)
     +    try:
     +        return os.environ["PGUSER"]
     +    except KeyError:
    ++        if platform.system() == "Windows":
    ++            # libpq defaults to GetUserName() on Windows.
    ++            return os.getlogin()
     +        return getpass.getuser()
     +
     +
    @@ src/test/python/server/meson.build (new)
     @@
     +# Copyright (c) 2024, PostgreSQL Global Development Group
     +
    -+if not oauth.found()
    -+  subdir_done()
    -+endif
    -+
     +oauthtest_sources = files(
     +  'oauthtest.c',
     +)
    @@ src/test/python/server/test_oauth.py (new)
     +import json
     +import os
     +import pathlib
    ++import platform
     +import secrets
     +import shlex
     +import shutil
    @@ src/test/python/server/test_oauth.py (new)
     +        scope = "openid " + id
     +
     +    ctx = Context()
    -+    hba_lines = (
    ++    hba_lines = [
     +        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
     +        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
     +        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
    -+    )
    -+    ident_lines = (r"oauth /^(.*)@example\.com$ \1",)
    ++    ]
    ++    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
    ++
    ++    if platform.system() == "Windows":
    ++        # XXX why is 'samehost' not behaving as expected on Windows?
    ++        for l in list(hba_lines):
    ++            hba_lines.append(l.replace("samehost", "::1/128"))
     +
     +    host, port = postgres_instance
     +    conn = psycopg2.connect(host=host, port=port)
    @@ src/test/python/test_pq3.py (new)
     +import contextlib
     +import getpass
     +import io
    ++import platform
     +import struct
     +import sys
     +
    @@ src/test/python/test_pq3.py (new)
     +    [
     +        ("PGHOST", pq3.pghost, "localhost"),
     +        ("PGPORT", pq3.pgport, 5432),
    -+        ("PGUSER", pq3.pguser, getpass.getuser()),
    ++        (
    ++            "PGUSER",
    ++            pq3.pguser,
    ++            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
    ++        ),
     +        ("PGDATABASE", pq3.pgdatabase, "postgres"),
     +    ],
     +)
    @@ src/tools/make_venv (new)
     +import argparse
     +import subprocess
     +import os
    ++import platform
     +import sys
     +
     +parser = argparse.ArgumentParser()
    @@ src/tools/make_venv (new)
     +
     +# Update pip next. This helps avoid old pip bugs; the version inside system
     +# Pythons tends to be pretty out of date.
    -+pip = os.path.join(args.venv_path, 'bin', 'pip')
    -+run(pip, 'install', '-U', 'pip')
    ++bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
    ++python = os.path.join(args.venv_path, bindir, 'python3')
    ++run(python, '-m', 'pip', 'install', '-U', 'pip')
     +
     +# Finally, install the test's requirements. We need pytest and pytest-tap, no
     +# matter what the test needs.
    ++pip = os.path.join(args.venv_path, bindir, 'pip')
     +run(pip, 'install', 'pytest', 'pytest-tap')
     +if args.requirements:
     +    run(pip, 'install', '-r', args.requirements)
 8:  0f9f884856 !  8:  8ad4ce3068 XXX temporary patches to build and test
    @@ src/bin/pg_verifybackup/Makefile: top_builddir = ../../..
      ## src/interfaces/libpq/Makefile ##
     @@ src/interfaces/libpq/Makefile: libpq-refs-stamp: $(shlib)
      ifneq ($(enable_coverage), yes)
    - ifeq (,$(filter aix solaris,$(PORTNAME)))
    + ifeq (,$(filter solaris,$(PORTNAME)))
      	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
     -		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
     +		echo 'libpq must not be calling any function which invokes exit'; \
 9:  de8f81bd7d !  9:  5630465578 WIP: Python OAuth provider implementation
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +import json
     +import os
     +import sys
    -+from typing import TypeAlias
     +
     +
     +class OAuthHandler(http.server.BaseHTTPRequestHandler):
    -+    JsonObject: TypeAlias = dict[str, object]
    ++    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
     +
     +    def do_GET(self):
     +        if self.path == "/.well-known/openid-configuration":
v18-0005-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 5c5a83e44e74d022b65aa884e369653bb6a0c175 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v18 5/9] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On success, the command may then exit with a zero status code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
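
For illustration, the contract above might look something like the
following validator script. This is a hypothetical sketch, not part of
the patch: the actual validation logic is stubbed out, since the real
check is entirely issuer-specific, and the command path and identity
values are made up.

```python
#!/usr/bin/env python3
"""Hypothetical oauth_validator_command sketch.

Assumes a configuration along the lines of
    oauth_validator_command = '/usr/local/bin/validate-token %f %r'
where %f is the token file descriptor and %r the requested role.
"""
import os
import sys


def validate(token):
    """Stub: return the authenticated identity, or None if invalid.

    A real implementation would verify the token cryptographically or
    present it to the issuer for introspection, then extract a trusted
    identifier for the end user.
    """
    if token == "trusted-demo-token":  # placeholder check only
        return "alice@example.com"
    return None


def main(fd, role):
    # Step 1: read the token in full before writing anything to stdout;
    # the server will not read our output until the token is consumed.
    with os.fdopen(fd) as f:
        token = f.read().strip()

    # Step 2: validate the token (stubbed above).
    identity = validate(token)
    if identity is None:
        sys.exit(1)  # token could not be validated

    # Step 3b would check authorization of `role` here when
    # trust_validator_authz is in use; omitted in this sketch.

    # Step 3a: print the authenticated identity, newline-terminated.
    print(identity)
    sys.exit(0)


if __name__ == "__main__" and len(sys.argv) >= 3:
    main(int(sys.argv[1]), sys.argv[2])
```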

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
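
Putting the options together, an HBA entry using this method could look
like the following (the issuer and scope values are illustrative only;
adjust both for your deployment):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD [OPTIONS]
host    all       all   samehost  oauth issuer="https://accounts.google.com" scope="openid email" map=oauth
```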

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/backend/libpq/Makefile          |   1 +
 src/backend/libpq/auth-oauth.c      | 810 ++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c       |  10 +-
 src/backend/libpq/auth-scram.c      |   4 +-
 src/backend/libpq/auth.c            |  26 +-
 src/backend/libpq/hba.c             |  31 +-
 src/backend/libpq/meson.build       |   1 +
 src/backend/utils/misc/guc_tables.c |  12 +
 src/include/libpq/auth.h            |  17 +
 src/include/libpq/hba.h             |   6 +-
 src/include/libpq/oauth.h           |  24 +
 src/include/libpq/sasl.h            |  11 +
 src/tools/pgindent/typedefs.list    |   1 +
 13 files changed, 922 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..a9d2646023
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,810 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char	   *oauth_validator_command;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool set_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad
+		 * chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+					 "{ "
+					 "\"status\": \"invalid_token\", "
+					 "\"openid-configuration\": \"%s/.well-known/openid-configuration\", "
+					 "\"scope\": \"%s\" "
+					 "}",
+					 ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char *const b64_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*-----
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
+	 * it's pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information
+	 * about the sensitive Bearer token back to the client; log at COMMERROR
+	 * instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!MyClientConnectionInfo.authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name,
+						MyClientConnectionInfo.authn_id, false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = {0};
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*------
+	 * Since popen() is unidirectional, open up a pipe for the other
+	 * direction. Use CLOEXEC to ensure that our write end doesn't
+	 * accidentally get copied into child processes, which would prevent us
+	 * from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open the potential of process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe(pipefd);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	if (!set_cloexec(wfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*----------
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+
+					/*
+					 * TODO: decide how this string should be escaped. The
+					 * role is controlled by the client, so if we don't escape
+					 * it, command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some
+					 * other way. For this proof of concept, just be
+					 * incredibly strict about the characters that are allowed
+					 * in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "r");
+	if (!fh)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not open pipe to OAuth validator: %m")));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*-----
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int			rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char	   *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+set_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char *const allowed =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-_./:";
+	size_t		span;
+
+	Assert(username && username[0]);	/* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 2abb1a9b3a..aa6b5020dc 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -118,7 +118,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 9bbdc4beb0..db7c77da86 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -47,7 +48,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -203,22 +203,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -307,6 +291,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -342,7 +329,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -629,6 +616,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 8004d102ad..03c3f038c7 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -119,7 +119,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1748,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2067,8 +2070,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2451,6 +2455,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 93ded31ed9..0f83f0d870 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4670,6 +4671,17 @@ struct config_string ConfigureNamesString[] =
 		check_debug_io_direct, assign_debug_io_direct, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..5edab3b25a
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6234fe66f1..fe65a4222b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3543,6 +3543,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
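The validator contract sketched in run_validator_command() above (the token is written to the pipe fd substituted for %f, the requested role is passed via %r, the authn_id is read from the command's stdout, and a nonzero exit rejects the login) can be exercised with a toy script. This is a hypothetical illustration, not part of the patch set; the argv layout and the acceptance policy are assumptions, and a real validator would verify the token against the issuer rather than accept anything nonempty:

```python
#!/usr/bin/env python3
# Hypothetical validator script, for an oauth_validator_command such as
#   oauth_validator_command = '/usr/local/bin/validate-token %f %r'
# (argv layout and policy are assumptions, not part of the patch).
import os
import sys

def validate(token, role):
    """Toy policy: accept any nonempty token and ignore the role.
    A real validator would verify the token with the issuer and
    derive the identity from its claims."""
    if not token:
        return None
    return "user@example.org"

def main():
    rfd = int(sys.argv[1])        # %f: read end of the token pipe
    role = sys.argv[2]            # %r: role the client wants to assume
    # Read the whole token BEFORE writing anything to stdout, per the
    # deadlock warning in the patch comments.
    token = os.read(rfd, 65535).decode()
    os.close(rfd)
    authn_id = validate(token, role)
    if authn_id is None:
        sys.exit(1)               # nonzero exit rejects the login
    print(authn_id)               # newline-terminated authn_id on stdout

if __name__ == "__main__" and len(sys.argv) == 3:
    main()
```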

Attachment: v18-0003-Explicitly-require-password-for-SCRAM-exchange.patch (application/octet-stream)
From e4ad0260d54a5be3964c06621cf3d2bb35fa60d5 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:19:55 +0100
Subject: [PATCH v18 3/9] Explicitly require password for SCRAM exchange

This refactors the SASL init flow to set password_needed on the two
SCRAM exchanges currently supported. The code already required this
but was set up in such a way that all SASL exchanges required using
a password, a restriction which may not hold for all exchanges (the
example at hand being the proposed OAuthBearer exchange).

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index cf8af4c62e..81ec08485d 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -425,7 +425,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	int			initialresponselen;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
@@ -446,8 +446,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support. Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -487,6 +486,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,6 +522,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
 	}
 
@@ -545,18 +546,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the selected SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
-- 
2.34.1

Attachment: v18-0002-Refactor-SASL-exchange-to-return-tri-state-statu.patch (application/octet-stream)
From db625e1d01c3de1156fc7d68b119892e7f147f49 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:09:54 +0100
Subject: [PATCH v18 2/9] Refactor SASL exchange to return tri-state status

The SASL exchange callback returned state in two output variables:
done and success.  This refactors that logic by introducing a new
return value of type SASLStatus, which makes the code easier to
read and understand, and prepares for future SASL exchanges which
operate asynchronously.

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth-sasl.h  | 31 +++++++----
 src/interfaces/libpq/fe-auth-scram.c | 78 +++++++++++++---------------
 src/interfaces/libpq/fe-auth.c       | 28 +++++-----
 src/tools/pgindent/typedefs.list     |  1 +
 4 files changed, 71 insertions(+), 67 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index ee5d1525b5..4eecf53a15 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -21,6 +21,17 @@
 
 #include "libpq-fe.h"
 
+/*
+ * Possible states for the SASL exchange, see the comment on exchange for an
+ * explanation of these.
+ */
+typedef enum
+{
+	SASL_COMPLETE = 0,
+	SASL_FAILED,
+	SASL_CONTINUE,
+} SASLStatus;
+
 /*
  * Frontend SASL mechanism callbacks.
  *
@@ -59,7 +70,8 @@ typedef struct pg_fe_sasl_mech
 	 * Produces a client response to a server challenge.  As a special case
 	 * for client-first SASL mechanisms, exchange() is called with a NULL
 	 * server response once at the start of the authentication exchange to
-	 * generate an initial response.
+	 * generate an initial response. Returns a SASLStatus indicating the
+	 * state and status of the exchange.
 	 *
 	 * Input parameters:
 	 *
@@ -79,22 +91,23 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	output:	   A malloc'd buffer containing the client's response to
 	 *			   the server (can be empty), or NULL if the exchange should
-	 *			   be aborted.  (*success should be set to false in the
+	 *			   be aborted.  (The callback should return SASL_FAILED in the
 	 *			   latter case.)
 	 *
 	 *	outputlen: The length (0 or higher) of the client response buffer,
 	 *			   ignored if output is NULL.
 	 *
-	 *	done:      Set to true if the SASL exchange should not continue,
-	 *			   because the exchange is either complete or failed
+	 * Return value:
 	 *
-	 *	success:   Set to true if the SASL exchange completed successfully.
-	 *			   Ignored if *done is false.
+	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
+	 *					Additional server challenge is expected
+	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
+	 *	SASL_FAILED:	The exchange has failed and the connection should be
+	 *					dropped.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
-							 char **output, int *outputlen,
-							 bool *done, bool *success);
+	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+							 char **output, int *outputlen);
 
 	/*--------
 	 * channel_bound()
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 04f0e5713d..0bb820e0d9 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,9 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
-						   char **output, int *outputlen,
-						   bool *done, bool *success);
+static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
 
@@ -202,17 +201,14 @@ scram_free(void *opaq)
 /*
  * Exchange a SCRAM message with backend.
  */
-static void
+static SASLStatus
 scram_exchange(void *opaq, char *input, int inputlen,
-			   char **output, int *outputlen,
-			   bool *done, bool *success)
+			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 	PGconn	   *conn = state->conn;
 	const char *errstr = NULL;
 
-	*done = false;
-	*success = false;
 	*output = NULL;
 	*outputlen = 0;
 
@@ -225,12 +221,12 @@ scram_exchange(void *opaq, char *input, int inputlen,
 		if (inputlen == 0)
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (empty message)");
-			goto error;
+			return SASL_FAILED;
 		}
 		if (inputlen != strlen(input))
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (length mismatch)");
-			goto error;
+			return SASL_FAILED;
 		}
 	}
 
@@ -240,61 +236,59 @@ scram_exchange(void *opaq, char *input, int inputlen,
 			/* Begin the SCRAM handshake, by sending client nonce */
 			*output = build_client_first_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_NONCE_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_NONCE_SENT:
 			/* Receive salt and server nonce, send response. */
 			if (!read_server_first_message(state, input))
-				goto error;
+				return SASL_FAILED;
 
 			*output = build_client_final_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_PROOF_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_PROOF_SENT:
-			/* Receive server signature */
-			if (!read_server_final_message(state, input))
-				goto error;
-
-			/*
-			 * Verify server signature, to make sure we're talking to the
-			 * genuine server.
-			 */
-			if (!verify_server_signature(state, success, &errstr))
-			{
-				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
-				goto error;
-			}
-
-			if (!*success)
 			{
-				libpq_append_conn_error(conn, "incorrect server signature");
+				bool		match;
+
+				/* Receive server signature */
+				if (!read_server_final_message(state, input))
+					return SASL_FAILED;
+
+				/*
+				 * Verify server signature, to make sure we're talking to the
+				 * genuine server.
+				 */
+				if (!verify_server_signature(state, &match, &errstr))
+				{
+					libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
+					return SASL_FAILED;
+				}
+
+				if (!match)
+				{
+					libpq_append_conn_error(conn, "incorrect server signature");
+				}
+				state->state = FE_SCRAM_FINISHED;
+				state->conn->client_finished_auth = true;
+				return match ? SASL_COMPLETE : SASL_FAILED;
 			}
-			*done = true;
-			state->state = FE_SCRAM_FINISHED;
-			state->conn->client_finished_auth = true;
-			break;
 
 		default:
 			/* shouldn't happen */
 			libpq_append_conn_error(conn, "invalid SCRAM exchange state");
-			goto error;
+			break;
 	}
-	return;
 
-error:
-	*done = true;
-	*success = false;
+	return SASL_FAILED;
 }
 
 /*
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 1a8e4f6fbf..cf8af4c62e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -423,11 +423,10 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
-	bool		done;
-	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
 	char	   *password;
+	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -575,12 +574,11 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
-						 NULL, -1,
-						 &initialresponse, &initialresponselen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  NULL, -1,
+								  &initialresponse, &initialresponselen);
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		goto error;
 
 	/*
@@ -629,10 +627,9 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 {
 	char	   *output;
 	int			outputlen;
-	bool		done;
-	bool		success;
 	int			res;
 	char	   *challenge;
+	SASLStatus	status;
 
 	/* Read the SASL challenge from the AuthenticationSASLContinue message. */
 	challenge = malloc(payloadlen + 1);
@@ -651,13 +648,12 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
-						 challenge, payloadlen,
-						 &output, &outputlen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  challenge, payloadlen,
+								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
-	if (final && !done)
+	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
 			free(output);
@@ -670,7 +666,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	 * If the exchange is not completed yet, we need to make sure that the
 	 * SASL mechanism has generated a message to send back.
 	 */
-	if (output == NULL && !done)
+	if (output == NULL && status == SASL_CONTINUE)
 	{
 		libpq_append_conn_error(conn, "no client response found after SASL exchange success");
 		return STATUS_ERROR;
@@ -692,7 +688,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 			return STATUS_ERROR;
 	}
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		return STATUS_ERROR;
 
 	return STATUS_OK;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fc8b15d0cf..2461567026 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2423,6 +2423,7 @@ RuleLock
 RuleStmt
 RunningTransactions
 RunningTransactionsData
+SASLStatus
 SC_HANDLE
 SECURITY_ATTRIBUTES
 SECURITY_STATUS
-- 
2.34.1

Attachment: v18-0001-common-jsonapi-support-FRONTEND-clients.patch (application/octet-stream)
From e2a0b485617f0f6f3479c784a90f8f9579c35fea Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v18 1/9] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

We can now partially revert b44669b2ca, since json_errdetail() works
correctly.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   3 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 268 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   2 +-
 src/common/stringinfo.c                       |   7 +
 src/include/common/jsonapi.h                  |  18 +-
 src/include/lib/stringinfo.h                  |   2 +
 8 files changed, 225 insertions(+), 85 deletions(-)

diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index e278ccea5a..e2a297930e 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -13,7 +13,8 @@ use Test::More;
 my $tempdir = PostgreSQL::Test::Utils::tempdir;
 
 test_bad_manifest('input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/, <<EOM);
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
+	<<EOM);
 {
 EOM
 
diff --git a/src/common/Makefile b/src/common/Makefile
index 2ba5069dca..bbb5c3ab11 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 32931ded82..2d1f30353a 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,43 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendBinaryStrVal  appendBinaryPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+#define destroyStrVal		destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendBinaryStrVal  appendBinaryStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+#define destroyStrVal		destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -167,9 +200,16 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+	lex->errormsg = NULL;
 
 	return lex;
 }
@@ -182,13 +222,18 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-	{
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-	}
+		destroyStrVal(lex->strval);
+
+	if (lex->errormsg)
+		destroyStrVal(lex->errormsg);
+
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -254,7 +299,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -316,14 +361,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -357,8 +409,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -414,6 +470,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -762,8 +823,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -800,7 +868,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -857,19 +925,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -879,22 +947,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -929,7 +997,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -953,8 +1021,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -970,6 +1038,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1145,72 +1218,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct an (already translated) detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safely pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int			toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1219,9 +1313,19 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			/* note: this case is only reachable in frontend not backend */
 			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
 		case JSON_UNICODE_UNTRANSLATABLE:
-			/* note: this case is only reachable in backend not frontend */
+
+			/*
+			 * note: this case is only reachable in backend not frontend.
+			 * #ifdef it away so the frontend doesn't try to link against
+			 * backend functionality.
+			 */
+#ifndef FRONTEND
 			return psprintf(_("Unicode escape value could not be translated to the server's encoding %s."),
 							GetDatabaseEncodingName());
+#else
+			Assert(false);
+			break;
+#endif
 		case JSON_UNICODE_HIGH_SURROGATE:
 			return _("Unicode high surrogate must not follow a high surrogate.");
 		case JSON_UNICODE_LOW_SURROGATE:
@@ -1231,12 +1335,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			break;
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/common/meson.build b/src/common/meson.build
index 4eb16024cb..5d2c7abaa6 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -124,13 +124,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -139,6 +144,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -156,7 +162,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -169,7 +174,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 92a97714f3..62d93989be 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -147,7 +147,7 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	/* Run the actual JSON parser. */
 	json_error = pg_parse_json(lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
diff --git a/src/common/stringinfo.c b/src/common/stringinfo.c
index c61d5c58f3..09419f6042 100644
--- a/src/common/stringinfo.c
+++ b/src/common/stringinfo.c
@@ -350,3 +350,10 @@ enlargeStringInfo(StringInfo str, int needed)
 
 	str->maxlen = newlen;
 }
+
+void
+destroyStringInfo(StringInfo str)
+{
+	pfree(str->data);
+	pfree(str);
+}
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 02943cdad8..75d444c17a 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -57,6 +56,17 @@ typedef enum JsonParseErrorType
 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -88,7 +98,9 @@ typedef struct JsonLexContext
 	bits32		flags;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h
index 2cd636b01c..64ec6419af 100644
--- a/src/include/lib/stringinfo.h
+++ b/src/include/lib/stringinfo.h
@@ -233,4 +233,6 @@ extern void appendBinaryStringInfoNT(StringInfo str,
  */
 extern void enlargeStringInfo(StringInfo str, int needed);
 
+
+extern void destroyStringInfo(StringInfo str);
 #endif							/* STRINGINFO_H */
-- 
2.34.1

Attachment: v18-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 229f602d5c801cd602fe61b23c87f4f769497f64 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v18 4/9] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires either libcurl or libiddawc and their
development headers. Pass `curl` or `iddawc` to --with-oauth/-Doauth
during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt
  (printed to standard error) used by the builtin device authorization
  flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
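For illustration, a hook that handles the device prompt itself and
delegates everything else might look like the sketch below. The stand-in
type definitions are simplified approximations written for this email
(the real declarations live in libpq-fe.h in this patch), and the
payload layout of PGpromptOAuthDevice here is hypothetical:

```c
#include <stdio.h>

/* Stand-in definitions approximating the PoC's libpq-fe.h additions. */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN,
} PGauthData;

typedef struct
{
	const char *verification_uri;	/* hypothetical field names */
	const char *user_code;
} PGpromptOAuthDevice;

typedef int (*PQauthDataHookType) (PGauthData type, void *conn, void *data);

/* Previous hook in the chain, as returned by PQgetAuthDataHook(). */
static PQauthDataHookType prev_hook;

/*
 * Display the device prompt ourselves; delegate any other authdata type
 * to the previous hook (or report it unhandled if there is none).
 */
static int
my_auth_data_hook(PGauthData type, void *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		PGpromptOAuthDevice *prompt = data;

		printf("Log in at %s and enter the code %s\n",
			   prompt->verification_uri, prompt->user_code);
		return 1;				/* > 0: handled */
	}

	return prev_hook ? prev_hook(type, conn, data) : 0;
}
```

In a real client this would be installed with something like
`prev_hook = PQgetAuthDataHook(); PQsetAuthDataHook(my_auth_data_hook);`
before connecting.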

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 configure                                   |  186 ++
 configure.ac                                |   37 +
 meson.build                                 |   45 +
 meson_options.txt                           |    4 +
 src/Makefile.global.in                      |    1 +
 src/include/common/oauth-common.h           |   19 +
 src/include/pg_config.h.in                  |   18 +
 src/interfaces/libpq/Makefile               |   12 +-
 src/interfaces/libpq/exports.txt            |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c   | 1982 +++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth-iddawc.c |  319 +++
 src/interfaces/libpq/fe-auth-oauth.c        |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h        |   42 +
 src/interfaces/libpq/fe-auth-sasl.h         |   10 +-
 src/interfaces/libpq/fe-auth-scram.c        |    6 +-
 src/interfaces/libpq/fe-auth.c              |  105 +-
 src/interfaces/libpq/fe-auth.h              |    9 +-
 src/interfaces/libpq/fe-connect.c           |   85 +-
 src/interfaces/libpq/fe-misc.c              |    7 +-
 src/interfaces/libpq/libpq-fe.h             |   77 +-
 src/interfaces/libpq/libpq-int.h            |   14 +
 src/interfaces/libpq/meson.build            |    9 +
 src/makefiles/meson.build                   |    1 +
 src/tools/pgindent/typedefs.list            |   10 +
 24 files changed, 3633 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-iddawc.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 46859a4244..142d49d7b6 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -858,6 +859,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl, iddawc)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8485,6 +8488,59 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" = x"iddawc"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_IDDAWC 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl or iddawc" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13037,6 +13093,116 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+elif test "$with_oauth" = iddawc ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-liddawc  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char i_init_session ();
+int
+main ()
+{
+return i_init_session ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_iddawc_i_init_session=yes
+else
+  ac_cv_lib_iddawc_i_init_session=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBIDDAWC 1
+_ACEOF
+
+  LIBS="-liddawc $LIBS"
+
+else
+  as_fn_error $? "library 'iddawc' is required for --with-oauth=iddawc" "$LINENO" 5
+fi
+
+  # Check for an older spelling of i_get_openid_config
+  for ac_func in i_load_openid_config
+do :
+  ac_fn_c_check_func "$LINENO" "i_load_openid_config" "ac_cv_func_i_load_openid_config"
+if test "x$ac_cv_func_i_load_openid_config" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_I_LOAD_OPENID_CONFIG 1
+_ACEOF
+
+fi
+done
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14062,6 +14228,26 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
+elif test "$with_oauth" = iddawc; then
+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
+if test "x$ac_cv_header_iddawc_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 88b75a7696..a4c2e558f9 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,29 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl, iddawc)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" = x"iddawc"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_IDDAWC], 1, [Define to 1 to use libiddawc for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl or iddawc])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1446,14 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+elif test "$with_oauth" = iddawc ; then
+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for --with-oauth=iddawc])])
+  # Check for an older spelling of i_get_openid_config
+  AC_CHECK_FUNCS([i_load_openid_config])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1645,12 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+elif test "$with_oauth" = iddawc; then
+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index a198eca25d..43066a017f 100644
--- a/meson.build
+++ b/meson.build
@@ -830,6 +830,49 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if not oauth.found() and oauthopt in ['auto', 'iddawc']
+  oauth = dependency('libiddawc', required: (oauthopt == 'iddawc'))
+
+  if oauth.found()
+    oauth_library = 'iddawc'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_IDDAWC', 1)
+
+    # Check for an older spelling of i_get_openid_config
+    if cc.has_function('i_load_openid_config',
+                       dependencies: oauth, args: test_c_args)
+      cdata.set('HAVE_I_LOAD_OPENID_CONFIG', 1)
+    endif
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2834,6 +2877,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3435,6 +3479,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 249ecc5ffd..f54f7fd717 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl', 'iddawc'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl, iddawc)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b3f8c24e0..79b3647834 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07e73567dc..f470c77669 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -231,6 +231,9 @@
 /* Define to 1 if __builtin_constant_p(x) implies "i"(x) acceptance. */
 #undef HAVE_I_CONSTRAINT__BUILTIN_CONSTANT_P
 
+/* Define to 1 if you have the `i_load_openid_config' function. */
+#undef HAVE_I_LOAD_OPENID_CONFIG
+
 /* Define to 1 if you have the `kqueue' function. */
 #undef HAVE_KQUEUE
 
@@ -243,6 +246,12 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
+/* Define to 1 if you have the `iddawc' library (-liddawc). */
+#undef HAVE_LIBIDDAWC
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -711,6 +720,15 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
+/* Define to 1 to use libiddawc for OAuth support. */
+#undef USE_OAUTH_IDDAWC
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fe2af575c5..86aada810f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,16 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),iddawc)
+OBJS += fe-auth-oauth-iddawc.o
+else
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb1..0f8f5e3125 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,6 @@ PQsendClosePrepared       190
 PQsendClosePortal         191
 PQchangePassword          192
 PQsendPipelineSync        193
+PQsetAuthDataHook         194
+PQgetAuthDataHook         195
+PQdefaultAuthDataHook     196
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..0504f96e4e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1982 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* Per RFC 8628, Sec. 3.2, the default interval is five seconds. */
+		authz->interval = 5;
+	}
+
+	return true;
+}
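For reference, a device authorization response satisfying the field table above might look like the example from RFC 8628, Sec. 3.2 (`verification_uri_complete` and `expires_in` are currently ignored by this parser):

```json
{
  "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
  "user_code": "WDJB-MJHT",
  "verification_uri": "https://example.com/device",
  "verification_uri_complete": "https://example.com/device?user_code=WDJB-MJHT",
  "expires_in": 1800,
  "interval": 5
}
```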
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available (epoll or kqueue is required)");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not
+ * needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data	*curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (!actx->headers)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	if (PQExpBufferBroken(resp))
+		return 0;				/* tell cURL to abort the transfer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or a (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
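For reference, the two response shapes handled here: a 200 carries the access token (RFC 6749, Sec. 5.1), for example:

```json
{
  "access_token": "2YotnFZFEjr1zCsicMWpAA",
  "token_type": "Bearer",
  "expires_in": 3600
}
```

while a 400 carries an error object (RFC 8628, Sec. 3.5), most commonly `authorization_pending` until the user approves the request out of band:

```json
{
  "error": "authorization_pending"
}
```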
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+			/* FALLTHROUGH */
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					if (err->error)
+						appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+					else
+						appendPQExpBufferStr(&actx->errbuf,
+											 "(no error code received)");
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth-iddawc.c b/src/interfaces/libpq/fe-auth-oauth-iddawc.c
new file mode 100644
index 0000000000..e78d4304d3
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-iddawc.c
@@ -0,0 +1,319 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-iddawc.c
+ *	   The libiddawc implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-iddawc.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <iddawc.h>
+
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+
+#ifdef HAVE_I_LOAD_OPENID_CONFIG
+/* Older versions of iddawc used 'load' instead of 'get' for some APIs. */
+#define i_get_openid_config i_load_openid_config
+#endif
+
+static const char *
+iddawc_error_string(int errcode)
+{
+	switch (errcode)
+	{
+		case I_OK:
+			return "I_OK";
+
+		case I_ERROR:
+			return "I_ERROR";
+
+		case I_ERROR_PARAM:
+			return "I_ERROR_PARAM";
+
+		case I_ERROR_MEMORY:
+			return "I_ERROR_MEMORY";
+
+		case I_ERROR_UNAUTHORIZED:
+			return "I_ERROR_UNAUTHORIZED";
+
+		case I_ERROR_SERVER:
+			return "I_ERROR_SERVER";
+	}
+
+	return "<unknown>";
+}
+
+static void
+iddawc_error(PGconn *conn, int errcode, const char *msg)
+{
+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
+	appendPQExpBuffer(&conn->errorMessage,
+					  libpq_gettext(" (iddawc error %s)\n"),
+					  iddawc_error_string(errcode));
+}
+
+static void
+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
+{
+	const char *error_code;
+	const char *desc;
+
+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
+
+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
+	if (!error_code)
+	{
+		/*
+		 * The server didn't give us any useful information, so just print the
+		 * error code.
+		 */
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("(iddawc error %s)\n"),
+						  iddawc_error_string(err));
+		return;
+	}
+
+	/* If the server gave a string description, print that too. */
+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
+	if (desc)
+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
+
+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
+}
+
+/*
+ * Runs the device authorization flow using libiddawc. On success, returns a
+ * malloc'd token string in "Bearer xxxx..." format, suitable for sending to
+ * an OAUTHBEARER server; returns NULL on error.
+ */
+static char *
+run_iddawc_auth_flow(PGconn *conn, const char *discovery_uri)
+{
+	struct _i_session session;
+	PQExpBuffer token_buf = NULL;
+	int			err;
+	int			auth_method;
+	bool		user_prompted = false;
+	const char *verification_uri;
+	const char *user_code;
+	const char *access_token;
+	const char *token_type;
+	char	   *token = NULL;
+
+	i_init_session(&session);
+
+	token_buf = createPQExpBuffer();
+	if (!token_buf)
+		goto cleanup;
+
+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, discovery_uri);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
+		goto cleanup;
+	}
+
+	err = i_get_openid_config(&session);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer has no token endpoint\n"));
+		goto cleanup;
+	}
+
+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer does not support device authorization\n"));
+		goto cleanup;
+	}
+
+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set device code response type");
+		goto cleanup;
+	}
+
+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
+
+	err = i_set_parameter_list(&session,
+							   I_OPT_CLIENT_ID, conn->oauth_client_id,
+							   I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
+							   I_OPT_TOKEN_METHOD, auth_method,
+							   I_OPT_SCOPE, conn->oauth_scope,
+							   I_OPT_NONE
+		);
+	if (err)
+	{
+		iddawc_error(conn, err, "failed to set client identifier");
+		goto cleanup;
+	}
+
+	err = i_run_device_auth_request(&session);
+	if (err)
+	{
+		iddawc_request_error(conn, &session, err,
+							 "failed to obtain device authorization");
+		goto cleanup;
+	}
+
+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
+	if (!verification_uri)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a verification URI\n"));
+		goto cleanup;
+	}
+
+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
+	if (!user_code)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a user code\n"));
+		goto cleanup;
+	}
+
+	/*
+	 * Poll the token endpoint until either the user logs in and authorizes
+	 * the use of a token, or a hard failure occurs. We perform one ping
+	 * _before_ prompting the user, so that we don't make them do the work of
+	 * logging in only to find that the token endpoint is completely
+	 * unreachable.
+	 */
+	err = i_run_token_request(&session);
+	while (err)
+	{
+		const char *error_code;
+		uint		interval;
+
+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
+
+		/*
+		 * authorization_pending and slow_down are the only acceptable errors;
+		 * anything else and we bail.
+		 */
+		if (!error_code || (strcmp(error_code, "authorization_pending")
+							&& strcmp(error_code, "slow_down")))
+		{
+			iddawc_request_error(conn, &session, err,
+								 "failed to obtain access token");
+			goto cleanup;
+		}
+
+		if (!user_prompted)
+		{
+			int			res;
+			PQpromptOAuthDevice prompt = {
+				.verification_uri = verification_uri,
+				.user_code = user_code,
+				/* TODO: optional fields */
+			};
+
+			/*
+			 * Now that we know the token endpoint isn't broken, give the user
+			 * the login instructions.
+			 */
+			res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+								 &prompt);
+
+			if (!res)
+			{
+				fprintf(stderr, "Visit %s and enter the code: %s\n",
+						prompt.verification_uri, prompt.user_code);
+			}
+			else if (res < 0)
+			{
+				appendPQExpBufferStr(&conn->errorMessage,
+									 libpq_gettext("device prompt failed\n"));
+				goto cleanup;
+			}
+
+			user_prompted = true;
+		}
+
+		/*---
+		 * We are required to wait between polls; the server tells us how
+		 * long.
+		 * TODO: if interval's not set, we need to default to five seconds
+		 * TODO: sanity check the interval
+		 */
+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
+
+		/*
+		 * A slow_down error requires us to permanently increase our retry
+		 * interval by five seconds. RFC 8628, Sec. 3.5.
+		 */
+		if (!strcmp(error_code, "slow_down"))
+		{
+			interval += 5;
+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
+		}
+
+		sleep(interval);
+
+		/*
+		 * XXX Reset the error code before every call, because iddawc won't do
+		 * that for us. This matters if the server first sends a "pending"
+		 * error code, then later hard-fails without sending an error code to
+		 * overwrite the first one.
+		 *
+		 * That we have to do this at all seems like a bug in iddawc.
+		 */
+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
+
+		err = i_run_token_request(&session);
+	}
+
+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
+
+	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("issuer did not provide a bearer token\n"));
+		goto cleanup;
+	}
+
+	appendPQExpBufferStr(token_buf, "Bearer ");
+	appendPQExpBufferStr(token_buf, access_token);
+
+	if (PQExpBufferBroken(token_buf))
+		goto cleanup;
+
+	token = strdup(token_buf->data);
+
+cleanup:
+	if (token_buf)
+		destroyPQExpBuffer(token_buf);
+	i_clean_session(&session);
+
+	return token;
+}
+
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	/* TODO: actually make this asynchronous */
+	state->token = run_iddawc_auth_flow(conn, conn->oauth_discovery_uri);
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..66ee8ff076
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
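+/*
+ * Builds the client's initial response for the OAUTHBEARER exchange. Per RFC
+ * 7628, Sec. 4.1, for an example token "Bearer abcd", the resulting message
+ * is
+ *
+ *     n,,\x01auth=Bearer abcd\x01\x01
+ *
+ * i.e. a GS2 header (no channel binding, no authzid), the auth key/value
+ * pair, and a terminating pair of key/value separators.
+ */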
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+/*
+ * Handles the error JSON document sent by the server during the OAUTHBEARER
+ * exchange (RFC 7628, Sec. 3.2.2), which carries a "status" code along with
+ * optional "scope" and "openid-configuration" members. Returns false if the
+ * response couldn't be parsed or is missing required information.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+/*
+ * Drives an application-provided OAuth flow (registered through
+ * PQauthDataHook), polling its async callback until it either produces a
+ * token or fails.
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+/*
+ * Decides whether to use an application-provided OAuth flow or our built-in
+ * flow, and records the choice in conn->async_auth. Returns false on
+ * failure.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+/*
+ * The SASL exchange callback for OAUTHBEARER. The first call either sends
+ * the bearer token (if one is already available) or returns SASL_ASYNC to
+ * hand control to the asynchronous token-fetching implementation; subsequent
+ * calls handle the server's response to the token.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn, "server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
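As a cross-check on the client state machine above: the OAUTHBEARER messages it exchanges are plain key/value strings defined by RFC 7628. A minimal Python sketch (illustrative only, not the patch's API) of the initial client response and of the one-byte dummy reply sent after a server error challenge:

```python
KVSEP = "\x01"  # RFC 7628 key/value separator (^A)


def client_initial_response(token: str, authzid: str = "") -> str:
    """Build the OAUTHBEARER initial response (RFC 7628, sec. 3.1).

    The gs2 header advertises no channel binding ("n"). Note that an
    empty token still produces a well-formed message, which is how the
    patch asks the server to volunteer its discovery information.
    """
    gs2 = "n," + (f"a={authzid}" if authzid else "") + ","
    return f"{gs2}{KVSEP}auth=Bearer {token}{KVSEP}{KVSEP}"


# After a server error challenge, the client must answer with a single
# kvsep before the server fails the exchange (RFC 7628, sec. 3.2.3).
DUMMY_ERROR_REPLY = KVSEP
```

The trailing double kvsep is easy to get wrong: each key/value pair carries its own terminating separator, and the whole message is then closed with one more.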
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
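The driver side of this contract is small; a toy Python model of how pg_SASL_init()/pg_SASL_continue() dispatch on the status codes, including the new SASL_ASYNC escape, might look like the following. ScriptedMechanism, drive_exchange, and the send/recv callables are illustrative stand-ins, not libpq code:

```python
from enum import Enum, auto


class SASLStatus(Enum):
    COMPLETE = auto()
    FAILED = auto()
    CONTINUE = auto()
    ASYNC = auto()  # new: escape to the mechanism's async callback


class ScriptedMechanism:
    """Stand-in for a pg_fe_sasl_mech whose exchange() follows a script
    of (status, output) pairs."""

    def __init__(self, script):
        self._script = list(script)

    def exchange(self, challenge):
        return self._script.pop(0)


def drive_exchange(mechanism, send, recv):
    """Toy driver loop: send/recv stand in for the protocol transport."""
    challenge = None  # None = generating the initial client response
    while True:
        status, output = mechanism.exchange(challenge)
        if status is SASLStatus.FAILED:
            return "error"
        if status is SASLStatus.ASYNC:
            # The mechanism has installed its async callback; the
            # connection state machine must poll it before the SASL
            # exchange can continue.
            return "async"
        if output is not None:
            send(output)
        if status is SASLStatus.COMPLETE:
            return "ok"
        challenge = recv()
```

The key design point mirrored here is that SASL_ASYNC short-circuits the loop without consuming or producing protocol traffic; control returns to the caller, which resumes the exchange later.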
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 81ec08485d..9cd5c8cfb1 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -955,12 +997,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1118,7 +1166,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1135,7 +1183,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1451,3 +1500,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f..15ceb73d01 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -359,6 +359,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -618,6 +635,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -2536,6 +2554,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3517,6 +3536,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3672,6 +3692,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -3753,7 +3783,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3786,6 +3826,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4285,6 +4360,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4400,6 +4476,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6868,6 +6949,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
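To make the new CONNECTION_AUTHENTICATING arm easier to follow, here is a simplified Python model of one pass through it. The dict-based conn and the function name are hypothetical; the point is the three-way dispatch on the async callback's polling status and the handling of the alternative socket:

```python
from enum import Enum, auto


class Polling(Enum):
    READING = auto()
    WRITING = auto()
    OK = auto()
    FAILED = auto()


def authenticating_step(conn):
    """One pass through a simplified CONNECTION_AUTHENTICATING state:
    run the async callback and decide what the caller does next."""
    status, altsock = conn["async_auth"](conn)
    if status is Polling.FAILED:
        conn["status"] = "error"
    elif status is Polling.OK:
        # External flow finished. The auth request that started it was
        # never consumed, so reenter the normal exchange as if the
        # message had just arrived.
        conn["status"] = "awaiting_response"
        conn["altsock"] = None
    else:
        # Still waiting: expose the mechanism's descriptor so an
        # application poll loop (via PQsocket()) waits on the right fd.
        conn["altsock"] = altsock
    return conn["status"]
```

This matches the patch's PQsocket() change: while the alternative socket is set, that is the descriptor the application should wait on, and it is cleared again before control returns to the server exchange.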
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f2fc78a481..663b1c1acf 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1039,10 +1039,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1059,7 +1062,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3..d095351c66 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -78,7 +80,9 @@ typedef enum
 	CONNECTION_CONSUME,			/* Consuming any extra messages. */
 	CONNECTION_GSS_STARTUP,		/* Negotiating GSSAPI. */
 	CONNECTION_CHECK_TARGET,	/* Checking target server properties. */
-	CONNECTION_CHECK_STANDBY	/* Checking if server is in standby mode. */
+	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -160,6 +164,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -658,10 +669,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d..cf26c693e3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -351,6 +351,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -409,6 +411,15 @@ struct pg_conn
 	char	   *require_auth;	/* name of the expected auth method */
 	char	   *load_balance_hosts; /* load balance over hosts */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -477,6 +488,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index be6fadaea2..ae90483319 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,15 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'iddawc'
+    libpq_sources += files('fe-auth-oauth-iddawc.c')
+  else
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index b0f4178b3d..f803c1200b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -231,6 +231,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2461567026..6234fe66f1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -354,6 +355,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1653,6 +1656,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1718,6 +1722,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1877,11 +1882,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3343,6 +3351,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v18-0009-WIP-Python-OAuth-provider-implementation.patch (application/octet-stream)
From 563046557804b1d397d74dfc6bd83c188fc30907 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 26 Feb 2024 16:24:32 -0800
Subject: [PATCH v18 9/9] WIP: Python OAuth provider implementation

---
 src/test/modules/oauth_validator/Makefile     |   2 +
 src/test/modules/oauth_validator/meson.build  |   3 +
 .../modules/oauth_validator/t/001_server.pl   |  12 +-
 .../modules/oauth_validator/t/oauth_server.py |  91 +++++++++++
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 141 +++---------------
 5 files changed, 124 insertions(+), 125 deletions(-)
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py

diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 1f874cd7f2..e93e01455a 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -1,3 +1,5 @@
+export PYTHON
+
 MODULES = validator
 PGFILEDESC = "validator - test OAuth validator module"
 
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index d9c1d1d577..3feba6f826 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -29,5 +29,8 @@ tests += {
     'tests': [
       't/001_server.pl',
     ],
+    'env': {
+      'PYTHON': python.path(),
+    },
   },
 }
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 49e04b0afe..bbfa69e442 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -34,20 +34,16 @@ $node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n"
 $node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
 $node->start;
 
-reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
-
-my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
 
 my $port = $webserver->port();
-
-is($port, 18080, "Port is 18080");
-
-$webserver->setup();
-$webserver->run();
+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:' . $port . '" scope="openid postgres"');
 
 $node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
 				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
 
+$webserver->stop();
 $node->stop;
 
 done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..7fa0b05a18
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,91 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def do_GET(self):
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        return {
+            "issuer": f"http://localhost:{port}",
+            "token_endpoint": f"http://localhost:{port}/token",
+            "device_authorization_endpoint": f"http://localhost:{port}/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": "https://example.com/",
+            "expires-in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        return {
+            "access_token": "9243959234",
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
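For anyone who wants to poke at the provider outside the Perl harness, a condensed, self-contained variant of the handler above (discovery endpoint only) can be spun up on an ephemeral port and queried directly. This is a sketch, not part of the test suite:

```python
import http.server
import json
import threading
import urllib.request


class DiscoveryHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/.well-known/openid-configuration":
            self.send_error(404, "Not Found")
            return
        port = self.server.socket.getsockname()[1]
        body = json.dumps({
            "issuer": f"http://localhost:{port}",
            "token_endpoint": f"http://localhost:{port}/token",
        }).encode("ascii")
        self.send_response(200, "OK")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep output quiet


# Bind port 0 so the OS picks a free port, as the real script does.
server = http.server.HTTPServer(("127.0.0.1", 0), DiscoveryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.socket.getsockname()[1]

url = f"http://127.0.0.1:{port}/.well-known/openid-configuration"
with urllib.request.urlopen(url) as resp:
    config = json.load(resp)
server.shutdown()
```

Binding port 0 and reporting the chosen port back is the same trick the Perl wrapper below relies on, which removes the hardcoded 18080 from the test.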
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index 3ac90c3d0f..d96733f531 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -5,6 +5,7 @@ package PostgreSQL::Test::OAuthServer;
 use warnings;
 use strict;
 use threads;
+use Scalar::Util;
 use Socket;
 use IO::Select;
 
@@ -13,27 +14,13 @@ local *server_socket;
 sub new
 {
 	my $class = shift;
-	my $port = shift;
 
 	my $self = {};
 	bless($self, $class);
 
-	$self->{'port'} = $port;
-
 	return $self;
 }
 
-sub setup
-{
-	my $self = shift;
-	my $tcp = getprotobyname('tcp');
-
-	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
-		or die "no socket";
-	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
-	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
-}
-
 sub port
 {
 	my $self = shift;
@@ -44,115 +31,35 @@ sub port
 sub run
 {
 	my $self = shift;
+	my $port;
 
-	my $server_thread = threads->create(\&_listen, $self);
-	$server_thread->detach();
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
 }
 
-sub _listen
+sub stop
 {
 	my $self = shift;
 
-	listen($self->{'socket'}, SOMAXCONN) or die "fail to listen: $!";
-
-	while (1)
-	{
-		my $fh;
-		my %request;
-		my $remote = accept($fh, $self->{'socket'});
-		binmode $fh;
-
-		my ($method, $object, $prot) = split(/ /, <$fh>);
-		$request{'method'} = $method;
-		$request{'object'} = $object;
-		chomp($request{'object'});
-
-		local $/ = Socket::CRLF;
-		my $c = 0;
-		while(<$fh>)
-		{
-			chomp;
-			# Headers
-			if (/:/)
-			{
-				my ($field, $value) = split(/:/, $_, 2);
-				$value =~ s/^\s+//;
-				$request{'headers'}{lc $field} = $value;
-			}
-			# POST data
-			elsif (/^$/)
-			{
-				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
-					if defined $request{'headers'}{'content-length'};
-				last;
-			}
-		}
-
-		# Debug printing
-		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
-		# foreach my $h (keys(%{$request{'headers'}}))
-		#{
-		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
-		#}
-		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
-
-		if ($request{'object'} eq '/.well-known/openid-configuration')
-		{
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"issuer": "http://localhost:$self->{'port'}",
-				"token_endpoint": "http://localhost:$self->{'port'}/token",
-				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
-				"response_types_supported": ["token"],
-				"subject_types_supported": ["public"],
-				"id_token_signing_alg_values_supported": ["RS256"],
-				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/authorize')
-		{
-			print ": returning device_code\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"device_code": "postgres",
-				"user_code" : "postgresuser",
-				"interval" : 0,
-				"verification_uri" : "https://example.com/",
-				"expires-in": 5
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/token')
-		{
-			print ": returning token\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"access_token": "9243959234",
-				"token_type": "bearer"
-			}
-EOR
-		}
-		else
-		{
-			print ": returning default\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: text/html\r\n";
-			print $fh "\r\n";
-			print $fh "Ok\n";
-		}
-
-		close($fh);
-	}
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
 }
 
 1;
-- 
2.34.1

Attachment: v18-0008-XXX-temporary-patches-to-build-and-test.patch (application/octet-stream)
From 8ad4ce3068b6f75467d54060bc58df1b73b66d41 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 20 Feb 2024 11:35:29 -0800
Subject: [PATCH v18 8/9] XXX temporary patches to build and test

- the new pg_combinebackup utility uses JSON in the frontend without
  0001; has something changed?
- construct 2.10.70 has some incompatibilities with the current tests
- temporarily skip the exit check (from Daniel Gustafsson); this needs
  to be turned into an exception for curl rather than a plain exit call
---
 src/bin/pg_combinebackup/Makefile    | 6 ++++--
 src/bin/pg_combinebackup/meson.build | 3 ++-
 src/bin/pg_verifybackup/Makefile     | 2 +-
 src/interfaces/libpq/Makefile        | 2 +-
 src/test/python/requirements.txt     | 4 +++-
 5 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index 4f24b1aff6..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,8 +32,8 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 30dbbaa6cf..926f63f365 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,8 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  # XXX linking against libpq isn't good, but how was JSON working?
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 86aada810f..d01d932a2c 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -126,7 +126,7 @@ libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
 	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
-		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
+		echo 'libpq must not be calling any function which invokes exit'; \
 	fi
 endif
 endif
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
index 57ba1ced94..0dfcffb83e 100644
--- a/src/test/python/requirements.txt
+++ b/src/test/python/requirements.txt
@@ -1,7 +1,9 @@
 black
 # cryptography 35.x and later add many platform/toolchain restrictions, beware
 cryptography~=3.4.8
-construct~=2.10.61
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
 isort~=5.6
 # TODO: update to psycopg[c] 3.1
 psycopg2~=2.9.7
-- 
2.34.1

Attachment: v18-0007-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 9c46ea6cf9e35c31692ea6fe482e3c404df66df3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v18 7/9] Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

For iddawc, asynchronous tests still hang, as expected. Bad-interval
tests fail because iddawc apparently doesn't care that the interval is
bad.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |   22 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  137 ++
 src/test/python/client/test_client.py |  180 +++
 src/test/python/client/test_oauth.py  | 1768 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  732 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |    9 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  |  945 +++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  563 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5334 insertions(+), 7 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3b5b54df58..2501743b31 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl load_balance python
 
 
 # What files to preserve in case tests fail
@@ -165,7 +165,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -177,6 +177,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -225,6 +226,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -237,6 +239,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -312,8 +315,11 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -368,6 +374,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -378,7 +386,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.32-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
@@ -676,8 +684,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/meson.build b/meson.build
index 43066a017f..154133b22e 100644
--- a/meson.build
+++ b/meson.build
@@ -3190,6 +3190,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3351,6 +3354,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..94f3620af3
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,137 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            self._pump_async(conn)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..eea86d7f2b
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1768 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # split only on the first '='; the value may itself contain '='
+    assert key == b"auth"
+
+    return value
+
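+# For reference (RFC 7628, Sec. 4.1), the initial client response that
+# get_auth_value() picks apart looks like this on the wire, where \x01 is the
+# kvsep control character:
+#
+#     n,,\x01auth=Bearer <token>\x01\x01
+#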
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": "application/json"}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # return value used when the test hasn't set cb.impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept, openid_provider, asynchronous, retries, scope, secret, auth_data_cb
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected verification URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, _ = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "async callback fired before the timer tripped"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we cleaned up after ourselves.
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
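+# For example, alt_patterns("foo", "bar") produces "(foo)|(bar)".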
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            alt_patterns(
+                r'failed to parse token error response: field "error" is missing',
+                r"failed to obtain device authorization: \(iddawc error I_ERROR_PARAM\)",
+            ),
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            alt_patterns(
+                r"failed to parse device authorization: Token .* is invalid",
+                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
+            ),
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            alt_patterns(
+                r"failed to parse device authorization: Token .* is invalid",
+                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
+            ),
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json_schema()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    # XXX iddawc doesn't really check for problems in the device authorization
+    # response, leading to this patchwork:
+    if field_name == "verification_uri":
+        error_pattern = alt_patterns(
+            error_pattern,
+            "issuer did not provide a verification URI",
+        )
+    elif field_name == "user_code":
+        error_pattern = alt_patterns(
+            error_pattern,
+            "issuer did not provide a user code",
+        )
+    else:
+        error_pattern = alt_patterns(
+            error_pattern,
+            r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
+        )
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
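The skip condition in the test above compares `type(bad_value) == ok_type` rather than using `isinstance()`; a standalone illustration of why (plain Python, independent of the test harness):

```python
# bool is a subclass of int, so isinstance() would treat False as a
# valid value for an integer-typed field, and the boolean case would
# be wrongly skipped from the matrix. Comparing the exact type keeps it.
assert isinstance(False, int)   # bool IS-A int
assert type(False) is not int   # but its exact type is bool
assert type(False) is bool
```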
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            alt_patterns(
+                r'failed to parse token error response: field "error" is missing',
+                r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
+            ),
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            alt_patterns(
+                r"failed to parse access token response: no content type was provided",
+                r"failed to obtain access token: \(iddawc error I_ERROR\)",
+            ),
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            alt_patterns(
+                r"failed to parse access token response: unexpected content type",
+                r"failed to obtain access token: \(iddawc error I_ERROR\)",
+            ),
+            id="wrong content type",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    # XXX iddawc is fairly silent on the topic.
+    error_pattern = alt_patterns(
+        error_pattern,
+        r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
+    )
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # XXX iddawc doesn't differentiate...
+    expected_error = alt_patterns(
+        expected_error,
+        r"failed to fetch OpenID discovery document \(iddawc error I_ERROR(_PARAM)?\)",
+    )
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
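The `token_endpoint` callbacks in the tests above run on the mock provider's background thread, which is why the retry bookkeeping sits under a lock. The same pattern can be factored into a small reusable callable; a sketch under that assumption (`MockTokenEndpoint` is illustrative, not part of the patch):

```python
import threading


class MockTokenEndpoint:
    """Illustrative stand-in for the patch's token_endpoint callbacks.

    Returns authorization_pending until the retry budget is exhausted,
    then serves the final response exactly once; a further request means
    the client failed to bail out.
    """

    def __init__(self, retries, final_response):
        self._lock = threading.Lock()  # callbacks may run on any thread
        self._retries = retries
        self._final_response = final_response
        self._final_sent = False

    def __call__(self, headers, params):
        with self._lock:
            if self._retries > 0:
                self._retries -= 1
                return 400, {"error": "authorization_pending"}

            assert not self._final_sent, "client continued after token error"
            self._final_sent = True

        return self._final_response
```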
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..a2d2812f0e
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,732 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
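The wire layout the Startup struct produces can also be built by hand with nothing but `struct`: a signed int32 total length (counting itself and the protocol field), the packed protocol version, then NUL-terminated key/value strings closed by one extra NUL. A standalone sketch, for comparison with the construct definition above:

```python
import struct

def build_startup(params, proto=(3 << 16)):
    # Key/value pairs as NUL-terminated strings, plus a final terminator byte.
    payload = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in params.items()
    )
    payload += b"\x00"
    # The length field covers itself (4 bytes) and the protocol field (4).
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice", "database": "postgres"})
```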
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length, by not enforcing
+        # a FixedSized during build. (The len calculation above defaults to
+        # the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
+
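Regular v3 messages framed by the Pq3 struct above are a one-byte type, an int32 length that counts itself but not the type byte, and then the payload. A hand-rolled Query packet shows the framing:

```python
import struct

def build_query(sql):
    # 'Q' + int32 length (includes itself, excludes the type byte)
    # + NUL-terminated query string.
    payload = sql.encode() + b"\x00"
    return b"Q" + struct.pack("!i", len(payload) + 4) + payload

pkt = build_query("SELECT 1")
```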
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
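The translation map built above is the same trick classic hexdump tools use for their ASCII column: every unprintable or non-ASCII byte renders as '.'. A minimal standalone reproduction of the table:

```python
# Rebuild the same table: unprintable ASCII plus all bytes >= 0x80 map to '.'.
unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
table = bytes.maketrans(
    unprintable + bytes(range(128, 256)),
    b"." * (len(unprintable) + 128),
)

rendered = b"SELECT 1\x00\x00\xff".translate(table).decode("ascii")
```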
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (special protocol version 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..57ba1ced94
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,9 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+construct~=2.10.61
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..f8e6c1651b
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authenticated = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authenticated = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..2a2ca59e94
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,945 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
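The restore-on-exit behavior of prepend_file can be seen in a standalone sketch of the same backup/replace pattern (the file name and contents here are hypothetical, chosen to mimic an HBA file):

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "pg_hba.conf")
    with open(path, "w") as f:
        f.write("host all all samehost trust\n")

    # Back up the original, write the prepended lines plus the old content,
    # and restore the backup on the way out -- the same shape as prepend_file.
    bak = path + ".bak"
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(["local all all trust\n"])
            shutil.copyfileobj(orig, new)
        with open(path) as f:
            first = f.readline()
    finally:
        os.replace(bak, path)

    with open(path) as f:
        restored = f.read()
```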
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The token length
+    in characters may be specified; if unset, a small 16-character token is
+    generated. The length must be a multiple of 4.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
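As a side note, the size arithmetic in bearer_token() relies on the fact that unpadded base64url (what token_urlsafe() produces) emits four output characters for every three bytes of input, so `size // 4 * 3` entropy bytes yield exactly `size` characters whenever `size` is a multiple of 4. A quick standalone check of that relationship:

```python
import secrets

# token_urlsafe(nbytes) returns unpadded base64url: 4 output characters per
# 3 input bytes, rounded up. bearer_token() inverts this with size // 4 * 3,
# which is exact only when size is a multiple of 4 -- hence the guard above.
for nbytes in range(1, 16):
    token = secrets.token_urlsafe(nbytes)
    assert len(token) == -(-nbytes * 4 // 3)  # ceil(nbytes * 4 / 3)

# The default: 12 bytes of entropy encode to a 16-character token.
assert len(secrets.token_urlsafe(16 // 4 * 3)) == 16
```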
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Opens the connection as the given user (oauth_ctx.authz_user by default)
+    and checks that the server advertises OAUTHBEARER as its only SASL
+    mechanism.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
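The payload built here follows the RFC 7628 client message format: a GS2 header (`n,,` meaning no channel binding and no authzid), then %x01-separated key=value pairs, closed by a double %x01 terminator. A minimal sketch of just the framing (build_initial_data() is a hypothetical helper, not part of pq3):

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator

def build_initial_data(token: bytes) -> bytes:
    # GS2 header ("n" = no channel binding, empty authzid), a single
    # auth=Bearer key/value pair, then the double-kvsep terminator.
    gs2_header = b"n,,"
    return gs2_header + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP

assert build_initial_data(b"abcd") == b"n,,\x01auth=Bearer abcd\x01\x01"
```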
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
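The discovery "challenge" validated above is the RFC 7628 error document: a JSON body carrying the failure status plus the server's configured scope and discovery URL. As a sketch of what the server side is expected to emit (field names are taken from the assertions above; error_challenge() itself is hypothetical):

```python
import json

def error_challenge(issuer: str, scope: str) -> bytes:
    # The status is fixed at "invalid_token" for a rejected bearer; the
    # discovery document URL is derived from the configured issuer.
    doc = {
        "status": "invalid_token",
        "scope": scope,
        "openid-configuration": issuer + "/.well-known/openid-configuration",
    }
    return json.dumps(doc).encode("ascii")

body = json.loads(error_challenge("https://example.com/ab12", "openid ab12"))
assert body["status"] == "invalid_token"
assert body["openid-configuration"] == (
    "https://example.com/ab12/.well-known/openid-configuration"
)
```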
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's expected behavior
+    via GUCs. Any settings changed are reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
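The invalid values above all violate the RFC 6750 credentials syntax (`"Bearer" 1*SP b64token`), which the server is expected to reject before the token ever reaches the validator. A rough model of that syntax check (the server's parser is C code; this regex is only illustrative, and the case-insensitive scheme match mirrors the accepted `bearer ` prefix tested earlier):

```python
import re

# RFC 6750 b64token: one or more ALPHA / DIGIT / "-" / "." / "_" / "~" /
# "+" / "/" characters, followed by optional "=" padding at the end only.
BEARER_RE = re.compile(rb"^Bearer +([A-Za-z0-9\-._~+/]+=*)$", re.IGNORECASE)

def extract_bearer(auth: bytes):
    m = BEARER_RE.match(auth)
    return m.group(1) if m else None

assert extract_bearer(b"Bearer abcd") == b"abcd"
assert extract_bearer(b"bearer    x-._~+/x") == b"x-._~+/x"
assert extract_bearer(b"Bearer a===b") is None   # padding only at the end
assert extract_bearer(b"Beare abcd") is None     # misspelled scheme
assert extract_bearer(b"Bearer ") is None        # empty token
```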
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
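The malformed-message cases above exercise the server's parse of the %x01-delimited section after the GS2 header, whose grammar per RFC 7628 is `kvsep *kvpair kvsep` with each kvpair being `key "=" value kvsep`. As a rough pure-Python model of the rules being tested (a hypothetical sketch, not the server's implementation, and its error wording differs from the server's):

```python
KVSEP = b"\x01"

def parse_kvpairs(msg: bytes) -> dict:
    # msg is everything after the GS2 header, e.g. b"\x01auth=Bearer x\x01\x01"
    if not msg.startswith(KVSEP):
        raise ValueError("key-value separator expected")
    if len(msg) < 2 or not msg.endswith(KVSEP):
        raise ValueError("message did not contain a final terminator")
    body = msg[1:-1]  # drop the enclosing kvseps
    pairs = {}
    while body:
        item, sep, body = body.partition(KVSEP)
        if not sep:
            raise ValueError("unterminated key/value pair")
        key, eq, value = item.partition(b"=")
        if not eq:
            raise ValueError("key without a value")
        if key in pairs:
            raise ValueError("multiple values for key %r" % key)
        pairs[key] = value
    return pairs

assert parse_kvpairs(b"\x01auth=Bearer 0\x01\x01") == {b"auth": b"Bearer 0"}
assert parse_kvpairs(b"\x01\x01") == {}  # empty key/value list
```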
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
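The expected strings encode _DebugStream's dump format: a direction marker, a 4-hex-digit offset, a tab, hex bytes padded to a 16-column row (47 characters), a tab, and a printable-ASCII rendering. A hypothetical one-line formatter that reproduces the rows asserted above:

```python
def dump_line(direction: str, offset: int, chunk: bytes, width: int = 16) -> str:
    # Hex field is padded to a full row: width * 3 - 1 characters
    # (two hex digits per byte plus single-space separators).
    hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(width * 3 - 1)
    # Non-printable bytes render as "." in the ASCII column.
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{direction} {offset:04x}:\t{hexpart}\t{text}\n"

# Matches the second row of the first expected dump above.
expected = "< 0010:\t" + "71 72 73 74 75" + " " * 33 + "\tqrstu\n"
assert dump_line("<", 0x10, b"qrstu") == expected
```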
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..638dd337a6
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,564 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

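(Aside for readers following along with test_pq3.py above: the DataRow test vectors can be reproduced without the pq3 helper module. Here's a minimal sketch of the protocol-3 DataRow framing using only the standard struct module — the `build_data_row` name is ours for illustration, not part of the patch.)

```python
import struct

def build_data_row(columns):
    """Sketch of the protocol-3 DataRow ('D') wire format.

    Each column is a 4-byte signed length followed by that many bytes;
    NULL is encoded as length -1 with no payload. The message length
    field counts itself (4 bytes) but not the leading type byte.
    """
    body = struct.pack("!h", len(columns))  # 2-byte column count
    for col in columns:
        if col is None:
            body += struct.pack("!i", -1)
        else:
            body += struct.pack("!i", len(col)) + col
    return b"D" + struct.pack("!i", len(body) + 4) + body
```

The output matches the expected bytes in the DataRow cases of test_Pq3_build, e.g. `build_data_row([b"abcd"])` yields `b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd"`.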
Attachment: v18-0006-Introduce-OAuth-validator-libraries.patch (application/octet-stream)
From 7a42365d62fd13bc4d5125017ebc8857f0855406 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 21 Feb 2024 17:04:26 +0100
Subject: [PATCH v18 6/9] Introduce OAuth validator libraries

This replaces the server-side validation code with a module API
for loading extensions that validate bearer tokens. A lot of
code is left to be written.

Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
---
 src/backend/libpq/auth-oauth.c                | 431 +++++-------------
 src/backend/utils/misc/guc_tables.c           |   6 +-
 src/bin/pg_combinebackup/Makefile             |   2 +-
 src/common/Makefile                           |   2 +-
 src/include/libpq/oauth.h                     |  29 +-
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  19 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  33 ++
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  53 +++
 src/test/modules/oauth_validator/validator.c  |  71 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 158 +++++++
 src/tools/pgindent/typedefs.list              |   2 +
 16 files changed, 500 insertions(+), 332 deletions(-)
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index a9d2646023..f5cf271566 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -6,7 +6,7 @@
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/libpq/auth-oauth.c
@@ -19,21 +19,29 @@
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "storage/ipc.h"
 
 /* GUC */
-char	   *oauth_validator_command;
+char	   *OAuthValidatorLibrary = "";
 
 static void oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
 static int	oauth_exchange(void *opaq, const char *input, int inputlen,
 						   char **output, int *outputlen, const char **logdetail);
 
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
 	oauth_get_mechanisms,
@@ -62,11 +70,7 @@ struct oauth_ctx
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, const char **logdetail);
-static bool run_validator_command(Port *port, const char *token);
-static bool check_exit(FILE **fh, const char *command);
-static bool set_cloexec(int fd);
-static bool username_ok_for_shell(const char *username);
+static bool validate(Port *port, const char *auth);
 
 #define KVSEP 0x01
 #define AUTH_KEY "auth"
@@ -99,6 +103,8 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	ctx->issuer = port->hba->oauth_issuer;
 	ctx->scope = port->hba->oauth_scope;
 
+	load_validator_library();
+
 	return ctx;
 }
 
@@ -249,7 +255,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
 
-	if (!validate(ctx->port, auth, logdetail))
+	if (!validate(ctx->port, auth))
 	{
 		generate_error_response(ctx, output, outputlen);
 
@@ -416,70 +422,73 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	*outputlen = buf.len;
 }
 
-static bool
-validate(Port *port, const char *auth, const char **logdetail)
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
 {
-	static const char *const b64_set =
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
 		"abcdefghijklmnopqrstuvwxyz"
 		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 		"0123456789-._~+/";
 
-	const char *token;
-	size_t		span;
-	int			ret;
+	/* Reject headers that are missing or too short to hold "Bearer " plus a token. */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
 
-	/* TODO: handle logdetail when the test framework can check it */
-
-	/*-----
-	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
-	 * 2.1:
-	 *
-	 *      b64token    = 1*( ALPHA / DIGIT /
-	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
-	 *      credentials = "Bearer" 1*SP b64token
-	 *
-	 * The "credentials" construction is what we receive in our auth value.
-	 *
-	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
-	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
-	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
-	 * it's pointed out in RFC 7628 Sec. 4.)
-	 *
-	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
-	 */
-	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
-		return false;
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
 
 	/* Pull the bearer token out of the auth value. */
-	token = auth + strlen(BEARER_SCHEME);
+	token = header + strlen(BEARER_SCHEME);
 
 	/* Swallow any additional spaces. */
 	while (*token == ' ')
 		token++;
 
-	/*
-	 * Before invoking the validator command, sanity-check the token format to
-	 * avoid any injection attacks later in the chain. Invalid formats are
-	 * technically a protocol violation, but don't reflect any information
-	 * about the sensitive Bearer token back to the client; log at COMMERROR
-	 * instead.
-	 */
-
 	/* Tokens must not be empty. */
 	if (!*token)
 	{
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token"),
 				 errdetail("Bearer token is empty.")));
-		return false;
+		return NULL;
 	}
 
 	/*
 	 * Make sure the token contains only allowed characters. Tokens may end
 	 * with any number of '=' characters.
 	 */
-	span = strspn(token, b64_set);
+	span = strspn(token, b64token_allowed_set);
 	while (token[span] == '=')
 		span++;
 
@@ -492,15 +501,35 @@ validate(Port *port, const char *auth, const char **logdetail)
 		 */
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token"),
 				 errdetail("Bearer token is not in the correct format.")));
-		return false;
+		return NULL;
 	}
 
-	/* Have the validator check the token. */
-	if (!run_validator_command(port, token))
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a well-formed bearer token to validate */
+	if (!(token = validate_token_format(auth)))
 		return false;
 
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authenticated)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
 	if (port->hba->oauth_skip_usermap)
 	{
 		/*
@@ -513,7 +542,7 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Make sure the validator authenticated the user. */
-	if (!MyClientConnectionInfo.authn_id)
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
 		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
@@ -523,288 +552,42 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Finally, check the user map. */
-	ret = check_usermap(port->hba->usermap, port->user_name,
-						MyClientConnectionInfo.authn_id, false);
-	return (ret == STATUS_OK);
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
 }
 
-static bool
-run_validator_command(Port *port, const char *token)
-{
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = {0};
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*------
-	 * Since popen() is unidirectional, open up a pipe for the other
-	 * direction. Use CLOEXEC to ensure that our write end doesn't
-	 * accidentally get copied into child processes, which would prevent us
-	 * from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe(pipefd);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
-		return false;
-	}
-
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	if (!set_cloexec(wfd))
-	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*----------
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-
-					/*
-					 * TODO: decide how this string should be escaped. The
-					 * role is controlled by the client, so if we don't escape
-					 * it, command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some
-					 * other way. For this proof of concept, just be
-					 * incredibly strict about the characters that are allowed
-					 * in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "r");
-	if (!fh)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("opening pipe to OAuth validator: %m")));
-		goto cleanup;
-	}
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*-----
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
-	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
-	}
-
-	if (command.data)
-		pfree(command.data);
-
-	return success;
-}
-
-static bool
-check_exit(FILE **fh, const char *command)
+static void
+load_validator_library(void)
 {
-	int			rc;
+	OAuthValidatorModuleInit validator_init;
 
-	rc = ClosePipeStream(*fh);
-	*fh = NULL;
-
-	if (rc == -1)
-	{
-		/* pclose() itself failed. */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not close pipe to command \"%s\": %m",
-						command)));
-	}
-	else if (rc != 0)
-	{
-		char	   *reason = wait_result_to_str(rc);
-
-		ereport(COMMERROR,
-				(errmsg("failed to execute command \"%s\": %s",
-						command, reason)));
-
-		pfree(reason);
-	}
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
 
-	return (rc == 0);
-}
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
 
-static bool
-set_cloexec(int fd)
-{
-	int			flags;
-	int			rc;
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
 
-	flags = fcntl(fd, F_GETFD);
-	if (flags == -1)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not get fd flags for child pipe: %m")));
-		return false;
-	}
+	ValidatorCallbacks = (*validator_init) ();
 
-	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
-		return false;
-	}
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
 
-	return true;
+	before_shmem_exit(shutdown_validator_library, 0);
 }
 
-/*
- * XXX This should go away eventually and be replaced with either a proper
- * escape or a different strategy for communication with the validator command.
- */
-static bool
-username_ok_for_shell(const char *username)
+static void
+shutdown_validator_library(int code, Datum arg)
 {
-	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
-	static const char *const allowed =
-		"abcdefghijklmnopqrstuvwxyz"
-		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-		"0123456789-_./:";
-	size_t		span;
-
-	Assert(username && username[0]);	/* should have already been checked */
-
-	span = strspn(username, allowed);
-	if (username[span] != '\0')
-	{
-		ereport(COMMERROR,
-				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
-		return false;
-	}
-
-	return true;
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
 }
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 0f83f0d870..a479b679d1 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -4672,12 +4672,12 @@ struct config_string ConfigureNamesString[] =
 	},
 
 	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library used to validate OAuth v2 bearer tokens."),
 			NULL,
 			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
 		},
-		&oauth_validator_command,
+		&OAuthValidatorLibrary,
 		"",
 		NULL, NULL, NULL
 	},
diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..4f24b1aff6 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -31,7 +31,7 @@ OBJS = \
 all: pg_combinebackup
 
 pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/common/Makefile b/src/common/Makefile
index bbb5c3ab11..00e30e6bfe 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 5edab3b25a..5c081abfae 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -3,7 +3,7 @@
  * oauth.h
  *	  Interface to libpq/auth-oauth.c
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/oauth.h
@@ -16,7 +16,32 @@
 #include "libpq/libpq-be.h"
 #include "libpq/sasl.h"
 
-extern char *oauth_validator_command;
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authenticated;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
 /* Implementation */
 extern const pg_be_sasl_mech pg_be_oauth_mech;
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 8fbe742d38..dc54ce7189 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..1f874cd7f2
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,19 @@
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..d9c1d1d577
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..49e04b0afe
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,53 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+# Delete pg_hba.conf from the given node, add a new entry to it
+# and then execute a reload to refresh it.
+# XXX: this is copied from authentication/t/001_password and should be made
+# generic functionality if we end up using it.
+sub reset_pg_hba
+{
+	my $node = shift;
+	my $database = shift;
+	my $role = shift;
+	my $hba_method = shift;
+
+	unlink($node->data_dir . '/pg_hba.conf');
+	# just for testing purposes, use a continuation line
+	$node->append_conf('pg_hba.conf',
+		"local $database $role\\\n $hba_method");
+	$node->reload;
+	return;
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+
+my $port = $webserver->port();
+
+is($port, 18080, "Port is 18080");
+
+$webserver->setup();
+$webserver->run();
+
+$node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..c76d0599c5
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,71 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "XXX: validating %s for %s", token, role);
+
+	res->authenticated = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 44c1bb5afd..b758ad01cc 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2302,6 +2302,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2345,7 +2350,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..3ac90c3d0f
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,158 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+	my $port = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	$self->{'port'} = $port;
+
+	return $self;
+}
+
+sub setup
+{
+	my $self = shift;
+	my $tcp = getprotobyname('tcp');
+
+	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
+		or die "no socket";
+	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
+	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+
+	my $server_thread = threads->create(\&_listen, $self);
+	$server_thread->detach();
+}
+
+sub _listen
+{
+	my $self = shift;
+
+	listen($self->{'socket'}, SOMAXCONN) or die "failed to listen: $!";
+
+	while (1)
+	{
+		my $fh;
+		my %request;
+		my $remote = accept($fh, $self->{'socket'});
+		binmode $fh;
+
+		my ($method, $object, $prot) = split(/ /, <$fh>);
+		$request{'method'} = $method;
+		$request{'object'} = $object;
+		chomp($request{'object'});
+
+		local $/ = Socket::CRLF;
+		my $c = 0;
+		while(<$fh>)
+		{
+			chomp;
+			# Headers
+			if (/:/)
+			{
+				my ($field, $value) = split(/:/, $_, 2);
+				$value =~ s/^\s+//;
+				$request{'headers'}{lc $field} = $value;
+			}
+			# POST data
+			elsif (/^$/)
+			{
+				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
+					if defined $request{'headers'}{'content-length'};
+				last;
+			}
+		}
+
+		# Debug printing
+		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
+		# foreach my $h (keys(%{$request{'headers'}}))
+		#{
+		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
+		#}
+		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
+
+		if ($request{'object'} eq '/.well-known/openid-configuration')
+		{
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"issuer": "http://localhost:$self->{'port'}",
+				"token_endpoint": "http://localhost:$self->{'port'}/token",
+				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
+				"response_types_supported": ["token"],
+				"subject_types_supported": ["public"],
+				"id_token_signing_alg_values_supported": ["RS256"],
+				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/authorize')
+		{
+			print ": returning device_code\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"device_code": "postgres",
+				"user_code" : "postgresuser",
+				"interval" : 0,
+				"verification_uri" : "https://example.com/",
+				"expires_in": 5
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/token')
+		{
+			print ": returning token\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"access_token": "9243959234",
+				"token_type": "bearer"
+			}
+EOR
+		}
+		else
+		{
+			print ": returning default\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: text/html\r\n";
+			print $fh "\r\n";
+			print $fh "Ok\n";
+		}
+
+		close($fh);
+	}
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fe65a4222b..2bca506e16 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1657,6 +1657,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -2980,6 +2981,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
-- 
2.34.1

#98Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#97)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 29, 2024 at 5:08 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

We are now very, very close to green.

v19 gets us a bit closer by adding a missed import for Windows. I've
also removed iddawc support, so the client patch is lighter.

The new oauth_validator tests can't work on Windows, since the client
doesn't support OAuth there. The python/server tests can handle this
case, since they emulate the client behavior; do we want to try
something similar in Perl?

In addition to this question, I'm starting to notice intermittent
failures of the form

error: ... failed to fetch OpenID discovery document: failed to
queue HTTP request

This corresponds to a TODO in the libcurl implementation -- if the
initial call to curl_multi_socket_action() reports that no handles are
running, I treated that as an error. But it looks like it's possible
for libcurl to finish a request synchronously if the remote responds
quickly enough, so that needs to change.
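For what it's worth, the synchronous-completion case can probably be handled by draining curl_multi_info_read() whenever running_handles drops to zero, rather than treating that as a failure to queue the request. A rough sketch (not the patch's actual code; handle_completed() and report_multi_error() are invented placeholders):

```c
/*
 * Sketch only: drive the transfer and treat running_handles == 0 as
 * "possibly finished synchronously", not as an error.
 */
static bool
start_request(CURLM *curlm)
{
	CURLMcode	mc;
	int			running = 0;

	mc = curl_multi_socket_action(curlm, CURL_SOCKET_TIMEOUT, 0, &running);
	if (mc != CURLM_OK)
		return report_multi_error(mc);

	if (running == 0)
	{
		/* The request may have already completed; check the message queue. */
		CURLMsg    *msg;
		int			queued;

		while ((msg = curl_multi_info_read(curlm, &queued)) != NULL)
		{
			if (msg->msg == CURLMSG_DONE)
				handle_completed(msg->easy_handle, msg->data.result);
		}
	}

	return true;
}
```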

--Jacob

Attachments:

since-v18.diff.txt (text/plain; charset=US-ASCII)
 1:  e2a0b48561 =  1:  ce06c03e2b common/jsonapi: support FRONTEND clients
 2:  db625e1d01 =  2:  6989b75153 Refactor SASL exchange to return tri-state status
 3:  e4ad0260d5 =  3:  783bfe0b95 Explicitly require password for SCRAM exchange
 4:  229f602d5c !  4:  77550a47db libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
         are currently implemented (but clients may provide their own flows; see
         below).
     
    -    The client implementation requires either libcurl or libiddawc and their
    -    development headers. Pass `curl` or `iddawc` to --with-oauth/-Doauth
    -    during configuration.
    +    The client implementation requires libcurl and its development headers.
    +    Pass `curl` to --with-oauth/-Doauth during configuration.
     
         Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!
     
    @@ configure: Optional Packages:
        --with-pam              build with PAM support
        --with-bsd-auth         build with BSD Authentication support
        --with-ldap             build with LDAP support
    -+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl, iddawc)
    ++  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
        --with-bonjour          build with Bonjour support
        --with-selinux          build with SELinux support
        --with-systemd          build with systemd support
    @@ configure: $as_echo "$with_ldap" >&6; }
     +
     +$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
     +
    -+elif test x"$with_oauth" = x"iddawc"; then
    -+
    -+$as_echo "#define USE_OAUTH 1" >>confdefs.h
    -+
    -+
    -+$as_echo "#define USE_OAUTH_IDDAWC 1" >>confdefs.h
    -+
     +elif test x"$with_oauth" != x"no"; then
    -+  as_fn_error $? "--with-oauth must specify curl or iddawc" "$LINENO" 5
    ++  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
     +fi
     +
     +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
    @@ configure: fi
     +  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
     +fi
     +
    -+elif test "$with_oauth" = iddawc ; then
    -+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for i_init_session in -liddawc" >&5
    -+$as_echo_n "checking for i_init_session in -liddawc... " >&6; }
    -+if ${ac_cv_lib_iddawc_i_init_session+:} false; then :
    -+  $as_echo_n "(cached) " >&6
    -+else
    -+  ac_check_lib_save_LIBS=$LIBS
    -+LIBS="-liddawc  $LIBS"
    -+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
    -+/* end confdefs.h.  */
    -+
    -+/* Override any GCC internal prototype to avoid an error.
    -+   Use char because int might match the return type of a GCC
    -+   builtin and then its argument prototype would still apply.  */
    -+#ifdef __cplusplus
    -+extern "C"
    -+#endif
    -+char i_init_session ();
    -+int
    -+main ()
    -+{
    -+return i_init_session ();
    -+  ;
    -+  return 0;
    -+}
    -+_ACEOF
    -+if ac_fn_c_try_link "$LINENO"; then :
    -+  ac_cv_lib_iddawc_i_init_session=yes
    -+else
    -+  ac_cv_lib_iddawc_i_init_session=no
    -+fi
    -+rm -f core conftest.err conftest.$ac_objext \
    -+    conftest$ac_exeext conftest.$ac_ext
    -+LIBS=$ac_check_lib_save_LIBS
    -+fi
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_iddawc_i_init_session" >&5
    -+$as_echo "$ac_cv_lib_iddawc_i_init_session" >&6; }
    -+if test "x$ac_cv_lib_iddawc_i_init_session" = xyes; then :
    -+  cat >>confdefs.h <<_ACEOF
    -+#define HAVE_LIBIDDAWC 1
    -+_ACEOF
    -+
    -+  LIBS="-liddawc $LIBS"
    -+
    -+else
    -+  as_fn_error $? "library 'iddawc' is required for --with-oauth=iddawc" "$LINENO" 5
    -+fi
    -+
    -+  # Check for an older spelling of i_get_openid_config
    -+  for ac_func in i_load_openid_config
    -+do :
    -+  ac_fn_c_check_func "$LINENO" "i_load_openid_config" "ac_cv_func_i_load_openid_config"
    -+if test "x$ac_cv_func_i_load_openid_config" = xyes; then :
    -+  cat >>confdefs.h <<_ACEOF
    -+#define HAVE_I_LOAD_OPENID_CONFIG 1
    -+_ACEOF
    -+
    -+fi
    -+done
    -+
     +fi
     +
      # for contrib/sepgsql
    @@ configure: fi
     +  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
     +fi
     +
    -+
    -+elif test "$with_oauth" = iddawc; then
    -+  ac_fn_c_check_header_mongrel "$LINENO" "iddawc.h" "ac_cv_header_iddawc_h" "$ac_includes_default"
    -+if test "x$ac_cv_header_iddawc_h" = xyes; then :
    -+
    -+else
    -+  as_fn_error $? "header file <iddawc.h> is required for OAuth" "$LINENO" 5
    -+fi
    -+
     +
      fi
      
    @@ configure.ac: AC_MSG_RESULT([$with_ldap])
     +# OAuth 2.0
     +#
     +AC_MSG_CHECKING([whether to build with OAuth support])
    -+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl, iddawc)])
    ++PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
     +if test x"$with_oauth" = x"" ; then
     +  with_oauth=no
     +fi
    @@ configure.ac: AC_MSG_RESULT([$with_ldap])
     +if test x"$with_oauth" = x"curl"; then
     +  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
     +  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
    -+elif test x"$with_oauth" = x"iddawc"; then
    -+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
    -+  AC_DEFINE([USE_OAUTH_IDDAWC], 1, [Define to 1 to use libiddawc for OAuth support.])
     +elif test x"$with_oauth" != x"no"; then
    -+  AC_MSG_ERROR([--with-oauth must specify curl or iddawc])
    ++  AC_MSG_ERROR([--with-oauth must specify curl])
     +fi
     +
     +AC_MSG_RESULT([$with_oauth])
    @@ configure.ac: fi
      
     +if test "$with_oauth" = curl ; then
     +  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
    -+elif test "$with_oauth" = iddawc ; then
    -+  AC_CHECK_LIB(iddawc, i_init_session, [], [AC_MSG_ERROR([library 'iddawc' is required for --with-oauth=iddawc])])
    -+  # Check for an older spelling of i_get_openid_config
    -+  AC_CHECK_FUNCS([i_load_openid_config])
     +fi
     +
      # for contrib/sepgsql
    @@ configure.ac: elif test "$with_uuid" = ossp ; then
      
     +if test "$with_oauth" = curl; then
     +  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
    -+elif test "$with_oauth" = iddawc; then
    -+  AC_CHECK_HEADER(iddawc.h, [], [AC_MSG_ERROR([header file <iddawc.h> is required for OAuth])])
     +fi
     +
      if test "$PORTNAME" = "win32" ; then
    @@ meson.build: endif
     +  endif
     +endif
     +
    -+if not oauth.found() and oauthopt in ['auto', 'iddawc']
    -+  oauth = dependency('libiddawc', required: (oauthopt == 'iddawc'))
    -+
    -+  if oauth.found()
    -+    oauth_library = 'iddawc'
    -+    cdata.set('USE_OAUTH', 1)
    -+    cdata.set('USE_OAUTH_IDDAWC', 1)
    -+
    -+    # Check for an older spelling of i_get_openid_config
    -+    if cc.has_function('i_load_openid_config',
    -+                       dependencies: oauth, args: test_c_args)
    -+      cdata.set('HAVE_I_LOAD_OPENID_CONFIG', 1)
    -+    endif
    -+  endif
    -+endif
    -+
     +if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
     +  error('no OAuth implementation library found')
     +endif
    @@ meson_options.txt: option('lz4', type: 'feature', value: 'auto',
      option('nls', type: 'feature', value: 'auto',
        description: 'Native language support')
      
    -+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl', 'iddawc'],
    ++option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
     +  value: 'auto',
    -+  description: 'use LIB for OAuth 2.0 support (curl, iddawc)')
    ++  description: 'use LIB for OAuth 2.0 support (curl)')
     +
      option('pam', type: 'feature', value: 'auto',
        description: 'PAM support')
    @@ src/include/common/oauth-common.h (new)
     +#endif							/* OAUTH_COMMON_H */
     
      ## src/include/pg_config.h.in ##
    -@@
    - /* Define to 1 if __builtin_constant_p(x) implies "i"(x) acceptance. */
    - #undef HAVE_I_CONSTRAINT__BUILTIN_CONSTANT_P
    - 
    -+/* Define to 1 if you have the `i_load_openid_config' function. */
    -+#undef HAVE_I_LOAD_OPENID_CONFIG
    -+
    - /* Define to 1 if you have the `kqueue' function. */
    - #undef HAVE_KQUEUE
    - 
     @@
      /* Define to 1 if you have the `crypto' library (-lcrypto). */
      #undef HAVE_LIBCRYPTO
      
     +/* Define to 1 if you have the `curl' library (-lcurl). */
     +#undef HAVE_LIBCURL
    -+
    -+/* Define to 1 if you have the `iddawc' library (-liddawc). */
    -+#undef HAVE_LIBIDDAWC
     +
      /* Define to 1 if you have the `ldap' library (-lldap). */
      #undef HAVE_LIBLDAP
    @@ src/include/pg_config.h.in
     +
     +/* Define to 1 to use libcurl for OAuth support. */
     +#undef USE_OAUTH_CURL
    -+
    -+/* Define to 1 to use libiddawc for OAuth support. */
    -+#undef USE_OAUTH_IDDAWC
     +
      /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
      #undef USE_OPENSSL
    @@ src/interfaces/libpq/Makefile: OBJS += \
     +ifneq ($(with_oauth),no)
     +OBJS += fe-auth-oauth.o
     +
    -+ifeq ($(with_oauth),iddawc)
    -+OBJS += fe-auth-oauth-iddawc.o
    -+else
    ++ifeq ($(with_oauth),curl)
     +OBJS += fe-auth-oauth-curl.o
     +endif
     +endif
    @@ src/interfaces/libpq/Makefile: endif
      SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
      ifneq ($(PORTNAME), win32)
     -SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
    -+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -liddawc -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
    ++SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
      else
      SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
      endif
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	free_token(&tok);
     +	return PGRES_POLLING_FAILED;
    -+}
    -
    - ## src/interfaces/libpq/fe-auth-oauth-iddawc.c (new) ##
    -@@
    -+/*-------------------------------------------------------------------------
    -+ *
    -+ * fe-auth-oauth-iddawc.c
    -+ *	   The libiddawc implementation of OAuth/OIDC authentication.
    -+ *
    -+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
    -+ * Portions Copyright (c) 1994, Regents of the University of California
    -+ *
    -+ * IDENTIFICATION
    -+ *	  src/interfaces/libpq/fe-auth-oauth-iddawc.c
    -+ *
    -+ *-------------------------------------------------------------------------
    -+ */
    -+
    -+#include "postgres_fe.h"
    -+
    -+#include <iddawc.h>
    -+
    -+#include "fe-auth.h"
    -+#include "fe-auth-oauth.h"
    -+#include "libpq-int.h"
    -+
    -+#ifdef HAVE_I_LOAD_OPENID_CONFIG
    -+/* Older versions of iddawc used 'load' instead of 'get' for some APIs. */
    -+#define i_get_openid_config i_load_openid_config
    -+#endif
    -+
    -+static const char *
    -+iddawc_error_string(int errcode)
    -+{
    -+	switch (errcode)
    -+	{
    -+		case I_OK:
    -+			return "I_OK";
    -+
    -+		case I_ERROR:
    -+			return "I_ERROR";
    -+
    -+		case I_ERROR_PARAM:
    -+			return "I_ERROR_PARAM";
    -+
    -+		case I_ERROR_MEMORY:
    -+			return "I_ERROR_MEMORY";
    -+
    -+		case I_ERROR_UNAUTHORIZED:
    -+			return "I_ERROR_UNAUTHORIZED";
    -+
    -+		case I_ERROR_SERVER:
    -+			return "I_ERROR_SERVER";
    -+	}
    -+
    -+	return "<unknown>";
    -+}
    -+
    -+static void
    -+iddawc_error(PGconn *conn, int errcode, const char *msg)
    -+{
    -+	appendPQExpBufferStr(&conn->errorMessage, libpq_gettext(msg));
    -+	appendPQExpBuffer(&conn->errorMessage,
    -+					  libpq_gettext(" (iddawc error %s)\n"),
    -+					  iddawc_error_string(errcode));
    -+}
    -+
    -+static void
    -+iddawc_request_error(PGconn *conn, struct _i_session *i, int err, const char *msg)
    -+{
    -+	const char *error_code;
    -+	const char *desc;
    -+
    -+	appendPQExpBuffer(&conn->errorMessage, "%s: ", libpq_gettext(msg));
    -+
    -+	error_code = i_get_str_parameter(i, I_OPT_ERROR);
    -+	if (!error_code)
    -+	{
    -+		/*
    -+		 * The server didn't give us any useful information, so just print the
    -+		 * error code.
    -+		 */
    -+		appendPQExpBuffer(&conn->errorMessage,
    -+						  libpq_gettext("(iddawc error %s)\n"),
    -+						  iddawc_error_string(err));
    -+		return;
    -+	}
    -+
    -+	/* If the server gave a string description, print that too. */
    -+	desc = i_get_str_parameter(i, I_OPT_ERROR_DESCRIPTION);
    -+	if (desc)
    -+		appendPQExpBuffer(&conn->errorMessage, "%s ", desc);
    -+
    -+	appendPQExpBuffer(&conn->errorMessage, "(%s)\n", error_code);
    -+}
    -+
    -+/*
    -+ * Runs the device authorization flow using libiddawc. If successful, a malloc'd
    -+ * token string in "Bearer xxxx..." format, suitable for sending to an
    -+ * OAUTHBEARER server, is returned. NULL is returned on error.
    -+ */
    -+static char *
    -+run_iddawc_auth_flow(PGconn *conn, const char *discovery_uri)
    -+{
    -+	struct _i_session session;
    -+	PQExpBuffer token_buf = NULL;
    -+	int			err;
    -+	int			auth_method;
    -+	bool		user_prompted = false;
    -+	const char *verification_uri;
    -+	const char *user_code;
    -+	const char *access_token;
    -+	const char *token_type;
    -+	char	   *token = NULL;
    -+
    -+	i_init_session(&session);
    -+
    -+	token_buf = createPQExpBuffer();
    -+	if (!token_buf)
    -+		goto cleanup;
    -+
    -+	err = i_set_str_parameter(&session, I_OPT_OPENID_CONFIG_ENDPOINT, discovery_uri);
    -+	if (err)
    -+	{
    -+		iddawc_error(conn, err, "failed to set OpenID config endpoint");
    -+		goto cleanup;
    -+	}
    -+
    -+	err = i_get_openid_config(&session);
    -+	if (err)
    -+	{
    -+		iddawc_error(conn, err, "failed to fetch OpenID discovery document");
    -+		goto cleanup;
    -+	}
    -+
    -+	if (!i_get_str_parameter(&session, I_OPT_TOKEN_ENDPOINT))
    -+	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("issuer has no token endpoint"));
    -+		goto cleanup;
    -+	}
    -+
    -+	if (!i_get_str_parameter(&session, I_OPT_DEVICE_AUTHORIZATION_ENDPOINT))
    -+	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("issuer does not support device authorization"));
    -+		goto cleanup;
    -+	}
    -+
    -+	err = i_set_response_type(&session, I_RESPONSE_TYPE_DEVICE_CODE);
    -+	if (err)
    -+	{
    -+		iddawc_error(conn, err, "failed to set device code response type");
    -+		goto cleanup;
    -+	}
    -+
    -+	auth_method = I_TOKEN_AUTH_METHOD_NONE;
    -+	if (conn->oauth_client_secret && *conn->oauth_client_secret)
    -+		auth_method = I_TOKEN_AUTH_METHOD_SECRET_BASIC;
    -+
    -+	err = i_set_parameter_list(&session,
    -+							   I_OPT_CLIENT_ID, conn->oauth_client_id,
    -+							   I_OPT_CLIENT_SECRET, conn->oauth_client_secret,
    -+							   I_OPT_TOKEN_METHOD, auth_method,
    -+							   I_OPT_SCOPE, conn->oauth_scope,
    -+							   I_OPT_NONE
    -+		);
    -+	if (err)
    -+	{
    -+		iddawc_error(conn, err, "failed to set client identifier");
    -+		goto cleanup;
    -+	}
    -+
    -+	err = i_run_device_auth_request(&session);
    -+	if (err)
    -+	{
    -+		iddawc_request_error(conn, &session, err,
    -+							 "failed to obtain device authorization");
    -+		goto cleanup;
    -+	}
    -+
    -+	verification_uri = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_VERIFICATION_URI);
    -+	if (!verification_uri)
    -+	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("issuer did not provide a verification URI"));
    -+		goto cleanup;
    -+	}
    -+
    -+	user_code = i_get_str_parameter(&session, I_OPT_DEVICE_AUTH_USER_CODE);
    -+	if (!user_code)
    -+	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("issuer did not provide a user code"));
    -+		goto cleanup;
    -+	}
    -+
    -+	/*
    -+	 * Poll the token endpoint until either the user logs in and authorizes
    -+	 * the use of a token, or a hard failure occurs. We perform one ping
    -+	 * _before_ prompting the user, so that we don't make them do the work of
    -+	 * logging in only to find that the token endpoint is completely
    -+	 * unreachable.
    -+	 */
    -+	err = i_run_token_request(&session);
    -+	while (err)
    -+	{
    -+		const char *error_code;
    -+		uint		interval;
    -+
    -+		error_code = i_get_str_parameter(&session, I_OPT_ERROR);
    -+
    -+		/*
    -+		 * authorization_pending and slow_down are the only acceptable errors;
    -+		 * anything else and we bail.
    -+		 */
    -+		if (!error_code || (strcmp(error_code, "authorization_pending")
    -+							&& strcmp(error_code, "slow_down")))
    -+		{
    -+			iddawc_request_error(conn, &session, err,
    -+								 "failed to obtain access token");
    -+			goto cleanup;
    -+		}
    -+
    -+		if (!user_prompted)
    -+		{
    -+			int			res;
    -+			PQpromptOAuthDevice prompt = {
    -+				.verification_uri = verification_uri,
    -+				.user_code = user_code,
    -+				/* TODO: optional fields */
    -+			};
    -+
    -+			/*
    -+			 * Now that we know the token endpoint isn't broken, give the user
    -+			 * the login instructions.
    -+			 */
    -+			res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
    -+								 &prompt);
    -+
    -+			if (!res)
    -+			{
    -+				fprintf(stderr, "Visit %s and enter the code: %s",
    -+						prompt.verification_uri, prompt.user_code);
    -+			}
    -+			else if (res < 0)
    -+			{
    -+				appendPQExpBufferStr(&conn->errorMessage,
    -+									 libpq_gettext("device prompt failed\n"));
    -+				goto cleanup;
    -+			}
    -+
    -+			user_prompted = true;
    -+		}
    -+
    -+		/*---
    -+		 * We are required to wait between polls; the server tells us how
    -+		 * long.
    -+		 * TODO: if interval's not set, we need to default to five seconds
    -+		 * TODO: sanity check the interval
    -+		 */
    -+		interval = i_get_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL);
    -+
    -+		/*
    -+		 * A slow_down error requires us to permanently increase our retry
    -+		 * interval by five seconds. RFC 8628, Sec. 3.5.
    -+		 */
    -+		if (!strcmp(error_code, "slow_down"))
    -+		{
    -+			interval += 5;
    -+			i_set_int_parameter(&session, I_OPT_DEVICE_AUTH_INTERVAL, interval);
    -+		}
    -+
    -+		sleep(interval);
    -+
    -+		/*
    -+		 * XXX Reset the error code before every call, because iddawc won't do
    -+		 * that for us. This matters if the server first sends a "pending"
    -+		 * error code, then later hard-fails without sending an error code to
    -+		 * overwrite the first one.
    -+		 *
    -+		 * That we have to do this at all seems like a bug in iddawc.
    -+		 */
    -+		i_set_str_parameter(&session, I_OPT_ERROR, NULL);
    -+
    -+		err = i_run_token_request(&session);
    -+	}
    -+
    -+	access_token = i_get_str_parameter(&session, I_OPT_ACCESS_TOKEN);
    -+	token_type = i_get_str_parameter(&session, I_OPT_TOKEN_TYPE);
    -+
    -+	if (!access_token || !token_type || strcasecmp(token_type, "Bearer"))
    -+	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("issuer did not provide a bearer token"));
    -+		goto cleanup;
    -+	}
    -+
    -+	appendPQExpBufferStr(token_buf, "Bearer ");
    -+	appendPQExpBufferStr(token_buf, access_token);
    -+
    -+	if (PQExpBufferBroken(token_buf))
    -+		goto cleanup;
    -+
    -+	token = strdup(token_buf->data);
    -+
    -+cleanup:
    -+	if (token_buf)
    -+		destroyPQExpBuffer(token_buf);
    -+	i_clean_session(&session);
    -+
    -+	return token;
    -+}
    -+
    -+PostgresPollingStatusType
    -+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    -+{
    -+	fe_oauth_state *state = conn->sasl_state;
    -+
    -+	/* TODO: actually make this asynchronous */
    -+	state->token = run_iddawc_auth_flow(conn, conn->oauth_discovery_uri);
    -+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_FAILED;
     +}
     
      ## src/interfaces/libpq/fe-auth-oauth.c (new) ##
    @@ src/interfaces/libpq/meson.build: if gssapi.found()
      
     +if oauth.found()
     +  libpq_sources += files('fe-auth-oauth.c')
    -+  if oauth_library == 'iddawc'
    -+    libpq_sources += files('fe-auth-oauth-iddawc.c')
    -+  else
    ++  if oauth_library == 'curl'
     +    libpq_sources += files('fe-auth-oauth-curl.c')
     +  endif
     +endif
 5:  5c5a83e44e =  5:  12ae7c4355 backend: add OAUTHBEARER SASL mechanism
 6:  7a42365d62 =  6:  707edf9314 Introduce OAuth validator libraries
 7:  9c46ea6cf9 !  7:  47236c5644 Add pytest suite for OAuth
    @@ Commit message
         option of the same name, which allows an ephemeral server to be spun up
         during a test run.
     
    -    For iddawc, asynchronous tests still hang, as expected. Bad-interval
    -    tests fail because iddawc apparently doesn't care that the interval is
    -    bad.
    -
         TODOs:
         - The --tap-stream option to pytest-tap is slightly broken during test
           failures (it suppresses error information), which impedes debugging.
    @@ src/test/python/client/test_oauth.py (new)
     +        ),
     +        pytest.param(
     +            (400, {}),
    -+            alt_patterns(
    -+                r'failed to parse token error response: field "error" is missing',
    -+                r"failed to obtain device authorization: \(iddawc error I_ERROR_PARAM\)",
    -+            ),
    ++            r'failed to parse token error response: field "error" is missing',
     +            id="broken error response",
     +        ),
     +        pytest.param(
     +            (200, RawResponse(r'{ "interval": 3.5.8 }')),
    -+            alt_patterns(
    -+                r"failed to parse device authorization: Token .* is invalid",
    -+                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
    -+            ),
    ++            r"failed to parse device authorization: Token .* is invalid",
     +            id="non-numeric interval",
     +        ),
     +        pytest.param(
     +            (200, RawResponse(r'{ "interval": 08 }')),
    -+            alt_patterns(
    -+                r"failed to parse device authorization: Token .* is invalid",
    -+                r"failed to obtain device authorization: \(iddawc error I_ERROR\)",
    -+            ),
    ++            r"failed to parse device authorization: Token .* is invalid",
     +            id="invalid numeric interval",
     +        ),
     +    ],
    @@ src/test/python/client/test_oauth.py (new)
     +    else:
     +        assert False, "update error_pattern for new failure mode"
     +
    -+    # XXX iddawc doesn't really check for problems in the device authorization
    -+    # response, leading to this patchwork:
    -+    if field_name == "verification_uri":
    -+        error_pattern = alt_patterns(
    -+            error_pattern,
    -+            "issuer did not provide a verification URI",
    -+        )
    -+    elif field_name == "user_code":
    -+        error_pattern = alt_patterns(
    -+            error_pattern,
    -+            "issuer did not provide a user code",
    -+        )
    -+    else:
    -+        error_pattern = alt_patterns(
    -+            error_pattern,
    -+            r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
    -+        )
    -+
     +    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
     +        client.check_completed()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        ),
     +        pytest.param(
     +            (400, {}),
    -+            alt_patterns(
    -+                r'failed to parse token error response: field "error" is missing',
    -+                r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
    -+            ),
    ++            r'failed to parse token error response: field "error" is missing',
     +            id="empty error response",
     +        ),
     +        pytest.param(
     +            (200, {}, {}),
    -+            alt_patterns(
    -+                r"failed to parse access token response: no content type was provided",
    -+                r"failed to obtain access token: \(iddawc error I_ERROR\)",
    -+            ),
    ++            r"failed to parse access token response: no content type was provided",
     +            id="missing content type",
     +        ),
     +        pytest.param(
     +            (200, {"Content-Type": "text/plain"}, {}),
    -+            alt_patterns(
    -+                r"failed to parse access token response: unexpected content type",
    -+                r"failed to obtain access token: \(iddawc error I_ERROR\)",
    -+            ),
    ++            r"failed to parse access token response: unexpected content type",
     +            id="wrong content type",
     +        ),
     +    ],
    @@ src/test/python/client/test_oauth.py (new)
     +    else:
     +        assert False, "update error_pattern for new failure mode"
     +
    -+    # XXX iddawc is fairly silent on the topic.
    -+    error_pattern = alt_patterns(
    -+        error_pattern,
    -+        r"failed to obtain access token: \(iddawc error I_ERROR_PARAM\)",
    -+    )
    -+
     +    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
     +        client.check_completed()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +
     +    expect_disconnected_handshake(sock)
     +
    -+    # XXX iddawc doesn't differentiate...
    -+    expected_error = alt_patterns(
    -+        expected_error,
    -+        r"failed to fetch OpenID discovery document \(iddawc error I_ERROR(_PARAM)?\)",
    -+    )
    -+
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
     +
    @@ src/test/python/test_pq3.py (new)
     +import contextlib
     +import getpass
     +import io
    ++import os
     +import platform
     +import struct
     +import sys
 8:  8ad4ce3068 =  8:  116e17eeee XXX temporary patches to build and test
 9:  5630465578 =  9:  28756eda1c WIP: Python OAuth provider implementation
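The polling rules the device authorization flow above follows (RFC 8628: a default five-second interval, permanently increased by five seconds on each `slow_down` error) can be sketched in isolation. This is a minimal standalone illustration, not libpq code; `next_interval` is a hypothetical helper name:

```c
#include <assert.h>
#include <string.h>

/*
 * Compute the next polling interval for an OAuth device authorization
 * grant, per RFC 8628. The server's "interval" defaults to five seconds
 * when unset (Sec. 3.2), and each "slow_down" error permanently adds
 * five more seconds (Sec. 3.5).
 */
static int
next_interval(int current, const char *error_code)
{
	if (current <= 0)
		current = 5;			/* default when the server sent no interval */

	if (error_code && strcmp(error_code, "slow_down") == 0)
		current += 5;			/* permanent increase on slow_down */

	return current;
}
```

Note that the removed iddawc path above had TODOs for exactly these two rules (defaulting a missing interval, sanity-checking it), which the curl implementation handles in its response parser.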
Attachment: v19-0002-Refactor-SASL-exchange-to-return-tri-state-statu.patch
From 6989b751536149b0224bb1cd38c1f0142c781e62 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:09:54 +0100
Subject: [PATCH v19 2/9] Refactor SASL exchange to return tri-state status

The SASL exchange callback returned its state in two output variables:
done and success.  This refactors that logic by introducing a new
return variable of type SASLStatus which makes the code easier to
read and understand, and prepares for future SASL exchanges which
operate asynchronously.

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth-sasl.h  | 31 +++++++----
 src/interfaces/libpq/fe-auth-scram.c | 78 +++++++++++++---------------
 src/interfaces/libpq/fe-auth.c       | 28 +++++-----
 src/tools/pgindent/typedefs.list     |  1 +
 4 files changed, 71 insertions(+), 67 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index ee5d1525b5..4eecf53a15 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -21,6 +21,17 @@
 
 #include "libpq-fe.h"
 
+/*
+ * Possible states for the SASL exchange, see the comment on exchange for an
+ * explanation of these.
+ */
+typedef enum
+{
+	SASL_COMPLETE = 0,
+	SASL_FAILED,
+	SASL_CONTINUE,
+} SASLStatus;
+
 /*
  * Frontend SASL mechanism callbacks.
  *
@@ -59,7 +70,8 @@ typedef struct pg_fe_sasl_mech
 	 * Produces a client response to a server challenge.  As a special case
 	 * for client-first SASL mechanisms, exchange() is called with a NULL
 	 * server response once at the start of the authentication exchange to
-	 * generate an initial response.
+	 * generate an initial response. Returns a SASLStatus indicating the
+	 * state and status of the exchange.
 	 *
 	 * Input parameters:
 	 *
@@ -79,22 +91,23 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	output:	   A malloc'd buffer containing the client's response to
 	 *			   the server (can be empty), or NULL if the exchange should
-	 *			   be aborted.  (*success should be set to false in the
+	 *			   be aborted.  (The callback should return SASL_FAILED in the
 	 *			   latter case.)
 	 *
 	 *	outputlen: The length (0 or higher) of the client response buffer,
 	 *			   ignored if output is NULL.
 	 *
-	 *	done:      Set to true if the SASL exchange should not continue,
-	 *			   because the exchange is either complete or failed
+	 * Return value:
 	 *
-	 *	success:   Set to true if the SASL exchange completed successfully.
-	 *			   Ignored if *done is false.
+	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
+	 *					Additional server challenge is expected
+	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
+	 *	SASL_FAILED:	The exchange has failed and the connection should be
+	 *					dropped.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
-							 char **output, int *outputlen,
-							 bool *done, bool *success);
+	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+							 char **output, int *outputlen);
 
 	/*--------
 	 * channel_bound()
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 04f0e5713d..0bb820e0d9 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,9 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
-						   char **output, int *outputlen,
-						   bool *done, bool *success);
+static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
 
@@ -202,17 +201,14 @@ scram_free(void *opaq)
 /*
  * Exchange a SCRAM message with backend.
  */
-static void
+static SASLStatus
 scram_exchange(void *opaq, char *input, int inputlen,
-			   char **output, int *outputlen,
-			   bool *done, bool *success)
+			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 	PGconn	   *conn = state->conn;
 	const char *errstr = NULL;
 
-	*done = false;
-	*success = false;
 	*output = NULL;
 	*outputlen = 0;
 
@@ -225,12 +221,12 @@ scram_exchange(void *opaq, char *input, int inputlen,
 		if (inputlen == 0)
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (empty message)");
-			goto error;
+			return SASL_FAILED;
 		}
 		if (inputlen != strlen(input))
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (length mismatch)");
-			goto error;
+			return SASL_FAILED;
 		}
 	}
 
@@ -240,61 +236,59 @@ scram_exchange(void *opaq, char *input, int inputlen,
 			/* Begin the SCRAM handshake, by sending client nonce */
 			*output = build_client_first_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_NONCE_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_NONCE_SENT:
 			/* Receive salt and server nonce, send response. */
 			if (!read_server_first_message(state, input))
-				goto error;
+				return SASL_FAILED;
 
 			*output = build_client_final_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_PROOF_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_PROOF_SENT:
-			/* Receive server signature */
-			if (!read_server_final_message(state, input))
-				goto error;
-
-			/*
-			 * Verify server signature, to make sure we're talking to the
-			 * genuine server.
-			 */
-			if (!verify_server_signature(state, success, &errstr))
-			{
-				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
-				goto error;
-			}
-
-			if (!*success)
 			{
-				libpq_append_conn_error(conn, "incorrect server signature");
+				bool		match;
+
+				/* Receive server signature */
+				if (!read_server_final_message(state, input))
+					return SASL_FAILED;
+
+				/*
+				 * Verify server signature, to make sure we're talking to the
+				 * genuine server.
+				 */
+				if (!verify_server_signature(state, &match, &errstr))
+				{
+					libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
+					return SASL_FAILED;
+				}
+
+				if (!match)
+				{
+					libpq_append_conn_error(conn, "incorrect server signature");
+				}
+				state->state = FE_SCRAM_FINISHED;
+				state->conn->client_finished_auth = true;
+				return match ? SASL_COMPLETE : SASL_FAILED;
 			}
-			*done = true;
-			state->state = FE_SCRAM_FINISHED;
-			state->conn->client_finished_auth = true;
-			break;
 
 		default:
 			/* shouldn't happen */
 			libpq_append_conn_error(conn, "invalid SCRAM exchange state");
-			goto error;
+			break;
 	}
-	return;
 
-error:
-	*done = true;
-	*success = false;
+	return SASL_FAILED;
 }
 
 /*
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 1a8e4f6fbf..cf8af4c62e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -423,11 +423,10 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
-	bool		done;
-	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
 	char	   *password;
+	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -575,12 +574,11 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
-						 NULL, -1,
-						 &initialresponse, &initialresponselen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  NULL, -1,
+								  &initialresponse, &initialresponselen);
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		goto error;
 
 	/*
@@ -629,10 +627,9 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 {
 	char	   *output;
 	int			outputlen;
-	bool		done;
-	bool		success;
 	int			res;
 	char	   *challenge;
+	SASLStatus	status;
 
 	/* Read the SASL challenge from the AuthenticationSASLContinue message. */
 	challenge = malloc(payloadlen + 1);
@@ -651,13 +648,12 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
-						 challenge, payloadlen,
-						 &output, &outputlen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  challenge, payloadlen,
+								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
-	if (final && !done)
+	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
 			free(output);
@@ -670,7 +666,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	 * If the exchange is not completed yet, we need to make sure that the
 	 * SASL mechanism has generated a message to send back.
 	 */
-	if (output == NULL && !done)
+	if (output == NULL && status == SASL_CONTINUE)
 	{
 		libpq_append_conn_error(conn, "no client response found after SASL exchange success");
 		return STATUS_ERROR;
@@ -692,7 +688,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 			return STATUS_ERROR;
 	}
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		return STATUS_ERROR;
 
 	return STATUS_OK;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ee40a341d3..2dfd8176a3 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2423,6 +2423,7 @@ RuleLock
 RuleStmt
 RunningTransactions
 RunningTransactionsData
+SASLStatus
 SC_HANDLE
 SECURITY_ATTRIBUTES
 SECURITY_STATUS
-- 
2.34.1

Attachment: v19-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 77550a47db85ec83edb4f6c6f83caff3bc8fd9b4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v19 4/9] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should handle the data per the
authdata-specific instructions and return an integer > 0. Returning an
integer < 0 signals an error condition and abandons the connection
attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
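
The handle-or-delegate contract above can be sketched with stand-in
types. Everything below is a simplified mock for illustration -- the
names (MockAuthDataType, mock_chain, etc.) are not the real libpq
declarations, which live in the patch itself:

```c
#include <stdio.h>

/*
 * Stand-ins for the proposed PQsetAuthDataHook()/PQgetAuthDataHook()
 * API; simplified mocks for illustration only.
 */
typedef enum
{
	MOCK_PROMPT_OAUTH_DEVICE,
	MOCK_OAUTH_BEARER_TOKEN
} MockAuthDataType;

typedef int (*MockAuthDataHook) (MockAuthDataType type, void *data);

/* The default hook handles nothing and returns 0. */
static int
mock_default_hook(MockAuthDataType type, void *data)
{
	(void) type;
	(void) data;
	return 0;
}

/* Head of the hook chain, analogous to the hook stored on libpq. */
static MockAuthDataHook mock_chain = mock_default_hook;

/* Saved pointer to the previous hook, used for delegation. */
static MockAuthDataHook prev_hook;

static int
my_hook(MockAuthDataType type, void *data)
{
	if (type == MOCK_PROMPT_OAUTH_DEVICE)
	{
		/* Display the device prompt ourselves. */
		fprintf(stderr, "custom prompt: %s\n", (const char *) data);
		return 1;			/* > 0: handled */
	}

	/* Not ours: delegate to the previous hook in the chain. */
	return prev_hook(type, data);
}

/* Mirrors the PQgetAuthDataHook()-then-PQsetAuthDataHook() idiom. */
static void
install_my_hook(void)
{
	prev_hook = mock_chain;
	mock_chain = my_hook;
}
```

A hook installed this way overrides only the authdata types it knows
about; everything else falls through to whatever was registered before
it, all the way down to the default hook.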

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 configure                                 |  110 ++
 configure.ac                              |   28 +
 meson.build                               |   29 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   10 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 1982 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 +++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +-
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 23 files changed, 3200 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 46859a4244..2ccfb01b7a 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -858,6 +859,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8485,6 +8488,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13037,6 +13086,56 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14062,6 +14161,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 88b75a7696..4a80c97d5b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1443,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1638,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index a198eca25d..45b20d11c1 100644
--- a/meson.build
+++ b/meson.build
@@ -830,6 +830,33 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2834,6 +2861,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3435,6 +3463,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 249ecc5ffd..3248b9cc1c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b3f8c24e0..79b3647834 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07e73567dc..a5e6f99ba4 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -243,6 +243,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -711,6 +714,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fe2af575c5..2618c293af 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb1..0f8f5e3125 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,6 @@ PQsendClosePrepared       190
 PQsendClosePortal         191
 PQchangePassword          192
 PQsendPipelineSync        193
+PQsetAuthDataHook         194
+PQgetAuthDataHook         195
+PQdefaultAuthDataHook     196
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..0504f96e4e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1982 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->target_array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* Per RFC 8628, Sec. 3.2, the default polling interval is 5 seconds. */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not
+ * needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data	*curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (!actx->headers)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/* Signal an error to cURL if the buffer ran out of memory. */
+	if (PQExpBufferBroken(resp))
+		return 0;
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+			/* FALLTHROUGH */
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					/* TODO handle !err->error */
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..66ee8ff076
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 81ec08485d..9cd5c8cfb1 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -955,12 +997,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1118,7 +1166,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1135,7 +1183,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1451,3 +1500,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f..15ceb73d01 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -359,6 +359,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -618,6 +635,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -2536,6 +2554,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3517,6 +3536,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3672,6 +3692,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -3753,7 +3783,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3786,6 +3826,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4285,6 +4360,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4400,6 +4476,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6868,6 +6949,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f2fc78a481..663b1c1acf 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1039,10 +1039,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1059,7 +1062,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3..d095351c66 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -78,7 +80,9 @@ typedef enum
 	CONNECTION_CONSUME,			/* Consuming any extra messages. */
 	CONNECTION_GSS_STARTUP,		/* Negotiating GSSAPI. */
 	CONNECTION_CHECK_TARGET,	/* Checking target server properties. */
-	CONNECTION_CHECK_STANDBY	/* Checking if server is in standby mode. */
+	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -160,6 +164,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -658,10 +669,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d..cf26c693e3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -351,6 +351,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -409,6 +411,15 @@ struct pg_conn
 	char	   *require_auth;	/* name of the expected auth method */
 	char	   *load_balance_hosts; /* load balance over hosts */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -477,6 +488,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index be6fadaea2..0d4b7ac17d 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index b0f4178b3d..f803c1200b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -231,6 +231,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2dfd8176a3..c45aae6f9e 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -354,6 +355,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1653,6 +1656,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1718,6 +1722,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1877,11 +1882,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3343,6 +3351,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v19-0001-common-jsonapi-support-FRONTEND-clients.patch (application/octet-stream)
From ce06c03e2b5c50675d0bd8961c858bd0165cfabd Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v19 1/9] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

We can now partially revert b44669b2ca, now that json_errdetail() works
correctly.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   3 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 268 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   2 +-
 src/common/stringinfo.c                       |   7 +
 src/include/common/jsonapi.h                  |  18 +-
 src/include/lib/stringinfo.h                  |   2 +
 8 files changed, 225 insertions(+), 85 deletions(-)

diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index e278ccea5a..e2a297930e 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -13,7 +13,8 @@ use Test::More;
 my $tempdir = PostgreSQL::Test::Utils::tempdir;
 
 test_bad_manifest('input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/, <<EOM);
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
+	<<EOM);
 {
 EOM
 
diff --git a/src/common/Makefile b/src/common/Makefile
index 2ba5069dca..bbb5c3ab11 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 32931ded82..2d1f30353a 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,43 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In the backend, use palloc/pfree along with StringInfo.  In the frontend,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on allocation failure.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendBinaryStrVal  appendBinaryPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+#define destroyStrVal		destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendBinaryStrVal  appendBinaryStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+#define destroyStrVal		destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -167,9 +200,16 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+	lex->errormsg = NULL;
 
 	return lex;
 }
@@ -182,13 +222,18 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-	{
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-	}
+		destroyStrVal(lex->strval);
+
+	if (lex->errormsg)
+		destroyStrVal(lex->errormsg);
+
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -254,7 +299,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -316,14 +361,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -357,8 +409,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -414,6 +470,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -762,8 +823,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -800,7 +868,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -857,19 +925,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -879,22 +947,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -929,7 +997,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -953,8 +1021,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -970,6 +1038,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1145,72 +1218,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct an (already translated) detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safely pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int			toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1219,9 +1313,19 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			/* note: this case is only reachable in frontend not backend */
 			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
 		case JSON_UNICODE_UNTRANSLATABLE:
-			/* note: this case is only reachable in backend not frontend */
+
+			/*
+			 * note: this case is only reachable in backend not frontend.
+			 * #ifdef it away so the frontend doesn't try to link against
+			 * backend functionality.
+			 */
+#ifndef FRONTEND
 			return psprintf(_("Unicode escape value could not be translated to the server's encoding %s."),
 							GetDatabaseEncodingName());
+#else
+			Assert(false);
+			break;
+#endif
 		case JSON_UNICODE_HIGH_SURROGATE:
 			return _("Unicode high surrogate must not follow a high surrogate.");
 		case JSON_UNICODE_LOW_SURROGATE:
@@ -1231,12 +1335,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			break;
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/common/meson.build b/src/common/meson.build
index 4eb16024cb..5d2c7abaa6 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -124,13 +124,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -139,6 +144,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -156,7 +162,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -169,7 +174,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 92a97714f3..62d93989be 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -147,7 +147,7 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	/* Run the actual JSON parser. */
 	json_error = pg_parse_json(lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
diff --git a/src/common/stringinfo.c b/src/common/stringinfo.c
index c61d5c58f3..09419f6042 100644
--- a/src/common/stringinfo.c
+++ b/src/common/stringinfo.c
@@ -350,3 +350,10 @@ enlargeStringInfo(StringInfo str, int needed)
 
 	str->maxlen = newlen;
 }
+
+void
+destroyStringInfo(StringInfo str)
+{
+	pfree(str->data);
+	pfree(str);
+}
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 02943cdad8..75d444c17a 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -57,6 +56,17 @@ typedef enum JsonParseErrorType
 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -88,7 +98,9 @@ typedef struct JsonLexContext
 	bits32		flags;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h
index 2cd636b01c..64ec6419af 100644
--- a/src/include/lib/stringinfo.h
+++ b/src/include/lib/stringinfo.h
@@ -233,4 +233,6 @@ extern void appendBinaryStringInfoNT(StringInfo str,
  */
 extern void enlargeStringInfo(StringInfo str, int needed);
 
+
+extern void destroyStringInfo(StringInfo str);
 #endif							/* STRINGINFO_H */
-- 
2.34.1

v19-0003-Explicitly-require-password-for-SCRAM-exchange.patch
From 783bfe0b953bf9c6591f1aa5cb747f5c0f96a8d9 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:19:55 +0100
Subject: [PATCH v19 3/9] Explicitly require password for SCRAM exchange

This refactors the SASL init flow to set password_needed on the two
SCRAM exchanges currently supported. The code already required this
but was set up in such a way that all SASL exchanges required using
a password, a restriction which may not hold for all exchanges (the
example at hand being the proposed OAUTHBEARER exchange).

This was extracted from a larger patchset to introduce OAUTHBEARER
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index cf8af4c62e..81ec08485d 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -425,7 +425,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	int			initialresponselen;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
@@ -446,8 +446,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support. Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -487,6 +486,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,6 +522,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
 	}
 
@@ -545,18 +546,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the selected SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
-- 
2.34.1

v19-0005-backend-add-OAUTHBEARER-SASL-mechanism.patch
From 12ae7c4355bffd95741b7f7990067c963d593b93 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v19 5/9] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On a success, the command may then exit with a zero success code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.

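The protocol steps above could be sketched as a minimal validator
script. Everything here is hypothetical: the script path, the
token-to-identity map, and the stand-in for real validation (a
production validator would verify a signature or call the issuer's
introspection endpoint instead):

```python
#!/usr/bin/env python3
# Hypothetical oauth_validator_command implementation, e.g. configured as:
#   oauth_validator_command = '/usr/local/bin/validate-token %f %r'
import os
import sys


def fake_introspect(token):
    """Placeholder for issuer-specific validation: map known tokens to
    identity strings, returning None for anything untrusted."""
    known = {"valid-token-alice": "alice"}
    return known.get(token.strip())


def main(argv):
    token_fd = int(argv[1])     # substituted for %f by the server
    role = argv[2]              # substituted for %r (unused in this sketch)

    # Step 1: read the token FIRST, before printing anything, to avoid
    # deadlocking against the server.
    with os.fdopen(token_fd, "r") as f:
        token = f.read()

    # Step 2: validate the token; exit non-zero without printing anything
    # if it wasn't issued by a trusted party.
    identity = fake_introspect(token)
    if identity is None:
        sys.exit(1)

    # Step 3a: print the authenticated identity, newline-terminated.
    print(identity)
    sys.exit(0)


if __name__ == "__main__" and len(sys.argv) >= 3:
    main(sys.argv)
```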
The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
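
Put together, a pg_hba.conf entry using these options might look like
the following (issuer, scope, and map values are purely illustrative):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samenet   oauth   issuer="https://accounts.google.com" scope="openid email" map=oauthmap
```

With trust_validator_authz=1 in place of map, the usermap check would
be skipped entirely and the validator's exit code alone would decide
authorization.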

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more sanity checks on the OAUTHBEARER message format and
  tokens sent by the client
- implement more helpful handling of HBA misconfigurations
- properly interpolate JSON when generating error responses
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/backend/libpq/Makefile          |   1 +
 src/backend/libpq/auth-oauth.c      | 810 ++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c       |  10 +-
 src/backend/libpq/auth-scram.c      |   4 +-
 src/backend/libpq/auth.c            |  26 +-
 src/backend/libpq/hba.c             |  31 +-
 src/backend/libpq/meson.build       |   1 +
 src/backend/utils/misc/guc_tables.c |  12 +
 src/include/libpq/auth.h            |  17 +
 src/include/libpq/hba.h             |   6 +-
 src/include/libpq/oauth.h           |  24 +
 src/include/libpq/sasl.h            |  11 +
 src/tools/pgindent/typedefs.list    |   1 +
 13 files changed, 922 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..a9d2646023
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,810 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+
+/* GUC */
+char	   *oauth_validator_command;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool set_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 *
+		 * TODO further validate the key/value grammar? empty keys, bad
+		 * chars...
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: JSON escaping
+	 */
+	appendStringInfo(&buf,
+					 "{ "
+					 "\"status\": \"invalid_token\", "
+					 "\"openid-configuration\": \"%s/.well-known/openid-configuration\", "
+					 "\"scope\": \"%s\" "
+					 "}",
+					 ctx->issuer, ctx->scope);
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char *const b64_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*-----
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
+	 * it's pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information
+	 * about the sensitive Bearer token back to the client; log at COMMERROR
+	 * instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!MyClientConnectionInfo.authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name,
+						MyClientConnectionInfo.authn_id, false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = {0};
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*------
+	 * Since popen() is unidirectional, open up a pipe for the other
+	 * direction. Use CLOEXEC to ensure that our write end doesn't
+	 * accidentally get copied into child processes, which would prevent us
+	 * from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open the potential of process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe(pipefd);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	if (!set_cloexec(wfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*----------
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+
+					/*
+					 * TODO: decide how this string should be escaped. The
+					 * role is controlled by the client, so if we don't escape
+					 * it, command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some
+					 * other way. For this proof of concept, just be
+					 * incredibly strict about the characters that are allowed
+					 * in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "r");
+	if (!fh)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("opening pipe to OAuth validator: %m")));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*-----
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int			rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char	   *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+set_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char *const allowed =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-_./:";
+	size_t		span;
+
+	Assert(username && username[0]);	/* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 2abb1a9b3a..aa6b5020dc 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -118,7 +118,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 9bbdc4beb0..db7c77da86 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -47,7 +48,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -203,22 +203,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -307,6 +291,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -342,7 +329,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -629,6 +616,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 8004d102ad..03c3f038c7 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -119,7 +119,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1748,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2067,8 +2070,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2451,6 +2455,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 93ded31ed9..0f83f0d870 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4670,6 +4671,17 @@ struct config_string ConfigureNamesString[] =
 		check_debug_io_direct, assign_debug_io_direct, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth 2.0 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..5edab3b25a
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index c45aae6f9e..a0acd33ccf 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3543,6 +3543,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

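To make the backend/validator contract above concrete: the backend substitutes %f in oauth_validator_command with a file-descriptor number from which the bearer token can be read, substitutes %r with the (strictly character-filtered) role name, then reads one newline-terminated line from the command's stdout as the authn_id and requires a zero exit status. A minimal validator along those lines could look like the sketch below; the validate() stub, the demo token, and the returned identity are illustrative assumptions, not part of the patch.

```python
#!/usr/bin/env python3
# Hypothetical validator for use with the oauth_validator_command GUC, e.g.
#
#     oauth_validator_command = '/usr/local/bin/validate-token %f %r'
#
# %f is replaced by the backend with an inherited file-descriptor number
# carrying the bearer token; %r with the requested role name.
import os
import sys


def validate(token):
    # Placeholder: a real validator would verify the token with its issuer
    # (JWT signature check, introspection endpoint, ...) and derive the end
    # user's identity and/or authorization decision from it.
    return "user@example.org" if token == "demo-token" else None


def run_validator(token_fd, role):
    # Read the bearer token that the backend wrote into the pipe.
    with os.fdopen(token_fd, "r") as f:
        token = f.read().strip()

    authn_id = validate(token)
    if authn_id is None:
        return 1  # nonzero exit status => authentication fails

    # The backend reads one newline-terminated line from our stdout and
    # hands it to set_authn_id(); the usual pg_ident mapping then applies
    # unless trust_validator_authz is set.
    print(authn_id)
    return 0


if __name__ == "__main__" and len(sys.argv) >= 3:
    sys.exit(run_validator(int(sys.argv[1]), sys.argv[2]))
```

The stdout/exit-status handling mirrors what the getline() loop and check_exit() expect in the patch; an authorization-only validator would additionally compare the role argument against the token's claims before exiting zero.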
Attachment: v19-0007-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 47236c5644f33b18b2b7379a35467eeed2177e80 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v19 7/9] Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |   22 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  137 ++
 src/test/python/client/test_client.py |  180 +++
 src/test/python/client/test_oauth.py  | 1720 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  732 +++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |    9 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  |  945 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  564 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5287 insertions(+), 7 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3b5b54df58..2501743b31 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl load_balance python
 
 
 # What files to preserve in case tests fail
@@ -165,7 +165,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -177,6 +177,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -225,6 +226,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -237,6 +239,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -312,8 +315,11 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
+      python3-venv
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -368,6 +374,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -378,7 +386,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.32-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
@@ -676,8 +684,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/meson.build b/meson.build
index 45b20d11c1..8567355a25 100644
--- a/meson.build
+++ b/meson.build
@@ -3174,6 +3174,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3335,6 +3338,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..94f3620af3
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,137 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            self._pump_async(conn)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
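[Editor's aside, not part of the patch: the `server_socket`/`ClientHandshake`/`accept` machinery above boils down to one pattern — a listening socket bound to an ephemeral port, a client thread that blocks on connect, and the test driving the server side of the handshake directly. A minimal standalone sketch of that pattern, using a one-byte echo in place of the real protocol:]

```python
import socket
import threading


def run_echo_handshake():
    """Sketch of the fixture pattern above: a listening server socket plus a
    client thread that connects and completes a tiny one-byte 'handshake'
    while the caller drives the server side of the conversation directly."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("127.0.0.1", 0))  # port 0: ephemeral, like unused_tcp_port_factory
        server.listen(1)
        server.settimeout(2)
        _, port = server.getsockname()

        def client():
            with socket.create_connection(("127.0.0.1", port), timeout=2) as c:
                c.sendall(b"A")
                assert c.recv(1) == b"A"

        t = threading.Thread(target=client)
        t.start()  # the client blocks on the connection until we accept

        conn, _ = server.accept()
        with conn:
            conn.sendall(conn.recv(1))  # echo the handshake byte back

        t.join(2)
        return not t.is_alive()


assert run_echo_handshake()
```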
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..c4c946fda4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,180 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = "server closed the connection unexpectedly"
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
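[Editor's aside, not part of the patch: the `Hi(str, salt, i)` function from RFC 5802 Section 2.2 is exactly PBKDF2 with HMAC-SHA-256 and a one-block output, so the hand-rolled `h_i` above can be cross-checked against `hashlib.pbkdf2_hmac` as an independent oracle:]

```python
import hashlib
import hmac


def h_i(data, salt, i):
    """Hi(str, salt, i) from RFC 5802 Section 2.2, mirroring the patch's
    implementation with stdlib hmac instead of cryptography."""
    assert i > 0
    acc = hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    last = acc
    for _ in range(i - 1):
        u = hmac.new(data, last, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, u))
        last = u
    return acc


# Hi() is PBKDF2-HMAC-SHA-256 truncated to one hash block, so hashlib
# must agree with the manual iteration above.
expected = hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
assert h_i(b"secret", b"12345", 2) == expected
```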
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..aee8ecdf5e
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1720 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
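[Editor's aside, not part of the patch: the framing that `get_auth_value()` takes apart is the OAUTHBEARER client initial response from RFC 7628, Section 3.1 — a GS2 header, then Ctrl-A-separated key/value pairs, terminated by an empty pair (a trailing double Ctrl-A). A hypothetical round-trip of that framing:]

```python
def parse_auth_value(initial):
    """Extract the auth value from an OAUTHBEARER initial response
    (RFC 7628, Section 3.1), mirroring the checks in get_auth_value()."""
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"  # GS2 header: no channel binding or authzid
    assert kvpairs[-2:] == [b"", b""]  # message ends with ^A^A

    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value


# A well-formed initial response carrying a (hypothetical) bearer token:
msg = b"n,,\x01auth=Bearer some-opaque-token\x01\x01"
assert parse_auth_value(msg) == b"Bearer some-opaque-token"
```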
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError(
+                "OpenID provider thread did not shut down within the timeout"
+            )
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": "application/json"}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
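[Editor's aside, not part of the patch: the fixture above leans on a ctypes detail worth calling out — a `CFUNCTYPE`-wrapped Python callable is a C-callable trampoline, and the wrapper object must stay referenced for as long as C code might invoke it, which is why the fixture keeps it in scope and unregisters it on teardown. A minimal sketch of the wrapping itself, exercised from Python rather than from libpq:]

```python
import ctypes

# C-callable prototype: int callback(int). The names here are hypothetical;
# the patch's real hook uses the PQsetAuthDataHook signature instead.
CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

calls = []


@CALLBACK
def on_event(value):
    # Record the call, as the auth_data_cb fixture does, then return a
    # result through the C ABI.
    calls.append(value)
    return value * 2


# A C caller would receive a plain function pointer; invoking the wrapper
# from Python routes through the same trampoline.
assert on_event(21) == 42
assert calls == [21]
```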
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept, openid_provider, asynchronous, retries, scope, secret, auth_data_cb
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
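[Editor's aside, not part of the patch: the credential check in `check_client_authn()` above is the client_secret_basic scheme — client id and secret joined with a colon and base64-encoded into an HTTP Basic Authorization header. (RFC 6749 additionally form-urlencodes each component first; the plain join suffices for the ASCII-only values used in these tests.) A sketch with hypothetical values:]

```python
import base64


def basic_credentials(client_id, secret):
    """Build the Authorization header value that check_client_authn()
    expects for a confidential client."""
    creds = f"{client_id}:{secret}".encode("ascii")
    return b"Basic " + base64.b64encode(creds)


header = basic_credentials("f02c6361", "hunter2")  # hypothetical id/secret

# Decoding reverses the construction, mirroring the server-side check.
method, encoded = header.split()
assert method == b"Basic"
assert base64.b64decode(encoded) == b"f02c6361:hunter2"
```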
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
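[Editor's aside, not part of the patch: the fixture above exists to support the classic self-pipe wakeup pattern — a background timer writes one byte to the pipe's write end, and the consumer blocks on the read end until it arrives, which is how the async token callback below drives libpq's poll loop. A standalone sketch of that mechanism:]

```python
import os
import select
import threading


def timed_wakeup(delay=0.05, timeout=2):
    """Arm a timer that writes a wakeup byte to a pipe, then block in
    select() on the read end until the byte arrives (or timeout expires)."""
    readfd, writefd = os.pipe()
    try:
        threading.Timer(delay, os.write, args=(writefd, b"\0")).start()

        readable, _, _ = select.select([readfd], [], [], timeout)
        # Drain the wakeup byte, as get_token() does below.
        return os.read(readfd, 1) if readable else None
    finally:
        os.close(readfd)
        os.close(writefd)


assert timed_wakeup() == b"\0"
```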
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, _ = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we cleaned up after ourselves.
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one. Not very efficient, but
+    easier to read and maintain than a single hand-built pattern.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
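+
+
+# Illustrative example (comment only, not executed): alt_patterns("ab", "cd")
+# returns "(ab)|(cd)", so a single pytest.raises(..., match=...) call can
+# accept either error message.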
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the bad-JSON-schema tests below
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A (0x01) response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
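The error path these parametrized cases exercise is small enough to sketch standalone: the server's OAUTHBEARER error "challenge" is a JSON document, and RFC 7628 obliges the client to answer it with a single ^A (0x01) byte before the exchange is finally failed. The helper names below are illustrative, not part of the patch:

```python
import json

# RFC 7628 key/value separator; it is also the entire body of the client's
# dummy response after an error challenge.
KVSEP = b"\x01"


def make_error_challenge(status):
    """Builds the JSON error body a server sends in its SASLContinue
    challenge (illustrative helper, not part of the patch)."""
    return json.dumps({"status": status}).encode("utf-8")


def client_error_response():
    """A conforming client answers an error challenge with a lone kvsep."""
    return KVSEP
```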
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one that tells the
+    server tests to create a temporary Postgres instance.
+
+    Per pytest documentation, this must live in the top-level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
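The gate above reduces to a membership test on the space-separated PG_TEST_EXTRA value; a standalone sketch of the same logic (the function name is illustrative, not part of the patch):

```python
import os


def python_tests_enabled(environ=None):
    """True if the space-separated PG_TEST_EXTRA value contains 'python'."""
    if environ is None:
        environ = os.environ
    return "python" in environ.get("PG_TEST_EXTRA", "").split()
```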
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..a2d2812f0e
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,732 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length by not enforcing
+        # a FixedSized during build. (The len calculation above defaults to
+        # the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds a translation map for hexdumps: any unprintable or non-ASCII byte
+    becomes '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (a startup packet with the special version).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
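For reference, the v3 startup layout that the Startup construct above models can also be written out by hand with only the standard library. This sketch (not part of the patch) shows the self-inclusive length word, the 3.0 protocol code, and the NUL-terminated key/value payload:

```python
import struct


def build_startup_packet(params):
    """Builds a v3 startup packet by hand: a self-inclusive int32 length,
    int32 protocol 196608 (3 << 16), then NUL-terminated key/value pairs
    closed by one extra NUL byte."""
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00"
        payload += v.encode("utf-8") + b"\x00"
    payload += b"\x00"
    return struct.pack("!ii", len(payload) + 8, 3 << 16) + payload
```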
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..57ba1ced94
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,9 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+construct~=2.10.61
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..f8e6c1651b
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authenticated = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authenticated = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
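For readers skimming the C module above, the validator's decision table is compact enough to restate. Here is a minimal Python mirror of test_validate()'s logic, using illustrative names that are not part of any Postgres API, just to make the branches easy to follow:

```python
def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    """Mirror of the test module's validate callback: returns an
    (authenticated, authn_id) pair based on the configured GUC knobs."""
    if reflect_role:
        # Reflection mode: ignore the token and trust the requested role.
        return True, role
    # Authenticate only when an expected token is configured and matches.
    authenticated = bool(expected_bearer) and token == expected_bearer
    return authenticated, (authn_id if set_authn_id else None)
```

Note that in the non-reflection path, authentication and the reported identity are independent knobs, which is what lets the tests below exercise authn/authz combinations separately.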
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..2a2ca59e94
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,945 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + ".bak"
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on the local machine, and that PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
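The size arithmetic in bearer_token() relies on how secrets.token_urlsafe() maps entropy bytes to base64url characters. A quick standalone check of that relationship (standard library only):

```python
import secrets

# secrets.token_urlsafe(nbytes) base64url-encodes nbytes of entropy,
# yielding 4 characters per 3 bytes with no '=' padding. Requesting
# size // 4 * 3 bytes therefore produces exactly `size` characters
# whenever `size` is a multiple of 4 (which bearer_token() enforces).
for size in (16, 1024, 4096):
    token = secrets.token_urlsafe(size // 4 * 3)
    assert len(token) == size
```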
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for the given user (authz_user by default) and
+    asserts that the server advertises OAUTHBEARER as its only SASL mechanism.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if not isinstance(payload, dict):
+        payload = dict(payload_data=payload)
+    pq3.send(conn, type, **payload)
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..1cd43d22ee
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,564 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05\x00",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

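For readers following along with the framing that `test_pq3.py` exercises above: every protocol-v3 message is a one-byte type, a four-byte big-endian length that counts itself plus the payload (but not the type byte), then the payload. A minimal stdlib-only sketch of that framing (independent of the `pq3` helper module; `build_msg`/`parse_msg` are illustrative names, not part of the patch):

```python
import struct


def build_msg(msg_type: bytes, payload: bytes) -> bytes:
    # The length field covers itself (4 bytes) plus the payload,
    # but not the leading type byte.
    return msg_type + struct.pack("!i", 4 + len(payload)) + payload


def parse_msg(raw: bytes):
    msg_type = raw[:1]
    (length,) = struct.unpack("!i", raw[1:5])
    payload = raw[5 : 1 + length]
    return msg_type, length, payload


# Matches the expected bytes in the "implied len/type for Query" case above.
wire = build_msg(b"Q", b"SELECT 1;\x00")
assert wire == b"Q\x00\x00\x00\x0eSELECT 1;\x00"
assert parse_msg(wire) == (b"Q", 14, b"SELECT 1;\x00")
```

The `Startup` message tested earlier is the one exception to this shape: it has no type byte, and its length field counts the whole packet including the four length bytes themselves.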
Attachment: v19-0008-XXX-temporary-patches-to-build-and-test.patch (application/octet-stream)
From 116e17eeee18c0aa12617b3bd30fa5c8f6ba0328 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 20 Feb 2024 11:35:29 -0800
Subject: [PATCH v19 8/9] XXX temporary patches to build and test

- the new pg_combinebackup utility uses JSON in the frontend without
  0001; has something changed?
- construct 2.10.70 has some incompatibilities with the current tests
- temporarily skip the exit check (from Daniel Gustafsson); this needs
  to be turned into an exception for curl rather than a plain exit call
---
 src/bin/pg_combinebackup/Makefile    | 6 ++++--
 src/bin/pg_combinebackup/meson.build | 3 ++-
 src/bin/pg_verifybackup/Makefile     | 2 +-
 src/interfaces/libpq/Makefile        | 2 +-
 src/test/python/requirements.txt     | 4 +++-
 5 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index 4f24b1aff6..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,8 +32,8 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 30dbbaa6cf..926f63f365 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,8 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  # XXX linking against libpq isn't good, but how was JSON working?
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 2618c293af..e86d4803ff 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -124,7 +124,7 @@ libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
 	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
-		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
+		echo 'libpq must not be calling any function which invokes exit'; \
 	fi
 endif
 endif
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
index 57ba1ced94..0dfcffb83e 100644
--- a/src/test/python/requirements.txt
+++ b/src/test/python/requirements.txt
@@ -1,7 +1,9 @@
 black
 # cryptography 35.x and later add many platform/toolchain restrictions, beware
 cryptography~=3.4.8
-construct~=2.10.61
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
 isort~=5.6
 # TODO: update to psycopg[c] 3.1
 psycopg2~=2.9.7
-- 
2.34.1

Attachment: v19-0006-Introduce-OAuth-validator-libraries.patch (application/octet-stream)
From 707edf9314786cbc0e3ad387ccfae2e04149875b Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 21 Feb 2024 17:04:26 +0100
Subject: [PATCH v19 6/9] Introduce OAuth validator libraries

This replaces the server-side validation code with a module API
for loading extensions that validate bearer tokens. A lot of
code is left to be written.

Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
---
 src/backend/libpq/auth-oauth.c                | 431 +++++-------------
 src/backend/utils/misc/guc_tables.c           |   6 +-
 src/bin/pg_combinebackup/Makefile             |   2 +-
 src/common/Makefile                           |   2 +-
 src/include/libpq/oauth.h                     |  29 +-
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  19 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  33 ++
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  53 +++
 src/test/modules/oauth_validator/validator.c  |  71 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 158 +++++++
 src/tools/pgindent/typedefs.list              |   2 +
 16 files changed, 500 insertions(+), 332 deletions(-)
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index a9d2646023..f5cf271566 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -6,7 +6,7 @@
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/libpq/auth-oauth.c
@@ -19,21 +19,29 @@
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "storage/ipc.h"
 
 /* GUC */
-char	   *oauth_validator_command;
+char	   *OAuthValidatorLibrary = "";
 
 static void oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
 static int	oauth_exchange(void *opaq, const char *input, int inputlen,
 						   char **output, int *outputlen, const char **logdetail);
 
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
 	oauth_get_mechanisms,
@@ -62,11 +70,7 @@ struct oauth_ctx
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, const char **logdetail);
-static bool run_validator_command(Port *port, const char *token);
-static bool check_exit(FILE **fh, const char *command);
-static bool set_cloexec(int fd);
-static bool username_ok_for_shell(const char *username);
+static bool validate(Port *port, const char *auth);
 
 #define KVSEP 0x01
 #define AUTH_KEY "auth"
@@ -99,6 +103,8 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	ctx->issuer = port->hba->oauth_issuer;
 	ctx->scope = port->hba->oauth_scope;
 
+	load_validator_library();
+
 	return ctx;
 }
 
@@ -249,7 +255,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
 
-	if (!validate(ctx->port, auth, logdetail))
+	if (!validate(ctx->port, auth))
 	{
 		generate_error_response(ctx, output, outputlen);
 
@@ -416,70 +422,73 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	*outputlen = buf.len;
 }
 
-static bool
-validate(Port *port, const char *auth, const char **logdetail)
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
 {
-	static const char *const b64_set =
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
 		"abcdefghijklmnopqrstuvwxyz"
 		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 		"0123456789-._~+/";
 
-	const char *token;
-	size_t		span;
-	int			ret;
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token 1")));
+		return NULL;
+	}
 
-	/* TODO: handle logdetail when the test framework can check it */
-
-	/*-----
-	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
-	 * 2.1:
-	 *
-	 *      b64token    = 1*( ALPHA / DIGIT /
-	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
-	 *      credentials = "Bearer" 1*SP b64token
-	 *
-	 * The "credentials" construction is what we receive in our auth value.
-	 *
-	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
-	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
-	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
-	 * it's pointed out in RFC 7628 Sec. 4.)
-	 *
-	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
-	 */
-	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
-		return false;
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
 
 	/* Pull the bearer token out of the auth value. */
-	token = auth + strlen(BEARER_SCHEME);
+	token = header + strlen(BEARER_SCHEME);
 
 	/* Swallow any additional spaces. */
 	while (*token == ' ')
 		token++;
 
-	/*
-	 * Before invoking the validator command, sanity-check the token format to
-	 * avoid any injection attacks later in the chain. Invalid formats are
-	 * technically a protocol violation, but don't reflect any information
-	 * about the sensitive Bearer token back to the client; log at COMMERROR
-	 * instead.
-	 */
-
 	/* Tokens must not be empty. */
 	if (!*token)
 	{
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token 2"),
 				 errdetail("Bearer token is empty.")));
-		return false;
+		return NULL;
 	}
 
 	/*
 	 * Make sure the token contains only allowed characters. Tokens may end
 	 * with any number of '=' characters.
 	 */
-	span = strspn(token, b64_set);
+	span = strspn(token, b64token_allowed_set);
 	while (token[span] == '=')
 		span++;
 
@@ -492,15 +501,35 @@ validate(Port *port, const char *auth, const char **logdetail)
 		 */
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token 3"),
 				 errdetail("Bearer token is not in the correct format.")));
-		return false;
+		return NULL;
 	}
 
-	/* Have the validator check the token. */
-	if (!run_validator_command(port, token))
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
 		return false;
 
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authenticated)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
 	if (port->hba->oauth_skip_usermap)
 	{
 		/*
@@ -513,7 +542,7 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Make sure the validator authenticated the user. */
-	if (!MyClientConnectionInfo.authn_id)
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
 		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
@@ -523,288 +552,42 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Finally, check the user map. */
-	ret = check_usermap(port->hba->usermap, port->user_name,
-						MyClientConnectionInfo.authn_id, false);
-	return (ret == STATUS_OK);
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
 }
 
-static bool
-run_validator_command(Port *port, const char *token)
-{
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = {0};
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*------
-	 * Since popen() is unidirectional, open up a pipe for the other
-	 * direction. Use CLOEXEC to ensure that our write end doesn't
-	 * accidentally get copied into child processes, which would prevent us
-	 * from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe(pipefd);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
-		return false;
-	}
-
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	if (!set_cloexec(wfd))
-	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*----------
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-
-					/*
-					 * TODO: decide how this string should be escaped. The
-					 * role is controlled by the client, so if we don't escape
-					 * it, command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some
-					 * other way. For this proof of concept, just be
-					 * incredibly strict about the characters that are allowed
-					 * in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "r");
-	if (!fh)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("opening pipe to OAuth validator: %m")));
-		goto cleanup;
-	}
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*-----
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
-	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
-	}
-
-	if (command.data)
-		pfree(command.data);
-
-	return success;
-}
-
-static bool
-check_exit(FILE **fh, const char *command)
+static void
+load_validator_library(void)
 {
-	int			rc;
+	OAuthValidatorModuleInit validator_init;
 
-	rc = ClosePipeStream(*fh);
-	*fh = NULL;
-
-	if (rc == -1)
-	{
-		/* pclose() itself failed. */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not close pipe to command \"%s\": %m",
-						command)));
-	}
-	else if (rc != 0)
-	{
-		char	   *reason = wait_result_to_str(rc);
-
-		ereport(COMMERROR,
-				(errmsg("failed to execute command \"%s\": %s",
-						command, reason)));
-
-		pfree(reason);
-	}
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
 
-	return (rc == 0);
-}
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
 
-static bool
-set_cloexec(int fd)
-{
-	int			flags;
-	int			rc;
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
 
-	flags = fcntl(fd, F_GETFD);
-	if (flags == -1)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not get fd flags for child pipe: %m")));
-		return false;
-	}
+	ValidatorCallbacks = (*validator_init) ();
 
-	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
-		return false;
-	}
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
 
-	return true;
+	before_shmem_exit(shutdown_validator_library, 0);
 }
 
-/*
- * XXX This should go away eventually and be replaced with either a proper
- * escape or a different strategy for communication with the validator command.
- */
-static bool
-username_ok_for_shell(const char *username)
+static void
+shutdown_validator_library(int code, Datum arg)
 {
-	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
-	static const char *const allowed =
-		"abcdefghijklmnopqrstuvwxyz"
-		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-		"0123456789-_./:";
-	size_t		span;
-
-	Assert(username && username[0]);	/* should have already been checked */
-
-	span = strspn(username, allowed);
-	if (username[span] != '\0')
-	{
-		ereport(COMMERROR,
-				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
-		return false;
-	}
-
-	return true;
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
 }
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 0f83f0d870..a479b679d1 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -4672,12 +4672,12 @@ struct config_string ConfigureNamesString[] =
 	},
 
 	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
 			NULL,
 			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
 		},
-		&oauth_validator_command,
+		&OAuthValidatorLibrary,
 		"",
 		NULL, NULL, NULL
 	},
diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..4f24b1aff6 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -31,7 +31,7 @@ OBJS = \
 all: pg_combinebackup
 
 pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/common/Makefile b/src/common/Makefile
index bbb5c3ab11..00e30e6bfe 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 5edab3b25a..5c081abfae 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -3,7 +3,7 @@
  * oauth.h
  *	  Interface to libpq/auth-oauth.c
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/oauth.h
@@ -16,7 +16,32 @@
 #include "libpq/libpq-be.h"
 #include "libpq/sasl.h"
 
-extern char *oauth_validator_command;
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authenticated;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
 /* Implementation */
 extern const pg_be_sasl_mech pg_be_oauth_mech;
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 8fbe742d38..dc54ce7189 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..1f874cd7f2
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,19 @@
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..d9c1d1d577
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..49e04b0afe
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,53 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+# Delete pg_hba.conf from the given node, add a new entry to it
+# and then execute a reload to refresh it.
+# XXX: this is copied from authentication/t/001_password and should be made
+# generic functionality if we end up using it.
+sub reset_pg_hba
+{
+	my $node = shift;
+	my $database = shift;
+	my $role = shift;
+	my $hba_method = shift;
+
+	unlink($node->data_dir . '/pg_hba.conf');
+	# just for testing purposes, use a continuation line
+	$node->append_conf('pg_hba.conf',
+		"local $database $role\\\n $hba_method");
+	$node->reload;
+	return;
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+
+my $port = $webserver->port();
+
+is($port, 18080, "Port is 18080");
+
+$webserver->setup();
+$webserver->run();
+
+$node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..c76d0599c5
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,71 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "XXX: validating %s for %s", token, role);
+
+	res->authenticated = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 44c1bb5afd..b758ad01cc 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2302,6 +2302,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2345,7 +2350,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..3ac90c3d0f
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,158 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+	my $port = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	$self->{'port'} = $port;
+
+	return $self;
+}
+
+sub setup
+{
+	my $self = shift;
+	my $tcp = getprotobyname('tcp');
+
+	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
+		or die "no socket";
+	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
+	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+
+	my $server_thread = threads->create(\&_listen, $self);
+	$server_thread->detach();
+}
+
+sub _listen
+{
+	my $self = shift;
+
+	listen($self->{'socket'}, SOMAXCONN) or die "fail to listen: $!";
+
+	while (1)
+	{
+		my $fh;
+		my %request;
+		my $remote = accept($fh, $self->{'socket'});
+		binmode $fh;
+
+		my ($method, $object, $prot) = split(/ /, <$fh>);
+		$request{'method'} = $method;
+		$request{'object'} = $object;
+		chomp($request{'object'});
+
+		local $/ = Socket::CRLF;
+		my $c = 0;
+		while(<$fh>)
+		{
+			chomp;
+			# Headers
+			if (/:/)
+			{
+				my ($field, $value) = split(/:/, $_, 2);
+				$value =~ s/^\s+//;
+				$request{'headers'}{lc $field} = $value;
+			}
+			# POST data
+			elsif (/^$/)
+			{
+				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
+					if defined $request{'headers'}{'content-length'};
+				last;
+			}
+		}
+
+		# Debug printing
+		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
+		# foreach my $h (keys(%{$request{'headers'}}))
+		#{
+		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
+		#}
+		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
+
+		if ($request{'object'} eq '/.well-known/openid-configuration')
+		{
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"issuer": "http://localhost:$self->{'port'}",
+				"token_endpoint": "http://localhost:$self->{'port'}/token",
+				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
+				"response_types_supported": ["token"],
+				"subject_types_supported": ["public"],
+				"id_token_signing_alg_values_supported": ["RS256"],
+				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/authorize')
+		{
+			print ": returning device_code\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"device_code": "postgres",
+				"user_code" : "postgresuser",
+				"interval" : 0,
+				"verification_uri" : "https://example.com/",
+				"expires-in": 5
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/token')
+		{
+			print ": returning token\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"access_token": "9243959234",
+				"token_type": "bearer"
+			}
+EOR
+		}
+		else
+		{
+			print ": returning default\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: text/html\r\n";
+			print $fh "\r\n";
+			print $fh "Ok\n";
+		}
+
+		close($fh);
+	}
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a0acd33ccf..74d2bd316f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1657,6 +1657,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -2980,6 +2981,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
-- 
2.34.1

v19-0009-WIP-Python-OAuth-provider-implementation.patch (application/octet-stream)
From 28756eda1c5fab37187d942494d68968966a882d Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 26 Feb 2024 16:24:32 -0800
Subject: [PATCH v19 9/9] WIP: Python OAuth provider implementation

---
 src/test/modules/oauth_validator/Makefile     |   2 +
 src/test/modules/oauth_validator/meson.build  |   3 +
 .../modules/oauth_validator/t/001_server.pl   |  12 +-
 .../modules/oauth_validator/t/oauth_server.py |  91 +++++++++++
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 141 +++---------------
 5 files changed, 124 insertions(+), 125 deletions(-)
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py

diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 1f874cd7f2..e93e01455a 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -1,3 +1,5 @@
+export PYTHON
+
 MODULES = validator
 PGFILEDESC = "validator - test OAuth validator module"
 
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index d9c1d1d577..3feba6f826 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -29,5 +29,8 @@ tests += {
     'tests': [
       't/001_server.pl',
     ],
+    'env': {
+      'PYTHON': python.path(),
+    },
   },
 }
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 49e04b0afe..bbfa69e442 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -34,20 +34,16 @@ $node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n"
 $node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
 $node->start;
 
-reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
-
-my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
 
 my $port = $webserver->port();
-
-is($port, 18080, "Port is 18080");
-
-$webserver->setup();
-$webserver->run();
+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:' . $port . '" scope="openid postgres"');
 
 $node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
 				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
 
+$webserver->stop();
 $node->stop;
 
 done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..7fa0b05a18
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,91 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def do_GET(self):
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        return {
+            "issuer": f"http://localhost:{port}",
+            "token_endpoint": f"http://localhost:{port}/token",
+            "device_authorization_endpoint": f"http://localhost:{port}/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": "https://example.com/",
+            "expires-in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        return {
+            "access_token": "9243959234",
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index 3ac90c3d0f..d96733f531 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -5,6 +5,7 @@ package PostgreSQL::Test::OAuthServer;
 use warnings;
 use strict;
 use threads;
+use Scalar::Util;
 use Socket;
 use IO::Select;
 
@@ -13,27 +14,13 @@ local *server_socket;
 sub new
 {
 	my $class = shift;
-	my $port = shift;
 
 	my $self = {};
 	bless($self, $class);
 
-	$self->{'port'} = $port;
-
 	return $self;
 }
 
-sub setup
-{
-	my $self = shift;
-	my $tcp = getprotobyname('tcp');
-
-	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
-		or die "no socket";
-	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
-	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
-}
-
 sub port
 {
 	my $self = shift;
@@ -44,115 +31,35 @@ sub port
 sub run
 {
 	my $self = shift;
+	my $port;
 
-	my $server_thread = threads->create(\&_listen, $self);
-	$server_thread->detach();
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
 }
 
-sub _listen
+sub stop
 {
 	my $self = shift;
 
-	listen($self->{'socket'}, SOMAXCONN) or die "fail to listen: $!";
-
-	while (1)
-	{
-		my $fh;
-		my %request;
-		my $remote = accept($fh, $self->{'socket'});
-		binmode $fh;
-
-		my ($method, $object, $prot) = split(/ /, <$fh>);
-		$request{'method'} = $method;
-		$request{'object'} = $object;
-		chomp($request{'object'});
-
-		local $/ = Socket::CRLF;
-		my $c = 0;
-		while(<$fh>)
-		{
-			chomp;
-			# Headers
-			if (/:/)
-			{
-				my ($field, $value) = split(/:/, $_, 2);
-				$value =~ s/^\s+//;
-				$request{'headers'}{lc $field} = $value;
-			}
-			# POST data
-			elsif (/^$/)
-			{
-				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
-					if defined $request{'headers'}{'content-length'};
-				last;
-			}
-		}
-
-		# Debug printing
-		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
-		# foreach my $h (keys(%{$request{'headers'}}))
-		#{
-		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
-		#}
-		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
-
-		if ($request{'object'} eq '/.well-known/openid-configuration')
-		{
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"issuer": "http://localhost:$self->{'port'}",
-				"token_endpoint": "http://localhost:$self->{'port'}/token",
-				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
-				"response_types_supported": ["token"],
-				"subject_types_supported": ["public"],
-				"id_token_signing_alg_values_supported": ["RS256"],
-				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/authorize')
-		{
-			print ": returning device_code\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"device_code": "postgres",
-				"user_code" : "postgresuser",
-				"interval" : 0,
-				"verification_uri" : "https://example.com/",
-				"expires-in": 5
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/token')
-		{
-			print ": returning token\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"access_token": "9243959234",
-				"token_type": "bearer"
-			}
-EOR
-		}
-		else
-		{
-			print ": returning default\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: text/html\r\n";
-			print $fh "\r\n";
-			print $fh "Ok\n";
-		}
-
-		close($fh);
-	}
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
 }
 
 1;
-- 
2.34.1

#99Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#98)
10 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Mar 1, 2024 at 9:46 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

> v19 gets us a bit closer by adding a missed import for Windows. I've
> also removed iddawc support, so the client patch is lighter.

v20 fixes a bunch more TODOs:
1) the client initial response is validated more closely
2) the server's invalid_token parameters are properly escaped into the
containing JSON (though, eventually, we probably want to just reject
invalid HBA settings instead of passing them through to the client)
3) Windows-specific responses have been recorded in the test suite
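
Item 2 boils down to building the failure response with a real JSON
encoder instead of interpolating the HBA-provided values into a format
string. A rough sketch of the response shape (the helper name is
invented for illustration; the actual backend code uses escape_json()):

```python
import json


def error_response(issuer: str, scope: str) -> str:
    """Hypothetical helper mirroring the server's invalid_token response.

    json.dumps() escapes whatever the HBA hands us, so a stray quote or
    backslash in issuer/scope can't break out of the containing JSON.
    """
    return json.dumps({
        "status": "invalid_token",
        "openid-configuration":
            issuer + "/.well-known/openid-configuration",
        "scope": scope,
    })
```

For instance, error_response("https://example.com", 'openid "x"') still
parses as well-formed JSON despite the embedded quotes.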

While poking at item 2, I was reminded that there's an alternative way
to get OAuth parameters from the server, and it's subtly incompatible
with the OpenID spec because OpenID didn't follow the rules for
.well-known URI construction [1]. :( Some sort of knob will be
required to switch the behaviors.
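
To illustrate the incompatibility: OpenID Connect Discovery appends its
suffix to the full issuer URL, while RFC 8414 inserts the well-known
segment between the host and the issuer's path, so the two
constructions only agree when the issuer has no path component. A quick
sketch (the issuer URL here is hypothetical):

```python
from urllib.parse import urlsplit

issuer = "https://example.com/alternate"  # hypothetical issuer with a path
parts = urlsplit(issuer)

# OpenID Connect Discovery: suffix appended to the whole issuer URL.
oidc = issuer.rstrip("/") + "/.well-known/openid-configuration"

# RFC 8414: well-known segment inserted between host and issuer path.
rfc8414 = (f"{parts.scheme}://{parts.netloc}"
           f"/.well-known/oauth-authorization-server{parts.path}")

print(oidc)     # https://example.com/alternate/.well-known/openid-configuration
print(rfc8414)  # https://example.com/.well-known/oauth-authorization-server/alternate
```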

I renamed the API for the validator module from res->authenticated to
res->authorized. Authentication is optional, but a validator *must*
check that the client it's talking to was authorized by the user to
access the server, whether or not the user is authenticated. (It may
additionally verify that the user is authorized to access the
database, or it may simply authenticate the user and defer to the
usermap.) Documenting that particular subtlety is going to be
interesting...
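
As a non-authoritative sketch of that contract (in Python rather than
the C validator API, and with invented claim names like "scope", "sub",
and "roles"), the three use cases map to result shapes along these
lines:

```python
def validate(claims: dict, role: str) -> dict:
    """Toy validator: decides (authorized, authn_id) from token claims.

    The claim names are assumptions for illustration; a real validator
    inspects whatever its issuer actually puts in the bearer token.
    """
    # Every validator must check that the token authorizes access to
    # *this* server, whether or not it authenticates the user.
    if "postgres" not in claims.get("scope", "").split():
        return {"authorized": False, "authn_id": None}

    # Authorization-only (pseudonymous): check the requested role
    # directly and report no identity, bypassing the usermap.
    if "sub" not in claims:
        ok = role in claims.get("roles", [])
        return {"authorized": ok, "authn_id": None}

    # Authentication: report an identity and let the pg_ident usermap
    # decide whether it may connect as the requested role.
    return {"authorized": True, "authn_id": claims["sub"]}
```

For example, validate({"scope": "openid postgres", "sub": "alice"},
"test") authorizes the connection and records "alice" as the
authenticated identity for later auditing.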

The tests now exercise different issuers for different users, which
will also be a good way to signal the server to respond in different
ways during the validator tests. It does raise the question: if a
third party provides an issuer-specific module, how do we switch
between that and some other module for a different user?

Andrew asked over at [2] if we could perhaps get 0001 in as well. I
think the main thing to figure out there is, is requiring linkage
against libpq (see 0008) going to be okay for the frontend binaries
that need JSON support? Or do we need to do something like moving
PQExpBuffer into src/common to simplify the dependency tree?

--Jacob

[1]: https://www.rfc-editor.org/rfc/rfc8414.html#section-5
[2]: /messages/by-id/682c8fff-355c-a04f-57ac-81055c4ccda8@dunslane.net

Attachments:

since-v19.diff.txt (text/plain; charset=US-ASCII)
 1:  ce06c03e2b =  1:  231c6fb165 common/jsonapi: support FRONTEND clients
 2:  6989b75153 =  2:  f78c79ea68 Refactor SASL exchange to return tri-state status
 3:  783bfe0b95 =  3:  10b6d2a6b9 Explicitly require password for SCRAM exchange
 4:  77550a47db =  4:  2a55d9c806 libpq: add OAUTHBEARER SASL mechanism
 5:  12ae7c4355 !  5:  5488ac25f5 backend: add OAUTHBEARER SASL mechanism
    @@ Commit message
         - port to platforms other than "modern Linux/BSD"
         - overhaul the communication with oauth_validator_command, which is
           currently a bad hack on OpenPipeStream()
    -    - implement more sanity checks on the OAUTHBEARER message format and
    -      tokens sent by the client
         - implement more helpful handling of HBA misconfigurations
    -    - properly interpolate JSON when generating error responses
         - use logdetail during auth failures
         - deal with role names that can't be safely passed to system() without
           shell-escaping
    @@ src/backend/libpq/auth-oauth.c (new)
     +#include "libpq/oauth.h"
     +#include "libpq/sasl.h"
     +#include "storage/fd.h"
    ++#include "utils/json.h"
     +
     +/* GUC */
     +char	   *oauth_validator_command;
    @@ src/backend/libpq/auth-oauth.c (new)
     +}
     +
     +/*
    ++ * Performs syntactic validation of a key and value from the initial client
    ++ * response. (Semantic validation of interesting values must be performed
    ++ * later.)
    ++ */
    ++static void
    ++validate_kvpair(const char *key, const char *val)
    ++{
    ++	/*-----
    ++	 * From Sec 3.1:
    ++	 *     key            = 1*(ALPHA)
    ++	 */
    ++	static const char *key_allowed_set =
    ++		"abcdefghijklmnopqrstuvwxyz"
    ++		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    ++
    ++	size_t		span;
    ++
    ++	if (!key[0])
    ++		ereport(ERROR,
    ++				(errcode(ERRCODE_PROTOCOL_VIOLATION),
    ++				 errmsg("malformed OAUTHBEARER message"),
    ++				 errdetail("Message contains an empty key name.")));
    ++
    ++	span = strspn(key, key_allowed_set);
    ++	if (key[span] != '\0')
    ++		ereport(ERROR,
    ++				(errcode(ERRCODE_PROTOCOL_VIOLATION),
    ++				 errmsg("malformed OAUTHBEARER message"),
    ++				 errdetail("Message contains an invalid key name.")));
    ++
    ++	/*-----
    ++	 * From Sec 3.1:
    ++	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
    ++	 *
    ++	 * The VCHAR (visible character) class is large; a loop is more
    ++	 * straightforward than strspn().
    ++	 */
    ++	for (; *val; ++val)
    ++	{
    ++		if (0x21 <= *val && *val <= 0x7E)
    ++			continue;			/* VCHAR */
    ++
    ++		switch (*val)
    ++		{
    ++			case ' ':
    ++			case '\t':
    ++			case '\r':
    ++			case '\n':
    ++				continue;		/* SP, HTAB, CR, LF */
    ++
    ++			default:
    ++				ereport(ERROR,
    ++						(errcode(ERRCODE_PROTOCOL_VIOLATION),
    ++						 errmsg("malformed OAUTHBEARER message"),
    ++						 errdetail("Message contains an invalid value.")));
    ++		}
    ++	}
    ++}
    ++
    ++/*
     + * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
     + * found, its value is returned.
     + */
    @@ src/backend/libpq/auth-oauth.c (new)
     +
     +		/*
     +		 * Find the end of the key name.
    -+		 *
    -+		 * TODO further validate the key/value grammar? empty keys, bad
    -+		 * chars...
     +		 */
     +		sep = strchr(pos, '=');
     +		if (!sep)
    @@ src/backend/libpq/auth-oauth.c (new)
     +		/* Both key and value are now safely terminated. */
     +		key = pos;
     +		value = sep + 1;
    ++		validate_kvpair(key, value);
     +
     +		if (!strcmp(key, AUTH_KEY))
     +		{
    @@ src/backend/libpq/auth-oauth.c (new)
     +generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
     +{
     +	StringInfoData buf;
    ++	StringInfoData issuer;
     +
     +	/*
     +	 * The admin needs to set an issuer and scope for OAuth to work. There's
    @@ src/backend/libpq/auth-oauth.c (new)
     +				 errmsg("OAuth is not properly configured for this user"),
     +				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
     +
    ++	/*------
    ++	 * Build the .well-known URI based on our issuer.
    ++	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
    ++	 * have to make this configurable too.
    ++	 */
    ++	initStringInfo(&issuer);
    ++	appendStringInfoString(&issuer, ctx->issuer);
    ++	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
     +
     +	initStringInfo(&buf);
     +
     +	/*
    -+	 * TODO: JSON escaping
    ++	 * TODO: note that escaping here should be belt-and-suspenders, since
    ++	 * escapable characters aren't valid in either the issuer URI or the scope
    ++	 * list, but the HBA doesn't enforce that yet.
     +	 */
    -+	appendStringInfo(&buf,
    -+					 "{ "
    -+					 "\"status\": \"invalid_token\", "
    -+					 "\"openid-configuration\": \"%s/.well-known/openid-configuration\", "
    -+					 "\"scope\": \"%s\" "
    -+					 "}",
    -+					 ctx->issuer, ctx->scope);
    ++	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
    ++
    ++	appendStringInfoString(&buf, "\"openid-configuration\": ");
    ++	escape_json(&buf, issuer.data);
    ++	pfree(issuer.data);
    ++
    ++	appendStringInfoString(&buf, ", \"scope\": ");
    ++	escape_json(&buf, ctx->scope);
    ++
    ++	appendStringInfoString(&buf, " }");
     +
     +	*output = buf.data;
     +	*outputlen = buf.len;
 6:  707edf9314 !  6:  fdbad1976a Introduce OAuth validator libraries
    @@ src/backend/libpq/auth-oauth.c
      #include "libpq/sasl.h"
      #include "storage/fd.h"
     +#include "storage/ipc.h"
    + #include "utils/json.h"
      
      /* GUC */
     -char	   *oauth_validator_command;
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     +
     +	/* Ensure that we have a correct token to validate */
     +	if (!(token = validate_token_format(auth)))
    - 		return false;
    - 
    ++		return false;
    ++
     +	/* Call the validation function from the validator module */
     +	ret = ValidatorCallbacks->validate_cb(validator_module_state,
     +										  token, port->user_name);
     +
    -+	if (!ret->authenticated)
    -+		return false;
    -+
    ++	if (!ret->authorized)
    + 		return false;
    + 
     +	if (ret->authn_id)
     +		set_authn_id(port, ret->authn_id);
     +
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -	ret = check_usermap(port->hba->usermap, port->user_name,
     -						MyClientConnectionInfo.authn_id, false);
     -	return (ret == STATUS_OK);
    -+	map_status = check_usermap(port->hba->usermap, port->user_name,
    -+							   MyClientConnectionInfo.authn_id, false);
    -+	return (map_status == STATUS_OK);
    - }
    - 
    +-}
    +-
     -static bool
     -run_validator_command(Port *port, const char *token)
     -{
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -		pfree(command.data);
     -
     -	return success;
    --}
    --
    ++	map_status = check_usermap(port->hba->usermap, port->user_name,
    ++							   MyClientConnectionInfo.authn_id, false);
    ++	return (map_status == STATUS_OK);
    + }
    + 
     -static bool
     -check_exit(FILE **fh, const char *command)
     +static void
     +load_validator_library(void)
      {
     -	int			rc;
    -+	OAuthValidatorModuleInit validator_init;
    - 
    +-
     -	rc = ClosePipeStream(*fh);
     -	*fh = NULL;
     -
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -	else if (rc != 0)
     -	{
     -		char	   *reason = wait_result_to_str(rc);
    --
    ++	OAuthValidatorModuleInit validator_init;
    + 
     -		ereport(COMMERROR,
     -				(errmsg("failed to execute command \"%s\": %s",
     -						command, reason)));
    --
    --		pfree(reason);
    --	}
     +	if (OAuthValidatorLibrary[0] == '\0')
     +		ereport(ERROR,
     +				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
     +				 errmsg("oauth_validator_library is not set")));
      
    --	return (rc == 0);
    --}
    +-		pfree(reason);
    +-	}
     +	validator_init = (OAuthValidatorModuleInit)
     +		load_external_function(OAuthValidatorLibrary,
     +							   "_PG_oauth_validator_module_init", false, NULL);
      
    --static bool
    --set_cloexec(int fd)
    --{
    --	int			flags;
    --	int			rc;
    +-	return (rc == 0);
    +-}
     +	if (validator_init == NULL)
     +		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
     +						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
      
    +-static bool
    +-set_cloexec(int fd)
    +-{
    +-	int			flags;
    +-	int			rc;
    ++	ValidatorCallbacks = (*validator_init) ();
    + 
     -	flags = fcntl(fd, F_GETFD);
     -	if (flags == -1)
     -	{
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth, const cha
     -				 errmsg("could not get fd flags for child pipe: %m")));
     -		return false;
     -	}
    -+	ValidatorCallbacks = (*validator_init) ();
    - 
    --	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    --	if (rc < 0)
    --	{
    --		ereport(COMMERROR,
    --				(errcode_for_file_access(),
    --				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
    --		return false;
    --	}
     +	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
     +	if (ValidatorCallbacks->startup_cb != NULL)
     +		ValidatorCallbacks->startup_cb(validator_module_state);
      
    +-	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    +-	if (rc < 0)
    +-	{
    +-		ereport(COMMERROR,
    +-				(errcode_for_file_access(),
    +-				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
    +-		return false;
    +-	}
    +-
     -	return true;
     +	before_shmem_exit(shutdown_validator_library, 0);
      }
    @@ src/include/libpq/oauth.h
     +
     +typedef struct ValidatorModuleResult
     +{
    -+	bool		authenticated;
    ++	bool		authorized;
     +	char	   *authn_id;
     +} ValidatorModuleResult;
     +
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +use PostgreSQL::Test::OAuthServer;
     +use Test::More;
     +
    -+# Delete pg_hba.conf from the given node, add a new entry to it
    -+# and then execute a reload to refresh it.
    -+# XXX: this is copied from authentication/t/001_password and should be made
    -+# generic functionality if we end up using it.
    -+sub reset_pg_hba
    -+{
    -+	my $node = shift;
    -+	my $database = shift;
    -+	my $role = shift;
    -+	my $hba_method = shift;
    -+
    -+	unlink($node->data_dir . '/pg_hba.conf');
    -+	# just for testing purposes, use a continuation line
    -+	$node->append_conf('pg_hba.conf',
    -+		"local $database $role\\\n $hba_method");
    -+	$node->reload;
    -+	return;
    -+}
    -+
     +my $node = PostgreSQL::Test::Cluster->new('primary');
     +$node->init;
    ++$node->append_conf('postgresql.conf', "log_connections = on\n");
     +$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
     +$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
     +$node->start;
     +
    -+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
    ++$node->safe_psql('postgres', 'CREATE USER test;');
    ++$node->safe_psql('postgres', 'CREATE USER testalt;');
    ++
    ++my $issuer = "127.0.0.1:18080";
    ++
    ++unlink($node->data_dir . '/pg_hba.conf');
    ++$node->append_conf('pg_hba.conf', qq{
    ++local all test    oauth issuer="$issuer"           scope="openid postgres"
    ++local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
    ++});
    ++$node->reload;
     +
     +my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
     +
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$webserver->setup();
     +$webserver->run();
     +
    -+$node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
    ++my ($log_start, $log_end);
    ++$log_start = $node->wait_for_log(qr/reloading configuration files/);
    ++
    ++my $user = "test";
    ++$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
     +				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
    ++$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    ++$node->log_check("user $user: validator receives correct parameters", $log_start,
    ++				 log_like => [
    ++					 qr/oauth_validator: token="9243959234", role="$user"/,
    ++					 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    ++				 ]);
    ++$node->log_check("user $user: validator sets authenticated identity", $log_start,
    ++				 log_like => [
    ++					 qr/connection authenticated: identity="test" method=oauth/,
    ++				 ]);
    ++$log_start = $log_end;
    ++
    ++# The /alternate issuer uses slightly different parameters.
    ++$user = "testalt";
    ++$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
    ++				  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@);
    ++
    ++$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    ++$node->log_check("user $user: validator receives correct parameters", $log_start,
    ++				 log_like => [
    ++					 qr/oauth_validator: token="9243959234-alt", role="$user"/,
    ++					 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
    ++				 ]);
    ++$node->log_check("user $user: validator sets authenticated identity", $log_start,
    ++				 log_like => [
    ++					 qr/connection authenticated: identity="testalt" method=oauth/,
    ++				 ]);
    ++$log_start = $log_end;
    ++
     +$node->stop;
     +
     +done_testing();
    @@ src/test/modules/oauth_validator/validator.c (new)
     +
     +#include "fmgr.h"
     +#include "libpq/oauth.h"
    ++#include "miscadmin.h"
     +#include "utils/memutils.h"
     +
     +PG_MODULE_MAGIC;
    @@ src/test/modules/oauth_validator/validator.c (new)
     +	return &validator_callbacks;
     +}
     +
    ++#define PRIVATE_COOKIE ((void *) 13579)
    ++
     +static void
     +validator_startup(ValidatorModuleState *state)
     +{
    -+	/* do nothing */
    ++	state->private_data = PRIVATE_COOKIE;
     +}
     +
     +static void
    @@ src/test/modules/oauth_validator/validator.c (new)
     +{
     +	ValidatorModuleResult *res;
     +
    ++	/* Check to make sure our private state still exists. */
    ++	if (state->private_data != PRIVATE_COOKIE)
    ++		elog(ERROR, "oauth_validator: private state cookie changed to %p",
    ++				state->private_data);
    ++
     +	res = palloc(sizeof(ValidatorModuleResult));
     +
    -+	elog(LOG, "XXX: validating %s for %s", token, role);
    ++	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
    ++	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
    ++		 MyProcPort->hba->oauth_issuer,
    ++		 MyProcPort->hba->oauth_scope);
     +
    -+	res->authenticated = true;
    ++	res->authorized = true;
     +	res->authn_id = pstrdup(role);
     +
     +	return res;
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +		#}
     +		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
     +
    ++		my $alternate = 0;
    ++		if ($request{'object'} =~ qr|^/alternate(/.*)$|)
    ++		{
    ++			$alternate = 1;
    ++			$request{'object'} = $1;
    ++		}
    ++
     +		if ($request{'object'} eq '/.well-known/openid-configuration')
     +		{
    ++			my $issuer = "http://localhost:$self->{'port'}";
    ++			if ($alternate)
    ++			{
    ++				$issuer .= "/alternate";
    ++			}
    ++
     +			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
     +			print $fh "Content-Type: application/json\r\n";
     +			print $fh "\r\n";
     +			print $fh <<EOR;
     +			{
    -+				"issuer": "http://localhost:$self->{'port'}",
    -+				"token_endpoint": "http://localhost:$self->{'port'}/token",
    -+				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
    ++				"issuer": "$issuer",
    ++				"token_endpoint": "$issuer/token",
    ++				"device_authorization_endpoint": "$issuer/authorize",
     +				"response_types_supported": ["token"],
     +				"subject_types_supported": ["public"],
     +				"id_token_signing_alg_values_supported": ["RS256"],
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +		}
     +		elsif ($request{'object'} eq '/authorize')
     +		{
    ++			my $uri = "https://example.com/";
    ++			if ($alternate)
    ++			{
    ++				$uri = "https://example.org/";
    ++			}
    ++
     +			print ": returning device_code\n";
     +			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
     +			print $fh "Content-Type: application/json\r\n";
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +				"device_code": "postgres",
     +				"user_code" : "postgresuser",
     +				"interval" : 0,
    -+				"verification_uri" : "https://example.com/",
    ++				"verification_uri" : "$uri",
     +				"expires-in": 5
     +			}
     +EOR
     +		}
     +		elsif ($request{'object'} eq '/token')
     +		{
    ++			my $token = "9243959234";
    ++			if ($alternate)
    ++			{
    ++				$token .= "-alt";
    ++			}
    ++
     +			print ": returning token\n";
     +			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
     +			print $fh "Content-Type: application/json\r\n";
     +			print $fh "\r\n";
     +			print $fh <<EOR;
     +			{
    -+				"access_token": "9243959234",
    ++				"access_token": "$token",
     +				"token_type": "bearer"
     +			}
     +EOR
 7:  47236c5644 !  7:  e1da97fb50 Add pytest suite for OAuth
    @@ src/test/python/client/test_client.py (new)
     +
     +import pq3
     +
    ++from .test_oauth import alt_patterns
    ++
     +
     +def finish_handshake(conn):
     +    """
    @@ src/test/python/client/test_client.py (new)
     +    sock, client = accept()
     +    sock.close()
     +
    -+    expected = "server closed the connection unexpectedly"
    ++    expected = alt_patterns(
    ++        "server closed the connection unexpectedly",
    ++        # On some platforms, ECONNABORTED gets set instead.
    ++        "Software caused connection abort",
    ++    )
     +    with pytest.raises(psycopg2.OperationalError, match=expected):
     +        client.check_completed()
     +
    @@ src/test/python/pq3.py (new)
     +Pq3 = Struct(
     +    "type" / types,
     +    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
    -+    "payload" / FixedSized(this.len - 4, Default(_payload, b"")),
    ++    "payload"
    ++    / IfThenElse(
    ++        # Allow tests to explicitly pass an incorrect length during testing, by
    ++        # not enforcing a FixedSized during build. (The len calculation above
    ++        # defaults to the correct size.)
    ++        this._building,
    ++        Optional(_payload),
    ++        FixedSized(this.len - 4, Default(_payload, b"")),
    ++    ),
     +)
     +
     +
    @@ src/test/python/server/oauthtest.c (new)
     +
     +	if (reflect_role)
     +	{
    -+		res->authenticated = true;
    ++		res->authorized = true;
     +		res->authn_id = pstrdup(role);	/* TODO: constify? */
     +	}
     +	else
     +	{
     +		if (*expected_bearer && !strcmp(token, expected_bearer))
    -+			res->authenticated = true;
    ++			res->authorized = true;
     +		if (set_authn_id)
     +			res->authn_id = authn_id;
     +	}
    @@ src/test/python/server/test_oauth.py (new)
     +
     +import psycopg2
     +import pytest
    ++from construct import Container
     +from psycopg2 import sql
     +
     +import pq3
    @@ src/test/python/server/test_oauth.py (new)
     +
     +
     +@contextlib.contextmanager
    -+def prepend_file(path, lines):
    ++def prepend_file(path, lines, *, suffix=".bak"):
     +    """
     +    A context manager that prepends a file on disk with the desired lines of
     +    text. When the context manager is exited, the file will be restored to its
     +    original contents.
     +    """
     +    # First make a backup of the original file.
    -+    bak = path + ".bak"
    ++    bak = path + suffix
     +    shutil.copy2(path, bak)
     +
     +    try:
    @@ src/test/python/server/test_oauth.py (new)
     +        b"Bearer trailingtab\t",
     +        b"Bearer me@example.com",
     +        b"Beare abcd",
    ++        b" Bearer leadingspace",
     +        b'OAuth realm="Example"',
     +        b"",
     +    ],
    @@ src/test/python/server/test_oauth.py (new)
     +            id="error response in initial message",
     +        ),
     +        pytest.param(
    -+            pq3.types.PasswordMessage,
    -+            b"x" * (MAX_SASL_MESSAGE_LENGTH + 1),
    ++            None,
    ++            # Sending an actual 65k packet results in ECONNRESET on Windows, and
    ++            # it floods the tests' connection log uselessly, so just fake the
    ++            # length and send a smaller number of bytes.
    ++            dict(
    ++                type=pq3.types.PasswordMessage,
    ++                len=MAX_SASL_MESSAGE_LENGTH + 1,
    ++                payload=b"x" * 512,
    ++            ),
     +            ExpectedError(
     +                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
     +            ),
    @@ src/test/python/server/test_oauth.py (new)
     +            ),
     +            id="multiple auth values",
     +        ),
    ++        pytest.param(
    ++            pq3.types.PasswordMessage,
    ++            pq3.SASLInitialResponse.build(
    ++                dict(
    ++                    name=b"OAUTHBEARER",
    ++                    data=b"y,,\x01=\x01\x01",
    ++                )
    ++            ),
    ++            ExpectedError(
    ++                PROTOCOL_VIOLATION_ERRCODE,
    ++                "malformed OAUTHBEARER message",
    ++                "empty key name",
    ++            ),
    ++            id="empty key",
    ++        ),
    ++        pytest.param(
    ++            pq3.types.PasswordMessage,
    ++            pq3.SASLInitialResponse.build(
    ++                dict(
    ++                    name=b"OAUTHBEARER",
    ++                    data=b"y,,\x01my key= \x01\x01",
    ++                )
    ++            ),
    ++            ExpectedError(
    ++                PROTOCOL_VIOLATION_ERRCODE,
    ++                "malformed OAUTHBEARER message",
    ++                "invalid key name",
    ++            ),
    ++            id="whitespace in key name",
    ++        ),
    ++        pytest.param(
    ++            pq3.types.PasswordMessage,
    ++            pq3.SASLInitialResponse.build(
    ++                dict(
    ++                    name=b"OAUTHBEARER",
    ++                    data=b"y,,\x01key=a\x05b\x01\x01",
    ++                )
    ++            ),
    ++            ExpectedError(
    ++                PROTOCOL_VIOLATION_ERRCODE,
    ++                "malformed OAUTHBEARER message",
    ++                "invalid value",
    ++            ),
    ++            id="junk in value",
    ++        ),
     +    ],
     +)
     +def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
     +    begin_oauth_handshake(conn, oauth_ctx)
     +
     +    # The server expects a SASL response; give it something else instead.
    -+    if not isinstance(payload, dict):
    -+        payload = dict(payload_data=payload)
    -+    pq3.send(conn, type, **payload)
    ++    if type is not None:
    ++        # Build a new packet of the desired type.
    ++        if not isinstance(payload, dict):
    ++            payload = dict(payload_data=payload)
    ++        pq3.send(conn, type, **payload)
    ++    else:
    ++        # The test has a custom packet to send. (The only reason to do this is
    ++        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
    ++        # don't use the standard pq3.send().)
    ++        conn.write(pq3.Pq3.build(payload))
    ++        conn.end_packet(Container(payload))
     +
     +    resp = pq3.recv1(conn)
     +    err.match(resp)
    @@ src/test/python/server/test_oauth.py (new)
     +    row = resp.payload
     +    expected = b"oauth:" + username.encode("utf-8")
     +    assert row.columns == [expected]
    ++
    ++
    ++@pytest.fixture
    ++def odd_oauth_ctx(postgres_instance, oauth_ctx):
    ++    """
    ++    Adds an HBA entry with messed up issuer/scope settings, to pin the server
    ++    behavior.
    ++
    ++    TODO: these should really be rejected in the HBA rather than passed through
    ++    by the server.
    ++    """
    ++    id = secrets.token_hex(4)
    ++
    ++    class Context:
    ++        user = oauth_ctx.user
    ++        dbname = oauth_ctx.dbname
    ++
    ++        # Both of these embedded double-quotes are invalid; they're prohibited
    ++        # in both URLs and OAuth scope identifiers.
    ++        issuer = oauth_ctx.issuer + '/"/'
    ++        scope = oauth_ctx.scope + ' quo"ted'
    ++
    ++    ctx = Context()
    ++    hba_issuer = ctx.issuer.replace('"', '""')
    ++    hba_scope = ctx.scope.replace('"', '""')
    ++    hba_lines = [
    ++        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
    ++    ]
    ++
    ++    if platform.system() == "Windows":
    ++        # XXX why is 'samehost' not behaving as expected on Windows?
    ++        for l in list(hba_lines):
    ++            hba_lines.append(l.replace("samehost", "::1/128"))
    ++
    ++    host, port = postgres_instance
    ++    conn = psycopg2.connect(host=host, port=port)
    ++    conn.autocommit = True
    ++
    ++    with contextlib.closing(conn):
    ++        c = conn.cursor()
    ++
    ++        # Replace pg_hba. Note that it's already been replaced once by
    ++        # oauth_ctx, so use a different backup prefix in prepend_file().
    ++        c.execute("SHOW hba_file;")
    ++        hba = c.fetchone()[0]
    ++
    ++        with prepend_file(hba, hba_lines, suffix=".bak2"):
    ++            c.execute("SELECT pg_reload_conf();")
    ++
    ++            yield ctx
    ++
    ++        # Put things back the way they were.
    ++        c.execute("SELECT pg_reload_conf();")
    ++
    ++
    ++def test_odd_server_response(odd_oauth_ctx, connect):
    ++    """
    ++    Verifies that the server is correctly escaping the JSON in its failure
    ++    response.
    ++    """
    ++    conn = connect()
    ++    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
    ++
    ++    # Send an empty auth initial response, which will force an authn failure.
    ++    send_initial_response(conn, auth=b"")
    ++
    ++    expect_handshake_failure(conn, odd_oauth_ctx)
     
      ## src/test/python/server/test_server.py (new) ##
     @@
    @@ src/test/python/test_pq3.py (new)
     +    [
     +        pytest.param(
     +            dict(type=b"*", len=5),
    -+            b"*\x00\x00\x00\x05\x00",
    ++            b"*\x00\x00\x00\x05",
     +            id="type and len set explicitly",
     +        ),
     +        pytest.param(
    @@ src/test/python/test_pq3.py (new)
     +            id="implied len with payload",
     +        ),
     +        pytest.param(
    ++            dict(type=b"*", len=12, payload=b"1234"),
    ++            b"*\x00\x00\x00\x0C1234",
    ++            id="overridden len (payload underflow)",
    ++        ),
    ++        pytest.param(
    ++            dict(type=b"*", len=5, payload=b"1234"),
    ++            b"*\x00\x00\x00\x051234",
    ++            id="overridden len (payload overflow)",
    ++        ),
    ++        pytest.param(
     +            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
     +            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
     +            id="implied len/type for AuthenticationOK",
 8:  116e17eeee =  8:  185f9902fd XXX temporary patches to build and test
 9:  28756eda1c !  9:  c4d850a7c4 WIP: Python OAuth provider implementation
    @@ src/test/modules/oauth_validator/meson.build: tests += {
      }
     
      ## src/test/modules/oauth_validator/t/001_server.pl ##
    -@@ src/test/modules/oauth_validator/t/001_server.pl: $node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n"
    - $node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
    - $node->start;
    +@@ src/test/modules/oauth_validator/t/001_server.pl: $node->start;
    + $node->safe_psql('postgres', 'CREATE USER test;');
    + $node->safe_psql('postgres', 'CREATE USER testalt;');
      
    --reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:18080" scope="openid postgres"');
    --
    --my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
    +-my $issuer = "127.0.0.1:18080";
     +my $webserver = PostgreSQL::Test::OAuthServer->new();
     +$webserver->run();
    ++
    ++my $port = $webserver->port();
    ++my $issuer = "127.0.0.1:$port";
      
    - my $port = $webserver->port();
    + unlink($node->data_dir . '/pg_hba.conf');
    + $node->append_conf('pg_hba.conf', qq{
    +@@ src/test/modules/oauth_validator/t/001_server.pl: local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
    + });
    + $node->reload;
    + 
    +-my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
    +-
    +-my $port = $webserver->port();
     -
     -is($port, 18080, "Port is 18080");
     -
     -$webserver->setup();
     -$webserver->run();
    -+reset_pg_hba($node, 'all', 'all', 'oauth issuer="127.0.0.1:' . $port . '" scope="openid postgres"');
    +-
    + my ($log_start, $log_end);
    + $log_start = $node->wait_for_log(qr/reloading configuration files/);
      
    - $node->connect_ok("dbname=postgres oauth_client_id=f02c6361-0635", "connect",
    - 				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
    +@@ src/test/modules/oauth_validator/t/001_server.pl: $node->log_check("user $user: validator sets authenticated identity", $log_start
    + 				 ]);
    + $log_start = $log_end;
      
     +$webserver->stop();
      $node->stop;
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +class OAuthHandler(http.server.BaseHTTPRequestHandler):
     +    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
     +
    ++    def _check_issuer(self):
    ++        """
    ++        Switches the behavior of the provider depending on the issuer URI.
    ++        """
    ++        self._alt_issuer = self.path.startswith("/alternate/")
    ++        if self._alt_issuer:
    ++            self.path = self.path.removeprefix("/alternate")
    ++
     +    def do_GET(self):
    ++        self._check_issuer()
    ++
     +        if self.path == "/.well-known/openid-configuration":
     +            resp = self.config()
     +        else:
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        self._send_json(resp)
     +
     +    def do_POST(self):
    ++        self._check_issuer()
    ++
     +        if self.path == "/authorize":
     +            resp = self.authorization()
     +        elif self.path == "/token":
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +
     +    def config(self) -> JsonObject:
     +        port = self.server.socket.getsockname()[1]
    ++        issuer = f"http://localhost:{port}"
    ++        if self._alt_issuer:
    ++            issuer += "/alternate"
     +
     +        return {
    -+            "issuer": f"http://localhost:{port}",
    -+            "token_endpoint": f"http://localhost:{port}/token",
    -+            "device_authorization_endpoint": f"http://localhost:{port}/authorize",
    ++            "issuer": issuer,
    ++            "token_endpoint": issuer + "/token",
    ++            "device_authorization_endpoint": issuer + "/authorize",
     +            "response_types_supported": ["token"],
     +            "subject_types_supported": ["public"],
     +            "id_token_signing_alg_values_supported": ["RS256"],
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        }
     +
     +    def authorization(self) -> JsonObject:
    ++        uri = "https://example.com/"
    ++        if self._alt_issuer:
    ++            uri = "https://example.org/"
    ++
     +        return {
     +            "device_code": "postgres",
     +            "user_code": "postgresuser",
     +            "interval": 0,
    -+            "verification_uri": "https://example.com/",
    ++            "verification_uri": uri,
     +            "expires-in": 5,
     +        }
     +
     +    def token(self) -> JsonObject:
    ++        token = "9243959234"
    ++        if self._alt_issuer:
    ++            token += "-alt"
    ++
     +        return {
    -+            "access_token": "9243959234",
    ++            "access_token": token,
     +            "token_type": "bearer",
     +        }
     +
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm: sub port
     -		#}
     -		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
     -
    +-		my $alternate = 0;
    +-		if ($request{'object'} =~ qr|^/alternate(/.*)$|)
    +-		{
    +-			$alternate = 1;
    +-			$request{'object'} = $1;
    +-		}
    +-
     -		if ($request{'object'} eq '/.well-known/openid-configuration')
     -		{
    +-			my $issuer = "http://localhost:$self->{'port'}";
    +-			if ($alternate)
    +-			{
    +-				$issuer .= "/alternate";
    +-			}
    +-
     -			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
     -			print $fh "Content-Type: application/json\r\n";
     -			print $fh "\r\n";
     -			print $fh <<EOR;
     -			{
    --				"issuer": "http://localhost:$self->{'port'}",
    --				"token_endpoint": "http://localhost:$self->{'port'}/token",
    --				"device_authorization_endpoint": "http://localhost:$self->{'port'}/authorize",
    +-				"issuer": "$issuer",
    +-				"token_endpoint": "$issuer/token",
    +-				"device_authorization_endpoint": "$issuer/authorize",
     -				"response_types_supported": ["token"],
     -				"subject_types_supported": ["public"],
     -				"id_token_signing_alg_values_supported": ["RS256"],
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm: sub port
     -		}
     -		elsif ($request{'object'} eq '/authorize')
     -		{
    +-			my $uri = "https://example.com/";
    +-			if ($alternate)
    +-			{
    +-				$uri = "https://example.org/";
    +-			}
    +-
     -			print ": returning device_code\n";
     -			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
     -			print $fh "Content-Type: application/json\r\n";
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm: sub port
     -				"device_code": "postgres",
     -				"user_code" : "postgresuser",
     -				"interval" : 0,
    --				"verification_uri" : "https://example.com/",
    +-				"verification_uri" : "$uri",
     -				"expires-in": 5
     -			}
     -EOR
     -		}
     -		elsif ($request{'object'} eq '/token')
     -		{
    +-			my $token = "9243959234";
    +-			if ($alternate)
    +-			{
    +-				$token .= "-alt";
    +-			}
    +-
     -			print ": returning token\n";
     -			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
     -			print $fh "Content-Type: application/json\r\n";
     -			print $fh "\r\n";
     -			print $fh <<EOR;
     -			{
    --				"access_token": "9243959234",
    +-				"access_token": "$token",
     -				"token_type": "bearer"
     -			}
     -EOR
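
The issuer-switching convention used by both test providers above (the Perl server being removed and the Python one replacing it) is worth stating on its own: a single handler serves two logical issuers by peeling an "/alternate" prefix off the request path and folding it back into the advertised endpoint URLs. A minimal standalone sketch of that dispatch — function names here are illustrative, not taken from the patch — might look like:

```python
def split_issuer(path: str, port: int):
    """Return (issuer_uri, remaining_path) for a request path.

    Requests under /alternate/ belong to a second logical issuer
    served from the same socket, mirroring the test providers.
    """
    alternate = path.startswith("/alternate/")
    if alternate:
        path = path[len("/alternate"):]
    issuer = f"http://localhost:{port}"
    if alternate:
        issuer += "/alternate"
    return issuer, path


def openid_configuration(path: str, port: int):
    """Build a discovery document the way the mock server does:
    every endpoint URL is derived from the issuer that was asked for."""
    issuer, _ = split_issuer(path, port)
    return {
        "issuer": issuer,
        "token_endpoint": issuer + "/token",
        "device_authorization_endpoint": issuer + "/authorize",
    }
```

With this shape, the two HBA lines in 001_server.pl (`issuer="$issuer"` and `issuer="$issuer/alternate"`) resolve to two distinct providers behind one listening port, which is what lets the tests pin per-issuer behavior (different verification URI, different bearer token) without a second server process.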
Attachment: v20-0003-Explicitly-require-password-for-SCRAM-exchange.patch (application/octet-stream)
From 10b6d2a6b9bc81357bda8438371821996ba5065d Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:19:55 +0100
Subject: [PATCH v20 3/9] Explicitly require password for SCRAM exchange

This refactors the SASL init flow to set password_needed on the two
SCRAM exchanges currently supported. The code already required this
but was set up in such a way that all SASL exchanges required using
a password, a restriction which may not hold for all exchanges (the
example at hand being the proposed OAUTHBEARER exchange).

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index cf8af4c62e..81ec08485d 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -425,7 +425,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	int			initialresponselen;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
-	char	   *password;
+	char	   *password = NULL;
 	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
@@ -446,8 +446,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 	/*
 	 * Parse the list of SASL authentication mechanisms in the
 	 * AuthenticationSASL message, and select the best mechanism that we
-	 * support.  SCRAM-SHA-256-PLUS and SCRAM-SHA-256 are the only ones
-	 * supported at the moment, listed by order of decreasing importance.
+	 * support. Mechanisms are listed by order of decreasing importance.
 	 */
 	selected_mechanism = NULL;
 	for (;;)
@@ -487,6 +486,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 				{
 					selected_mechanism = SCRAM_SHA_256_PLUS_NAME;
 					conn->sasl = &pg_scram_mech;
+					conn->password_needed = true;
 				}
 #else
 				/*
@@ -522,6 +522,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		{
 			selected_mechanism = SCRAM_SHA_256_NAME;
 			conn->sasl = &pg_scram_mech;
+			conn->password_needed = true;
 		}
 	}
 
@@ -545,18 +546,19 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	/*
 	 * First, select the password to use for the exchange, complaining if
-	 * there isn't one.  Currently, all supported SASL mechanisms require a
-	 * password, so we can just go ahead here without further distinction.
+	 * there isn't one and the selected SASL mechanism needs it.
 	 */
-	conn->password_needed = true;
-	password = conn->connhost[conn->whichhost].password;
-	if (password == NULL)
-		password = conn->pgpass;
-	if (password == NULL || password[0] == '\0')
+	if (conn->password_needed)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 PQnoPasswordSupplied);
-		goto error;
+		password = conn->connhost[conn->whichhost].password;
+		if (password == NULL)
+			password = conn->pgpass;
+		if (password == NULL || password[0] == '\0')
+		{
+			appendPQExpBufferStr(&conn->errorMessage,
+								 PQnoPasswordSupplied);
+			goto error;
+		}
 	}
 
 	Assert(conn->sasl);
-- 
2.34.1
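
The control flow this patch introduces — look up a password only after a mechanism that needs one has been selected — can be restated as a short sketch. This is Python for brevity; the argument names mirror the C fields, but the function itself is hypothetical:

```python
def select_sasl_password(password_needed, connhost_password, pgpass):
    """Mirror of the patched pg_SASL_init() password selection.

    A password is required only if the chosen SASL mechanism set
    password_needed (the SCRAM variants do; OAUTHBEARER would not).
    """
    if not password_needed:
        # Mechanism runs without a password; don't complain about one.
        return None
    # Prefer the per-host connection password, then fall back to pgpass.
    password = connhost_password if connhost_password is not None else pgpass
    if not password:
        # None or empty string: fail, as PQnoPasswordSupplied does in C.
        raise ValueError("no password supplied")
    return password
```

The point of the refactor is the early return: previously the lookup-and-complain step ran unconditionally, which would have made a passwordless mechanism like OAUTHBEARER fail before its exchange even started.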

Attachment: v20-0002-Refactor-SASL-exchange-to-return-tri-state-statu.patch (application/octet-stream)
From f78c79ea68dd714864609e2362533ccf5c0934d1 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Fri, 23 Feb 2024 11:09:54 +0100
Subject: [PATCH v20 2/9] Refactor SASL exchange to return tri-state status

The SASL exchange callback returned state in two output variables:
done and success.  This refactors that logic by introducing a new
return variable of type SASLStatus which makes the code easier to
read and understand, and prepares for future SASL exchanges which
operate asynchronously.

This was extracted from a larger patchset to introduce OAuthBearer
authentication and authorization.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 src/interfaces/libpq/fe-auth-sasl.h  | 31 +++++++----
 src/interfaces/libpq/fe-auth-scram.c | 78 +++++++++++++---------------
 src/interfaces/libpq/fe-auth.c       | 28 +++++-----
 src/tools/pgindent/typedefs.list     |  1 +
 4 files changed, 71 insertions(+), 67 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index ee5d1525b5..4eecf53a15 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -21,6 +21,17 @@
 
 #include "libpq-fe.h"
 
+/*
+ * Possible states for the SASL exchange, see the comment on exchange for an
+ * explanation of these.
+ */
+typedef enum
+{
+	SASL_COMPLETE = 0,
+	SASL_FAILED,
+	SASL_CONTINUE,
+} SASLStatus;
+
 /*
  * Frontend SASL mechanism callbacks.
  *
@@ -59,7 +70,8 @@ typedef struct pg_fe_sasl_mech
 	 * Produces a client response to a server challenge.  As a special case
 	 * for client-first SASL mechanisms, exchange() is called with a NULL
 	 * server response once at the start of the authentication exchange to
-	 * generate an initial response.
+	 * generate an initial response. Returns a SASLStatus indicating the
+	 * state and status of the exchange.
 	 *
 	 * Input parameters:
 	 *
@@ -79,22 +91,23 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	output:	   A malloc'd buffer containing the client's response to
 	 *			   the server (can be empty), or NULL if the exchange should
-	 *			   be aborted.  (*success should be set to false in the
+	 *			   be aborted.  (The callback should return SASL_FAILED in the
 	 *			   latter case.)
 	 *
 	 *	outputlen: The length (0 or higher) of the client response buffer,
 	 *			   ignored if output is NULL.
 	 *
-	 *	done:      Set to true if the SASL exchange should not continue,
-	 *			   because the exchange is either complete or failed
+	 * Return value:
 	 *
-	 *	success:   Set to true if the SASL exchange completed successfully.
-	 *			   Ignored if *done is false.
+	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
+	 *					Additional server challenge is expected
+	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
+	 *	SASL_FAILED:	The exchange has failed and the connection should be
+	 *					dropped.
 	 *--------
 	 */
-	void		(*exchange) (void *state, char *input, int inputlen,
-							 char **output, int *outputlen,
-							 bool *done, bool *success);
+	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+							 char **output, int *outputlen);
 
 	/*--------
 	 * channel_bound()
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 04f0e5713d..0bb820e0d9 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,9 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static void scram_exchange(void *opaq, char *input, int inputlen,
-						   char **output, int *outputlen,
-						   bool *done, bool *success);
+static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
 
@@ -202,17 +201,14 @@ scram_free(void *opaq)
 /*
  * Exchange a SCRAM message with backend.
  */
-static void
+static SASLStatus
 scram_exchange(void *opaq, char *input, int inputlen,
-			   char **output, int *outputlen,
-			   bool *done, bool *success)
+			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
 	PGconn	   *conn = state->conn;
 	const char *errstr = NULL;
 
-	*done = false;
-	*success = false;
 	*output = NULL;
 	*outputlen = 0;
 
@@ -225,12 +221,12 @@ scram_exchange(void *opaq, char *input, int inputlen,
 		if (inputlen == 0)
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (empty message)");
-			goto error;
+			return SASL_FAILED;
 		}
 		if (inputlen != strlen(input))
 		{
 			libpq_append_conn_error(conn, "malformed SCRAM message (length mismatch)");
-			goto error;
+			return SASL_FAILED;
 		}
 	}
 
@@ -240,61 +236,59 @@ scram_exchange(void *opaq, char *input, int inputlen,
 			/* Begin the SCRAM handshake, by sending client nonce */
 			*output = build_client_first_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_NONCE_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_NONCE_SENT:
 			/* Receive salt and server nonce, send response. */
 			if (!read_server_first_message(state, input))
-				goto error;
+				return SASL_FAILED;
 
 			*output = build_client_final_message(state);
 			if (*output == NULL)
-				goto error;
+				return SASL_FAILED;
 
 			*outputlen = strlen(*output);
-			*done = false;
 			state->state = FE_SCRAM_PROOF_SENT;
-			break;
+			return SASL_CONTINUE;
 
 		case FE_SCRAM_PROOF_SENT:
-			/* Receive server signature */
-			if (!read_server_final_message(state, input))
-				goto error;
-
-			/*
-			 * Verify server signature, to make sure we're talking to the
-			 * genuine server.
-			 */
-			if (!verify_server_signature(state, success, &errstr))
-			{
-				libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
-				goto error;
-			}
-
-			if (!*success)
 			{
-				libpq_append_conn_error(conn, "incorrect server signature");
+				bool		match;
+
+				/* Receive server signature */
+				if (!read_server_final_message(state, input))
+					return SASL_FAILED;
+
+				/*
+				 * Verify server signature, to make sure we're talking to the
+				 * genuine server.
+				 */
+				if (!verify_server_signature(state, &match, &errstr))
+				{
+					libpq_append_conn_error(conn, "could not verify server signature: %s", errstr);
+					return SASL_FAILED;
+				}
+
+				if (!match)
+				{
+					libpq_append_conn_error(conn, "incorrect server signature");
+				}
+				state->state = FE_SCRAM_FINISHED;
+				state->conn->client_finished_auth = true;
+				return match ? SASL_COMPLETE : SASL_FAILED;
 			}
-			*done = true;
-			state->state = FE_SCRAM_FINISHED;
-			state->conn->client_finished_auth = true;
-			break;
 
 		default:
 			/* shouldn't happen */
 			libpq_append_conn_error(conn, "invalid SCRAM exchange state");
-			goto error;
+			break;
 	}
-	return;
 
-error:
-	*done = true;
-	*success = false;
+	return SASL_FAILED;
 }
 
 /*
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 1a8e4f6fbf..cf8af4c62e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -423,11 +423,10 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
-	bool		done;
-	bool		success;
 	const char *selected_mechanism;
 	PQExpBufferData mechanism_buf;
 	char	   *password;
+	SASLStatus	status;
 
 	initPQExpBuffer(&mechanism_buf);
 
@@ -575,12 +574,11 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto oom_error;
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	conn->sasl->exchange(conn->sasl_state,
-						 NULL, -1,
-						 &initialresponse, &initialresponselen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  NULL, -1,
+								  &initialresponse, &initialresponselen);
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		goto error;
 
 	/*
@@ -629,10 +627,9 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 {
 	char	   *output;
 	int			outputlen;
-	bool		done;
-	bool		success;
 	int			res;
 	char	   *challenge;
+	SASLStatus	status;
 
 	/* Read the SASL challenge from the AuthenticationSASLContinue message. */
 	challenge = malloc(payloadlen + 1);
@@ -651,13 +648,12 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	conn->sasl->exchange(conn->sasl_state,
-						 challenge, payloadlen,
-						 &output, &outputlen,
-						 &done, &success);
+	status = conn->sasl->exchange(conn->sasl_state,
+								  challenge, payloadlen,
+								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
-	if (final && !done)
+	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
 			free(output);
@@ -670,7 +666,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	 * If the exchange is not completed yet, we need to make sure that the
 	 * SASL mechanism has generated a message to send back.
 	 */
-	if (output == NULL && !done)
+	if (output == NULL && status == SASL_CONTINUE)
 	{
 		libpq_append_conn_error(conn, "no client response found after SASL exchange success");
 		return STATUS_ERROR;
@@ -692,7 +688,7 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 			return STATUS_ERROR;
 	}
 
-	if (done && !success)
+	if (status == SASL_FAILED)
 		return STATUS_ERROR;
 
 	return STATUS_OK;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 95ae7845d8..86d68f7a7a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2423,6 +2423,7 @@ RuleLock
 RuleStmt
 RunningTransactions
 RunningTransactionsData
+SASLStatus
 SC_HANDLE
 SECURITY_ATTRIBUTES
 SECURITY_STATUS
-- 
2.34.1

v20-0005-backend-add-OAUTHBEARER-SASL-mechanism.patch
From 5488ac25f5396017c4f1a3c78bc4f807b558e83d Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v20 5/9] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading the command's stdout until the command has read the
   token in full, so if the command prints anything first and fills the
   pipe buffer, the backend will deadlock and eventually time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On success, the command may then exit with a zero status code. By
      default, the server will then check that the identity string
      matches the role being used (or matches a usermap entry, if one is
      in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise for the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.

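To make this contract concrete, here is a minimal validator sketched in Python. Everything specific in it (the token allowlist, the identity mapping) is a hypothetical placeholder; a real validator would instead verify a JWT signature or call the issuer's token introspection endpoint:

```python
#!/usr/bin/env python3
# Hypothetical oauth_validator_command sketch. The allowlist below is a
# placeholder standing in for real, issuer-specific token validation.
import os
import sys

def read_token(fd):
    # Step 1: drain the token from the inherited pipe before writing
    # anything, to avoid deadlocking against the server.
    chunks = []
    while True:
        chunk = os.read(fd, 4096)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks).decode("ascii")

def lookup_identity(token):
    # Steps 2 and 3a: validate the token and map it to a trusted
    # identifier string (placeholder logic only).
    known = {"valid-token-for-alice": "alice@example.org"}
    return known.get(token)

if __name__ == "__main__":
    token = read_token(int(sys.argv[1]))  # fd passed via the %f specifier
    identity = lookup_identity(token)
    if identity is None:
        sys.exit(1)  # untrusted token: non-zero exit, nothing on stdout
    print(identity)  # authenticated ID on stdout, newline-terminated
```

Such a script could then be wired up with something like oauth_validator_command = '/usr/local/bin/validate.py %f' (path hypothetical).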
The oauth method supports the following HBA options (note that the
first two are required rather than optional, since we have no way of
choosing sensible defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
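Putting the required options together, a pg_hba.conf entry using this
method might look like the following (the database, address range,
issuer URL, and scope here are illustrative placeholders):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://accounts.example.com" scope="openid email"
```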

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/backend/libpq/Makefile          |   1 +
 src/backend/libpq/auth-oauth.c      | 883 ++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c       |  10 +-
 src/backend/libpq/auth-scram.c      |   4 +-
 src/backend/libpq/auth.c            |  26 +-
 src/backend/libpq/hba.c             |  31 +-
 src/backend/libpq/meson.build       |   1 +
 src/backend/utils/misc/guc_tables.c |  12 +
 src/include/libpq/auth.h            |  17 +
 src/include/libpq/hba.h             |   6 +-
 src/include/libpq/oauth.h           |  24 +
 src/include/libpq/sasl.h            |  11 +
 src/tools/pgindent/typedefs.list    |   1 +
 13 files changed, 995 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..16596c089a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,883 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *oauth_validator_command;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool set_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character %s.",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: note that escaping here should be belt-and-suspenders, since
+	 * escapable characters aren't valid in either the issuer URI or the scope
+	 * list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char *const b64_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*-----
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
+	 * it's pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information
+	 * about the sensitive Bearer token back to the client; log at COMMERROR
+	 * instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!MyClientConnectionInfo.authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name,
+						MyClientConnectionInfo.authn_id, false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = {0};
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*------
+	 * Since popen() is unidirectional, open up a pipe for the other
+	 * direction. Use CLOEXEC to ensure that our write end doesn't
+	 * accidentally get copied into child processes, which would prevent us
+	 * from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open the potential of process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe(pipefd);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	if (!set_cloexec(wfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*----------
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+
+					/*
+					 * TODO: decide how this string should be escaped. The
+					 * role is controlled by the client, so if we don't escape
+					 * it, command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some
+					 * other way. For this proof of concept, just be
+					 * incredibly strict about the characters that are allowed
+					 * in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "r");
+	if (!fh)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not open pipe to OAuth validator: %m")));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*-----
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int			rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char	   *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+set_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char *const allowed =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-_./:";
+	size_t		span;
+
+	Assert(username && username[0]);	/* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 4161959914..486a34e719 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index d506c3c0b7..e592bedf9f 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 45013582a7..d28209901f 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4670,6 +4671,17 @@ struct config_string ConfigureNamesString[] =
 		check_debug_io_direct, assign_debug_io_direct, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..5edab3b25a
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 291c80e3ff..8119523cd9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3543,6 +3543,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
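The server-side patch ends here. As far as this excerpt shows, the validator contract is: the backend runs oauth_validator_command, reads a single line from the command's stdout and records it via set_authn_id(), and requires the command to exit cleanly (check_exit). Below is a minimal validator sketch under those constraints, written as a testable function rather than a standalone program. It additionally assumes the bearer token is delivered on the command's standard input (suggested by the write end of the pipe, but not confirmed in this excerpt); the token value and identity are illustrative.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical oauth_validator_command sketch; NOT the real protocol.
 * Shown in the patch: the first stdout line becomes the authn_id, and a
 * nonzero exit fails authentication. Assumed here: token arrives on stdin.
 */
static int
validate_token(FILE *in, FILE *out)
{
	char		token[4096];
	size_t		len;

	if (fgets(token, sizeof(token), in) == NULL)
		return 1;				/* no token: fail the exchange */

	len = strlen(token);
	if (len > 0 && token[len - 1] == '\n')
		token[len - 1] = '\0';

	/*
	 * A real validator would verify the token against the issuer here,
	 * e.g. via RFC 7662 token introspection, not a string comparison.
	 */
	if (strcmp(token, "demo-valid-token") == 0)
	{
		fprintf(out, "alice@example.org\n");	/* recorded via set_authn_id() */
		return 0;
	}

	return 1;					/* nonzero exit => authentication fails */
}
```

In a real deployment this logic would live in a standalone executable named by the oauth_validator_command GUC. A matching pg_hba.conf line might look like `host all all samenet oauth issuer="https://oauth.example.org" scope="openid"` (option names taken from the hba.c hunk above; values illustrative), and with trust_validator_authz unset, the printed identity is then checked against the connection's role through the usual pg_ident user mapping.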

Attachment: v20-0001-common-jsonapi-support-FRONTEND-clients.patch (application/octet-stream)
From 231c6fb165956553ebe42ba187a4c287151e5203 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v20 1/9] common/jsonapi: support FRONTEND clients

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed. json_errdetail() now allocates its error message inside
memory owned by the JsonLexContext, so clients don't need to worry about
freeing it.

We can now partially revert b44669b2ca, now that json_errdetail() works
correctly.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_verifybackup/t/005_bad_manifest.pl |   3 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 268 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   2 +-
 src/common/stringinfo.c                       |   7 +
 src/include/common/jsonapi.h                  |  18 +-
 src/include/lib/stringinfo.h                  |   2 +
 8 files changed, 225 insertions(+), 85 deletions(-)

diff --git a/src/bin/pg_verifybackup/t/005_bad_manifest.pl b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
index e278ccea5a..e2a297930e 100644
--- a/src/bin/pg_verifybackup/t/005_bad_manifest.pl
+++ b/src/bin/pg_verifybackup/t/005_bad_manifest.pl
@@ -13,7 +13,8 @@ use Test::More;
 my $tempdir = PostgreSQL::Test::Utils::tempdir;
 
 test_bad_manifest('input string ended unexpectedly',
-	qr/could not parse backup manifest: parsing failed/, <<EOM);
+	qr/could not parse backup manifest: The input string ended unexpectedly/,
+	<<EOM);
 {
 EOM
 
diff --git a/src/common/Makefile b/src/common/Makefile
index 2ba5069dca..bbb5c3ab11 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 32931ded82..2d1f30353a 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,43 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendBinaryStrVal  appendBinaryPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+#define destroyStrVal		destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendBinaryStrVal  appendBinaryStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+#define destroyStrVal		destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -167,9 +200,16 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+	lex->errormsg = NULL;
 
 	return lex;
 }
@@ -182,13 +222,18 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-	{
-		pfree(lex->strval->data);
-		pfree(lex->strval);
-	}
+		destroyStrVal(lex->strval);
+
+	if (lex->errormsg)
+		destroyStrVal(lex->errormsg);
+
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -254,7 +299,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -316,14 +361,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -357,8 +409,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -414,6 +470,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -762,8 +823,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -800,7 +868,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -857,19 +925,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -879,22 +947,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -929,7 +997,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -953,8 +1021,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -970,6 +1038,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1145,72 +1218,93 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 	return JSON_SUCCESS;		/* silence stupider compilers */
 }
 
-
-#ifndef FRONTEND
-/*
- * Extract the current token from a lexing context, for error reporting.
- */
-static char *
-extract_token(JsonLexContext *lex)
-{
-	int			toklen = lex->token_terminator - lex->token_start;
-	char	   *token = palloc(toklen + 1);
-
-	memcpy(token, lex->token_start, toklen);
-	token[toklen] = '\0';
-	return token;
-}
-
 /*
  * Construct an (already translated) detail message for a JSON error.
  *
- * Note that the error message generated by this routine may not be
- * palloc'd, making it unsafe for frontend code as there is no way to
- * know if this can be safely pfree'd or not.
+ * The returned allocation is either static or owned by the JsonLexContext and
+ * should not be freed.
  */
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	int			toklen = lex->token_terminator - lex->token_start;
+
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
+	if (lex->errormsg)
+		resetStrVal(lex->errormsg);
+	else
+		lex->errormsg = createStrVal();
+
 	switch (error)
 	{
 		case JSON_SUCCESS:
 			/* fall through to the error code after switch */
 			break;
 		case JSON_ESCAPING_INVALID:
-			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Escape sequence \"\\%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_ESCAPING_REQUIRED:
-			return psprintf(_("Character with value 0x%02x must be escaped."),
-							(unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
+			break;
 		case JSON_EXPECTED_END:
-			return psprintf(_("Expected end of input, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected end of input, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_FIRST:
-			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected array element or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_ARRAY_NEXT:
-			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_COLON:
-			return psprintf(_("Expected \":\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \":\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_JSON:
-			return psprintf(_("Expected JSON value, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected JSON value, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_MORE:
 			return _("The input string ended unexpectedly.");
 		case JSON_EXPECTED_OBJECT_FIRST:
-			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_OBJECT_NEXT:
-			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_EXPECTED_STRING:
-			return psprintf(_("Expected string, but found \"%s\"."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Expected string, but found \"%.*s\"."),
+						 toklen, lex->token_start);
+			break;
 		case JSON_INVALID_TOKEN:
-			return psprintf(_("Token \"%s\" is invalid."),
-							extract_token(lex));
+			appendStrVal(lex->errormsg,
+						 _("Token \"%.*s\" is invalid."),
+						 toklen, lex->token_start);
+			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1219,9 +1313,19 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			/* note: this case is only reachable in frontend not backend */
 			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
 		case JSON_UNICODE_UNTRANSLATABLE:
-			/* note: this case is only reachable in backend not frontend */
+
+			/*
+			 * note: this case is only reachable in backend not frontend.
+			 * #ifdef it away so the frontend doesn't try to link against
+			 * backend functionality.
+			 */
+#ifndef FRONTEND
 			return psprintf(_("Unicode escape value could not be translated to the server's encoding %s."),
 							GetDatabaseEncodingName());
+#else
+			Assert(false);
+			break;
+#endif
 		case JSON_UNICODE_HIGH_SURROGATE:
 			return _("Unicode high surrogate must not follow a high surrogate.");
 		case JSON_UNICODE_LOW_SURROGATE:
@@ -1231,12 +1335,22 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			break;
 	}
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	elog(ERROR, "unexpected json parse error type: %d", (int) error);
-	return NULL;
-}
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && !lex->errormsg->data[0])
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d", (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
 #endif
+
+	return lex->errormsg->data;
+}
diff --git a/src/common/meson.build b/src/common/meson.build
index 4eb16024cb..5d2c7abaa6 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -124,13 +124,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -139,6 +144,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -156,7 +162,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -169,7 +174,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 92a97714f3..62d93989be 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -147,7 +147,7 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 	/* Run the actual JSON parser. */
 	json_error = pg_parse_json(lex, &sem);
 	if (json_error != JSON_SUCCESS)
-		json_manifest_parse_failure(context, "parsing failed");
+		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
 	if (parse.state != JM_EXPECT_EOF)
 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
 
diff --git a/src/common/stringinfo.c b/src/common/stringinfo.c
index c61d5c58f3..09419f6042 100644
--- a/src/common/stringinfo.c
+++ b/src/common/stringinfo.c
@@ -350,3 +350,10 @@ enlargeStringInfo(StringInfo str, int needed)
 
 	str->maxlen = newlen;
 }
+
+void
+destroyStringInfo(StringInfo str)
+{
+	pfree(str->data);
+	pfree(str);
+}
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 02943cdad8..75d444c17a 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -57,6 +56,17 @@ typedef enum JsonParseErrorType
 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -88,7 +98,9 @@ typedef struct JsonLexContext
 	bits32		flags;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/include/lib/stringinfo.h b/src/include/lib/stringinfo.h
index 2cd636b01c..64ec6419af 100644
--- a/src/include/lib/stringinfo.h
+++ b/src/include/lib/stringinfo.h
@@ -233,4 +233,6 @@ extern void appendBinaryStringInfoNT(StringInfo str,
  */
 extern void enlargeStringInfo(StringInfo str, int needed);
 
+
+extern void destroyStringInfo(StringInfo str);
 #endif							/* STRINGINFO_H */
-- 
2.34.1

v20-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 2a55d9c8062f31b3d902900301fd5f1fe8db1bb1 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v20 4/9] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt, which
  is printed to standard error, when using the built-in device
  authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
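As a sketch of the delegation contract above (every name here is a
hypothetical stand-in, not the patch's actual API; see libpq-fe.h in the
patch for the real declarations around PQsetAuthDataHook() and
PQgetAuthDataHook()):

```c
/*
 * Hypothetical sketch only: the enum, typedef, and function names are
 * stand-ins for the patch's real hook types.
 */
#include <stddef.h>

typedef enum
{
	AUTHDATA_PROMPT_OAUTH_DEVICE,
	AUTHDATA_OAUTH_BEARER_TOKEN,
} AuthDataType;

typedef int (*AuthDataHook) (AuthDataType type, void *data);

static AuthDataHook prev_hook;	/* saved from the chain before installing
								 * our own hook */

static int
my_auth_data_hook(AuthDataType type, void *data)
{
	if (type != AUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		/* Not ours; delegate to the previous hook in the chain. */
		return prev_hook ? prev_hook(type, data) : 0;
	}

	/* ...display the verification URL and user code from *data here... */
	return 1;					/* > 0: handled successfully */
}
```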

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 configure                                 |  110 ++
 configure.ac                              |   28 +
 meson.build                               |   29 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   10 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 1982 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 +++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +-
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 23 files changed, 3200 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 46859a4244..2ccfb01b7a 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -858,6 +859,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8485,6 +8488,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13037,6 +13086,56 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14062,6 +14161,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 88b75a7696..4a80c97d5b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1443,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1638,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index a198eca25d..45b20d11c1 100644
--- a/meson.build
+++ b/meson.build
@@ -830,6 +830,33 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2834,6 +2861,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3435,6 +3463,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 249ecc5ffd..3248b9cc1c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b3f8c24e0..79b3647834 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07e73567dc..a5e6f99ba4 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -243,6 +243,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -711,6 +714,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fe2af575c5..2618c293af 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 088592deb1..0f8f5e3125 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -193,3 +193,6 @@ PQsendClosePrepared       190
 PQsendClosePortal         191
 PQchangePassword          192
 PQsendPipelineSync        193
+PQsetAuthDataHook         194
+PQgetAuthDataHook         195
+PQdefaultAuthDataHook     196
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..0504f96e4e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1982 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
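The interval_str/interval split above leaves numeric parsing to a later
step. For context, RFC 8628 section 3.2 makes "interval" optional, with a
default of 5 seconds between token requests when it is absent; a parse
helper along these lines could apply that default (function name
hypothetical, not part of the patch):

```c
/* Illustrative only: apply the RFC 8628 default when "interval" is
 * missing or unusable. */
#include <stdlib.h>

static int
parse_interval(const char *interval_str)
{
	char	   *end;
	long		val;

	if (interval_str == NULL)
		return 5;				/* RFC 8628 default */

	val = strtol(interval_str, &end, 10);
	if (*end != '\0' || val < 1)
		return 5;				/* fall back rather than poll too fast */

	return (int) val;
}
```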
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
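A minimal model of how the three error pieces described in the struct
comment combine into one message, with errctx and curl_err omitted when
absent (function name hypothetical; the "connection to server ..." prefix
is added elsewhere):

```c
/* Sketch of the errctx/errbuf/curl_err assembly described above. */
#include <stdio.h>
#include <string.h>

static void
assemble_error(char *out, size_t outlen,
			   const char *errctx, const char *errbuf, const char *curl_err)
{
	out[0] = '\0';
	if (errctx)
		snprintf(out + strlen(out), outlen - strlen(out), "%s: ", errctx);
	snprintf(out + strlen(out), outlen - strlen(out), "%s", errbuf);
	if (curl_err && curl_err[0])
		snprintf(out + strlen(out), outlen - strlen(out), " (%s)", curl_err);
}
```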
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* TODO: handle default interval of 5 seconds */
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up actx->mux, the altsock that PQconnectPoll clients will select()
+ * on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue that multiplexes the other
+ * descriptors. When epoll is in use, a timerfd is always part of the set;
+ * it's simply disarmed when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#elif defined(HAVE_SYS_EVENT_H)
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#elif defined(HAVE_SYS_EVENT_H)
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. With epoll, rather than continually adding and removing
+ * the timer, we keep the timerfd in the set at all times and just disarm it
+ * when it's not needed; with kqueue, the timer event is added and deleted as
+ * required.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return -1;
+	}
+#elif defined(HAVE_SYS_EVENT_H)
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data	*curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (!actx->headers)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/* Returning a short count signals an error to cURL. */
+	if (PQExpBufferBroken(resp))
+		return 0;
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports the Device Authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by finish_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+			/* FALLTHROUGH */
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending") != 0
+									&& strcmp(err->error, "slow_down") != 0))
+				{
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					if (err->error)
+						appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+					else
+						appendPQExpBufferStr(&actx->errbuf,
+											 "(no error code received)");
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
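As an aside for reviewers unfamiliar with RFC 8628: the slow_down handling above is the entirety of the spec's backoff rule. A standalone sketch (`next_poll_interval` is a hypothetical helper for illustration, not a function in this patch):

```c
#include <string.h>

/*
 * Compute the next device-flow polling interval per RFC 8628, Sec. 3.5:
 * a "slow_down" error permanently adds five seconds to the current
 * interval; "authorization_pending" (or no error) leaves it unchanged.
 */
static int
next_poll_interval(int interval, const char *error)
{
	if (error && strcmp(error, "slow_down") == 0)
		interval += 5;
	return interval;
}
```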
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..66ee8ff076
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
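For reference, the server challenge that handle_oauth_sasl_error() parses looks roughly like this (per RFC 7628, Sec. 3.2.2; the values below are illustrative, not from a real deployment):

```json
{
  "status": "invalid_token",
  "scope": "openid",
  "openid-configuration": "https://issuer.example.com/.well-known/openid-configuration"
}
```

Only these three fields are inspected; "status" is required, and "invalid_token" is the only status we'll retry on.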
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 81ec08485d..9cd5c8cfb1 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -955,12 +997,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1118,7 +1166,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1135,7 +1183,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1451,3 +1500,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d4e10a0c4f..15ceb73d01 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -359,6 +359,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -618,6 +635,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_err_msg = NULL;
 	conn->be_pid = 0;
 	conn->be_key = 0;
+	/* conn->oauth_want_retry = false; TODO */
 }
 
 
@@ -2536,6 +2554,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3517,6 +3536,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3672,6 +3692,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -3753,7 +3783,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3786,6 +3826,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4285,6 +4360,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4400,6 +4476,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6868,6 +6949,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f2fc78a481..663b1c1acf 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1039,10 +1039,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1059,7 +1062,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index defc415fa3..d095351c66 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -78,7 +80,9 @@ typedef enum
 	CONNECTION_CONSUME,			/* Consuming any extra messages. */
 	CONNECTION_GSS_STARTUP,		/* Negotiating GSSAPI. */
 	CONNECTION_CHECK_TARGET,	/* Checking target server properties. */
-	CONNECTION_CHECK_STANDBY	/* Checking if server is in standby mode. */
+	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -160,6 +164,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -658,10 +669,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 82c18f870d..cf26c693e3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -351,6 +351,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -409,6 +411,15 @@ struct pg_conn
 	char	   *require_auth;	/* name of the expected auth method */
 	char	   *load_balance_hosts; /* load balance over hosts */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -477,6 +488,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index be6fadaea2..0d4b7ac17d 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index b0f4178b3d..f803c1200b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -231,6 +231,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86d68f7a7a..291c80e3ff 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -353,6 +354,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1653,6 +1656,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1718,6 +1722,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1877,11 +1882,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3343,6 +3351,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

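As a usage illustration of the client-side hook API in the patch above, here is a minimal sketch of an application-defined auth-data hook that intercepts the device-authorization prompt. The typedefs are stand-ins copied from the patch's libpq-fe.h additions so the fragment compiles on its own; the hook name and the prompt values are hypothetical, and with a patched libpq the hook would instead be installed via PQsetAuthDataHook() before connecting.

```c
#include <stdio.h>

/*
 * Stand-in declarations mirroring this patch's additions to libpq-fe.h, so
 * the sketch compiles without a patched libpq. A real application would get
 * these from <libpq-fe.h>.
 */
typedef struct pg_conn PGconn;	/* opaque */

typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
									 * URL */
	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
} PGAuthData;

typedef struct _PQpromptOAuthDevice
{
	const char *verification_uri;	/* verification URI to visit */
	const char *user_code;		/* user code to enter */
} PQpromptOAuthDevice;

/*
 * Hypothetical hook: replaces libpq's default device prompt with an
 * application-specific one. A nonzero return value means the event was
 * handled; zero defers to libpq's default behavior.
 */
static int
my_auth_data_hook(PGAuthData type, PGconn *conn, void *data)
{
	(void) conn;				/* unused in this sketch */

	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		const PQpromptOAuthDevice *prompt = data;

		fprintf(stderr, "Please visit %s and enter the code: %s\n",
				prompt->verification_uri, prompt->user_code);
		return 1;				/* handled */
	}

	return 0;					/* let libpq handle everything else */
}
```

With a patched libpq, the application would install this once at startup, e.g. `PQsetAuthDataHook(my_auth_data_hook);`, before opening any connections; bearer-token retrieval (PQAUTHDATA_OAUTH_BEARER_TOKEN) would be handled analogously by filling in a PQoauthBearerRequest.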
Attachment: v20-0006-Introduce-OAuth-validator-libraries.patch (application/octet-stream)
From fdbad1976a78d179b104138c31ac106e20338b0f Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 21 Feb 2024 17:04:26 +0100
Subject: [PATCH v20 6/9] Introduce OAuth validator libraries

This replaces the server-side validation code with a module API
for loading extension libraries that validate bearer tokens. A lot
of code is left to be written.

Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
---
 src/backend/libpq/auth-oauth.c                | 431 +++++-------------
 src/backend/utils/misc/guc_tables.c           |   6 +-
 src/bin/pg_combinebackup/Makefile             |   2 +-
 src/common/Makefile                           |   2 +-
 src/include/libpq/oauth.h                     |  29 +-
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  19 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  33 ++
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  78 ++++
 src/test/modules/oauth_validator/validator.c  |  82 ++++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 183 ++++++++
 src/tools/pgindent/typedefs.list              |   2 +
 16 files changed, 561 insertions(+), 332 deletions(-)
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 16596c089a..024f304e4d 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -6,7 +6,7 @@
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/libpq/auth-oauth.c
@@ -19,22 +19,30 @@
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "storage/ipc.h"
 #include "utils/json.h"
 
 /* GUC */
-char	   *oauth_validator_command;
+char	   *OAuthValidatorLibrary = "";
 
 static void oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
 static int	oauth_exchange(void *opaq, const char *input, int inputlen,
 						   char **output, int *outputlen, const char **logdetail);
 
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
 	oauth_get_mechanisms,
@@ -63,11 +71,7 @@ struct oauth_ctx
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, const char **logdetail);
-static bool run_validator_command(Port *port, const char *token);
-static bool check_exit(FILE **fh, const char *command);
-static bool set_cloexec(int fd);
-static bool username_ok_for_shell(const char *username);
+static bool validate(Port *port, const char *auth);
 
 #define KVSEP 0x01
 #define AUTH_KEY "auth"
@@ -100,6 +104,8 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	ctx->issuer = port->hba->oauth_issuer;
 	ctx->scope = port->hba->oauth_scope;
 
+	load_validator_library();
+
 	return ctx;
 }
 
@@ -250,7 +256,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
 
-	if (!validate(ctx->port, auth, logdetail))
+	if (!validate(ctx->port, auth))
 	{
 		generate_error_response(ctx, output, outputlen);
 
@@ -489,70 +495,73 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	*outputlen = buf.len;
 }
 
-static bool
-validate(Port *port, const char *auth, const char **logdetail)
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
 {
-	static const char *const b64_set =
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
 		"abcdefghijklmnopqrstuvwxyz"
 		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 		"0123456789-._~+/";
 
-	const char *token;
-	size_t		span;
-	int			ret;
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
 
-	/* TODO: handle logdetail when the test framework can check it */
-
-	/*-----
-	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
-	 * 2.1:
-	 *
-	 *      b64token    = 1*( ALPHA / DIGIT /
-	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
-	 *      credentials = "Bearer" 1*SP b64token
-	 *
-	 * The "credentials" construction is what we receive in our auth value.
-	 *
-	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
-	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
-	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
-	 * it's pointed out in RFC 7628 Sec. 4.)
-	 *
-	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
-	 */
-	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
-		return false;
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
 
 	/* Pull the bearer token out of the auth value. */
-	token = auth + strlen(BEARER_SCHEME);
+	token = header + strlen(BEARER_SCHEME);
 
 	/* Swallow any additional spaces. */
 	while (*token == ' ')
 		token++;
 
-	/*
-	 * Before invoking the validator command, sanity-check the token format to
-	 * avoid any injection attacks later in the chain. Invalid formats are
-	 * technically a protocol violation, but don't reflect any information
-	 * about the sensitive Bearer token back to the client; log at COMMERROR
-	 * instead.
-	 */
-
 	/* Tokens must not be empty. */
 	if (!*token)
 	{
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token"),
 				 errdetail("Bearer token is empty.")));
-		return false;
+		return NULL;
 	}
 
 	/*
 	 * Make sure the token contains only allowed characters. Tokens may end
 	 * with any number of '=' characters.
 	 */
-	span = strspn(token, b64_set);
+	span = strspn(token, b64token_allowed_set);
 	while (token[span] == '=')
 		span++;
 
@@ -565,15 +574,35 @@ validate(Port *port, const char *auth, const char **logdetail)
 		 */
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token"),
 				 errdetail("Bearer token is not in the correct format.")));
-		return false;
+		return NULL;
 	}
 
-	/* Have the validator check the token. */
-	if (!run_validator_command(port, token))
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
 		return false;
 
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
 	if (port->hba->oauth_skip_usermap)
 	{
 		/*
@@ -586,7 +615,7 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Make sure the validator authenticated the user. */
-	if (!MyClientConnectionInfo.authn_id)
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
 		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
@@ -596,288 +625,42 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Finally, check the user map. */
-	ret = check_usermap(port->hba->usermap, port->user_name,
-						MyClientConnectionInfo.authn_id, false);
-	return (ret == STATUS_OK);
-}
-
-static bool
-run_validator_command(Port *port, const char *token)
-{
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = {0};
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*------
-	 * Since popen() is unidirectional, open up a pipe for the other
-	 * direction. Use CLOEXEC to ensure that our write end doesn't
-	 * accidentally get copied into child processes, which would prevent us
-	 * from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe(pipefd);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
-		return false;
-	}
-
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	if (!set_cloexec(wfd))
-	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*----------
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-
-					/*
-					 * TODO: decide how this string should be escaped. The
-					 * role is controlled by the client, so if we don't escape
-					 * it, command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some
-					 * other way. For this proof of concept, just be
-					 * incredibly strict about the characters that are allowed
-					 * in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "r");
-	if (!fh)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("opening pipe to OAuth validator: %m")));
-		goto cleanup;
-	}
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*-----
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
-	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
-	}
-
-	if (command.data)
-		pfree(command.data);
-
-	return success;
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
 }
 
-static bool
-check_exit(FILE **fh, const char *command)
+static void
+load_validator_library(void)
 {
-	int			rc;
-
-	rc = ClosePipeStream(*fh);
-	*fh = NULL;
-
-	if (rc == -1)
-	{
-		/* pclose() itself failed. */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not close pipe to command \"%s\": %m",
-						command)));
-	}
-	else if (rc != 0)
-	{
-		char	   *reason = wait_result_to_str(rc);
+	OAuthValidatorModuleInit validator_init;
 
-		ereport(COMMERROR,
-				(errmsg("failed to execute command \"%s\": %s",
-						command, reason)));
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
 
-		pfree(reason);
-	}
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
 
-	return (rc == 0);
-}
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
 
-static bool
-set_cloexec(int fd)
-{
-	int			flags;
-	int			rc;
+	ValidatorCallbacks = (*validator_init) ();
 
-	flags = fcntl(fd, F_GETFD);
-	if (flags == -1)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not get fd flags for child pipe: %m")));
-		return false;
-	}
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
 
-	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
-		return false;
-	}
-
-	return true;
+	before_shmem_exit(shutdown_validator_library, 0);
 }
 
-/*
- * XXX This should go away eventually and be replaced with either a proper
- * escape or a different strategy for communication with the validator command.
- */
-static bool
-username_ok_for_shell(const char *username)
+static void
+shutdown_validator_library(int code, Datum arg)
 {
-	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
-	static const char *const allowed =
-		"abcdefghijklmnopqrstuvwxyz"
-		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-		"0123456789-_./:";
-	size_t		span;
-
-	Assert(username && username[0]);	/* should have already been checked */
-
-	span = strspn(username, allowed);
-	if (username[span] != '\0')
-	{
-		ereport(COMMERROR,
-				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
-		return false;
-	}
-
-	return true;
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
 }
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d28209901f..99bb89d54e 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -4672,12 +4672,12 @@ struct config_string ConfigureNamesString[] =
 	},
 
 	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
 			NULL,
 			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
 		},
-		&oauth_validator_command,
+		&OAuthValidatorLibrary,
 		"",
 		NULL, NULL, NULL
 	},
diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..4f24b1aff6 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -31,7 +31,7 @@ OBJS = \
 all: pg_combinebackup
 
 pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/common/Makefile b/src/common/Makefile
index bbb5c3ab11..00e30e6bfe 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 5edab3b25a..6f98e84cc9 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -3,7 +3,7 @@
  * oauth.h
  *	  Interface to libpq/auth-oauth.c
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/oauth.h
@@ -16,7 +16,32 @@
 #include "libpq/libpq-be.h"
 #include "libpq/sasl.h"
 
-extern char *oauth_validator_command;
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
 /* Implementation */
 extern const pg_be_sasl_mech pg_be_oauth_mech;
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 8fbe742d38..dc54ce7189 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..1f874cd7f2
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,19 @@
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..d9c1d1d577
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..14c7778298
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,78 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+
+my $issuer = "127.0.0.1:18080";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test    oauth issuer="$issuer"           scope="openid postgres"
+local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->reload;
+
+my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+
+my $port = $webserver->port();
+
+is($port, 18080, "Port is 18080");
+
+$webserver->setup();
+$webserver->run();
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+my $user = "test";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234", role="$user"/,
+					 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="test" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+				  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+					 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="testalt" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..09a4bf61d2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+				state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 4fec417f6f..b291bbf8ee 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2302,6 +2302,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2345,7 +2350,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..5c195efb79
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,183 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+	my $port = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	$self->{'port'} = $port;
+
+	return $self;
+}
+
+sub setup
+{
+	my $self = shift;
+	my $tcp = getprotobyname('tcp');
+
+	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
+		or die "no socket";
+	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
+	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+
+	my $server_thread = threads->create(\&_listen, $self);
+	$server_thread->detach();
+}
+
+sub _listen
+{
+	my $self = shift;
+
+	listen($self->{'socket'}, SOMAXCONN) or die "failed to listen: $!";
+
+	while (1)
+	{
+		my $fh;
+		my %request;
+		my $remote = accept($fh, $self->{'socket'});
+		binmode $fh;
+
+		my ($method, $object, $prot) = split(/ /, <$fh>);
+		$request{'method'} = $method;
+		$request{'object'} = $object;
+		chomp($request{'object'});
+
+		local $/ = Socket::CRLF;
+		my $c = 0;
+		while(<$fh>)
+		{
+			chomp;
+			# Headers
+			if (/:/)
+			{
+				my ($field, $value) = split(/:/, $_, 2);
+				$value =~ s/^\s+//;
+				$request{'headers'}{lc $field} = $value;
+			}
+			# POST data
+			elsif (/^$/)
+			{
+				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
+					if defined $request{'headers'}{'content-length'};
+				last;
+			}
+		}
+
+		# Debug printing
+		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
+		# foreach my $h (keys(%{$request{'headers'}}))
+		#{
+		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
+		#}
+		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
+
+		my $alternate = 0;
+		if ($request{'object'} =~ qr|^/alternate(/.*)$|)
+		{
+			$alternate = 1;
+			$request{'object'} = $1;
+		}
+
+		if ($request{'object'} eq '/.well-known/openid-configuration')
+		{
+			my $issuer = "http://localhost:$self->{'port'}";
+			if ($alternate)
+			{
+				$issuer .= "/alternate";
+			}
+
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"issuer": "$issuer",
+				"token_endpoint": "$issuer/token",
+				"device_authorization_endpoint": "$issuer/authorize",
+				"response_types_supported": ["token"],
+				"subject_types_supported": ["public"],
+				"id_token_signing_alg_values_supported": ["RS256"],
+				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/authorize')
+		{
+			my $uri = "https://example.com/";
+			if ($alternate)
+			{
+				$uri = "https://example.org/";
+			}
+
+			print ": returning device_code\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"device_code": "postgres",
+				"user_code" : "postgresuser",
+				"interval" : 0,
+				"verification_uri" : "$uri",
+				"expires_in": 5
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/token')
+		{
+			my $token = "9243959234";
+			if ($alternate)
+			{
+				$token .= "-alt";
+			}
+
+			print ": returning token\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"access_token": "$token",
+				"token_type": "bearer"
+			}
+EOR
+		}
+		else
+		{
+			print ": returning default\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: text/html\r\n";
+			print $fh "\r\n";
+			print $fh "Ok\n";
+		}
+
+		close($fh);
+	}
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8119523cd9..8558f3b3e6 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1657,6 +1657,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -2980,6 +2981,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
-- 
2.34.1

Attachment: v20-0007-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From e1da97fb501d18d2e7557e47acde431233ad1342 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v20 7/9] Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test`, the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |   22 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  137 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1720 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 +++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |    9 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 +++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 +++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5440 insertions(+), 7 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3b5b54df58..2501743b31 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl load_balance python
 
 
 # What files to preserve in case tests fail
@@ -165,7 +165,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -177,6 +177,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -225,6 +226,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -237,6 +239,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -312,8 +315,11 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -368,6 +374,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -378,7 +386,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.32-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
@@ -676,8 +684,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/meson.build b/meson.build
index 45b20d11c1..8567355a25 100644
--- a/meson.build
+++ b/meson.build
@@ -3174,6 +3174,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3335,6 +3338,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..94f3620af3
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,137 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            self._pump_async(conn)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
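As an aside, the Hi() construction above is exactly PBKDF2 with HMAC-SHA-256 (RFC 5802 points out this equivalence), which gives a convenient stdlib cross-check for the helpers. A self-contained sketch, using only hashlib/hmac rather than the cryptography package:

```python
import hashlib
import hmac as hmac_mod


def hmac_256(key, data):
    # The HMAC(key, str) function from RFC 5802, Section 2.2.
    return hmac_mod.new(key, data, hashlib.sha256).digest()


def h_i(data, salt, i):
    # Hi(str, salt, i): U1 = HMAC(str, salt || INT(1)); Hi = U1 ^ U2 ^ ... ^ Ui.
    u = hmac_256(data, salt + b"\x00\x00\x00\x01")
    acc = u
    for _ in range(i - 1):
        u = hmac_256(data, u)
        acc = bytes(a ^ b for a, b in zip(acc, u))
    return acc


# Hi() is the first block of PBKDF2-HMAC-SHA-256, so the two must agree.
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```

This is why test suites sometimes skip reimplementing the iteration entirely and call `hashlib.pbkdf2_hmac()` directly.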
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq sends an empty SCRAM username; the startup packet's user is used instead
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..aee8ecdf5e
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1720 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial SASL
+    response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # split only on the first "="; token data may itself contain "="
+    assert key == b"auth"
+
+    return value
+
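For reference, the message being picked apart here is the OAUTHBEARER initial client response from RFC 7628: a GS2 header, then \x01-delimited kvpairs, closed by a double \x01. A minimal sketch of building and re-parsing one (the token value is made up):

```python
def build_initial_response(token):
    # GS2 header "n,," (no channel binding, no authzid), one auth kvpair,
    # then the terminating double \x01, per RFC 7628 Section 3.1.
    return b"n,," + b"\x01auth=Bearer " + token + b"\x01\x01"


def parse_auth_value(initial):
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"
    # Split only on the first "="; bearer token data may contain "=".
    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value


msg = build_initial_response(b"some-made-up-token")
assert parse_auth_value(msg) == b"Bearer some-made-up-token"
```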
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OIDC server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": "application/json"}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the value to return when the test provides no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept, openid_provider, asynchronous, retries, scope, secret, auth_data_cb
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
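The Basic credentials being checked here are just base64("client_id:secret") carried in an Authorization header, per RFC 6749, Sec. 2.3.1. A small illustrative helper (hypothetical name, not part of the suite):

```python
import base64


def basic_auth_header(client_id, secret):
    # HTTP Basic credentials: "Basic " + base64(client_id ":" secret),
    # as required of OAuth clients by RFC 6749, Section 2.3.1.
    creds = f"{client_id}:{secret}".encode("ascii")
    return "Basic " + base64.b64encode(creds).decode("ascii")


assert basic_auth_header("my-client", "hunter2") == "Basic bXktY2xpZW50Omh1bnRlcjI="
```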
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, _ = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Cleanup callback for the bearer request. libpq should invoke this
+        exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we cleaned up after ourselves.
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
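For reference, the happy-path discovery document that these failure cases perturb one field at a time would minimally contain the following. This is a sketch assembled from the fields the error messages above check for; `supports_device_flow` is a hypothetical helper, not part of the patch:

```python
REQUIRED = ("issuer", "token_endpoint")
DEVICE_GRANT = "urn:ietf:params:oauth:grant-type:device_code"

# Minimal discovery document satisfying every check exercised above.
minimal_discovery = {
    "issuer": "https://example.com",
    "token_endpoint": "https://example.com/token",
    "grant_types_supported": [DEVICE_GRANT],
    "device_authorization_endpoint": "https://example.com/dev",
}


def supports_device_flow(doc) -> bool:
    # Mirrors the validations the error cases above exercise one at a time:
    # required string fields, the device_code grant, and the device endpoint.
    return (
        all(isinstance(doc.get(f), str) for f in REQUIRED)
        and DEVICE_GRANT in doc.get("grant_types_supported", [])
        and "device_authorization_endpoint" in doc
    )
```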
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to respond to an error
+            # "challenge" with a dummy ^A (0x01) message.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to pytest. We add one to request the
+    creation of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top-level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        out = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            out.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            out.append(v)
+
+        out.append(b"")
+        return out
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
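Outside the Construct machinery, the key/value encoding this adapter produces can be sketched with the stdlib alone. The helper name is illustrative, not part of the patch:

```python
def encode_startup_params(params: dict) -> bytes:
    """Encode startup parameters as null-terminated key/value strings,
    followed by a final terminating NUL (the v3 startup payload layout)."""
    out = bytearray()
    for k, v in params.items():
        out += k.encode("utf-8") + b"\x00"
        out += v.encode("utf-8") + b"\x00"
    out += b"\x00"  # list terminator
    return bytes(out)
```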
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
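The framing the Startup struct computes (a length word that counts itself plus the protocol word, hence the +8) can be cross-checked with a minimal stdlib sketch; `frame_startup` is a hypothetical helper:

```python
import struct


def frame_startup(payload: bytes) -> bytes:
    """Frame a v3 startup packet: a 4-byte big-endian length (which includes
    the length and protocol words themselves) followed by a 4-byte version."""
    proto = (3 << 16) | 0  # protocol(3, 0)
    return struct.pack("!ii", len(payload) + 8, proto) + payload
```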
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
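A hand-rolled parser for the same message shape may make the layout easier to see: a NUL-terminated mechanism name, a 4-byte signed length (-1 meaning "no initial response"), then that many bytes of data. This stdlib-only sketch is illustrative; `parse_sasl_initial_response` is hypothetical:

```python
import struct


def parse_sasl_initial_response(buf: bytes):
    """Split a SASLInitialResponse body into (mechanism, initial_data).
    A length of -1 means the client sent no initial response."""
    nul = buf.index(b"\x00")
    mech = buf[:nul].decode("ascii")
    (length,) = struct.unpack_from("!i", buf, nul + 1)
    data = None if length == -1 else buf[nul + 5 : nul + 5 + length]
    return mech, data
```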
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds the translation map used for hexdumps: any unprintable or
+    non-ASCII byte is translated to '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest "startup" packet (protocol 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..57ba1ced94
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,9 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+construct~=2.10.61
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except (ConnectionError, socket.timeout) as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);	/* copy; don't hand out the GUC string */
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on the local machine, and that the PGUSER has rights to
+    create databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
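The size arithmetic in bearer_token() follows from base64url encoding: secrets.token_urlsafe(n) encodes n random bytes into 4n/3 characters with no padding, so requesting size // 4 * 3 bytes yields exactly `size` characters whenever `size` is a multiple of 4. A standalone sketch of that invariant:

```python
import secrets

# base64url maps each 3 input bytes to 4 output characters, and
# token_urlsafe() emits no '=' padding, so with nbytes = size // 4 * 3
# the encoded length is exactly `size` for any multiple-of-4 size.
for size in (16, 1024, 4096):
    nbytes = size // 4 * 3
    token = secrets.token_urlsafe(nbytes)
    assert len(token) == size
```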
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for the given user (oauth_ctx.authz_user by
+    default) and checks the server's SASL mechanism advertisement.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Instead of a bearer token, the initial response's auth field
+    may be specified explicitly to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
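For reference, the initial response built above follows the OAUTHBEARER layout from RFC 7628: a GS2 header ("n,," meaning no channel binding and no authzid), then ^A-separated key/value pairs terminated by a double ^A. A minimal sketch of the encoding, using a hypothetical token:

```python
# Hypothetical token for illustration; the tests use bearer_token() instead.
token = b"abcd1234"

gs2_header = b"n,,"                           # no channel binding, no authzid
auth_kv = b"\x01auth=Bearer " + token         # the only required key/value pair
message = gs2_header + auth_kv + b"\x01\x01"  # double kvsep ends the list

assert message == b"n,,\x01auth=Bearer abcd1234\x01\x01"
```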
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
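The discovery "challenge" validated above is the RFC 7628 error response: a JSON document carrying the failure status plus the scope and OpenID discovery document advertised by the server. A sketch of what such a challenge might contain (the issuer and scope here are placeholders; the real values come from the HBA configuration):

```python
import json

# Placeholder values for illustration only.
issuer = "https://example.com/abcd"
scope = "openid abcd"

challenge = {
    "status": "invalid_token",
    "scope": scope,
    "openid-configuration": issuer + "/.well-known/openid-configuration",
}

# The server sends this as the body of a SASLContinue message; the client
# parses it to learn where (and with what scope) to obtain a token.
body = json.loads(json.dumps(challenge))
assert body["status"] == "invalid_token"
assert body["openid-configuration"].endswith("/.well-known/openid-configuration")
```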
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL initial response; send the test's (usually
+    # malformed) payload instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry whose issuer/scope settings contain invalid characters,
+    to pin the server's behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup suffix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an initial response with an empty auth value, forcing an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload with trailing data",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with trailing data",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

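The `tls.py` module in the patch above describes TLS 1.3 wire formats with the `construct` library; the outermost `Plaintext` record is simply a 5-byte header (content type, legacy version, length) followed by `length` bytes of fragment. As a point of reference, that framing can be sketched with only the standard library (a hypothetical helper, not part of the patch):

```python
import struct

def parse_record(data: bytes):
    """Split one TLS plaintext record into (type, legacy_version, fragment)."""
    # "!BHH" mirrors ContentType (1 byte), ProtocolVersion (2 bytes),
    # and the 16-bit length field from the Plaintext struct.
    content_type, version, length = struct.unpack("!BHH", data[:5])
    fragment = data[5 : 5 + length]
    if len(fragment) != length:
        raise ValueError("truncated TLS record")
    return content_type, version, fragment

# A handshake record (ContentType 22, TLS 1.2 legacy version) carrying a
# three-byte fragment:
record = struct.pack("!BHH", 22, 0x0303, 3) + b"\x01\x02\x03"
ctype, version, fragment = parse_record(record)
```

The `construct` declarations buy the same thing declaratively, plus named enums and nested `Handshake` parsing, which is why the test suite uses them instead of hand-rolled `struct` calls.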
v20-0009-WIP-Python-OAuth-provider-implementation.patch (application/octet-stream)
From c4d850a7c43c82de562b22b4ec1ad107adbe0cec Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 26 Feb 2024 16:24:32 -0800
Subject: [PATCH v20 9/9] WIP: Python OAuth provider implementation

---
 src/test/modules/oauth_validator/Makefile     |   2 +
 src/test/modules/oauth_validator/meson.build  |   3 +
 .../modules/oauth_validator/t/001_server.pl   |  16 +-
 .../modules/oauth_validator/t/oauth_server.py | 114 ++++++++++++
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 166 +++---------------
 5 files changed, 149 insertions(+), 152 deletions(-)
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py

diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 1f874cd7f2..e93e01455a 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -1,3 +1,5 @@
+export PYTHON
+
 MODULES = validator
 PGFILEDESC = "validator - test OAuth validator module"
 
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index d9c1d1d577..3feba6f826 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -29,5 +29,8 @@ tests += {
     'tests': [
       't/001_server.pl',
     ],
+    'env': {
+      'PYTHON': python.path(),
+    },
   },
 }
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 14c7778298..ea610fcd28 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -19,7 +19,11 @@ $node->start;
 $node->safe_psql('postgres', 'CREATE USER test;');
 $node->safe_psql('postgres', 'CREATE USER testalt;');
 
-my $issuer = "127.0.0.1:18080";
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
 
 unlink($node->data_dir . '/pg_hba.conf');
 $node->append_conf('pg_hba.conf', qq{
@@ -28,15 +32,6 @@ local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
 });
 $node->reload;
 
-my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
-
-my $port = $webserver->port();
-
-is($port, 18080, "Port is 18080");
-
-$webserver->setup();
-$webserver->run();
-
 my ($log_start, $log_end);
 $log_start = $node->wait_for_log(qr/reloading configuration files/);
 
@@ -73,6 +68,7 @@ $node->log_check("user $user: validator sets authenticated identity", $log_start
 				 ]);
 $log_start = $log_end;
 
+$webserver->stop();
 $node->stop;
 
 done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..77e3883a81
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,114 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+
+    def do_GET(self):
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        self._check_issuer()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": uri,
+            "expires-in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index 5c195efb79..d96733f531 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -5,6 +5,7 @@ package PostgreSQL::Test::OAuthServer;
 use warnings;
 use strict;
 use threads;
+use Scalar::Util;
 use Socket;
 use IO::Select;
 
@@ -13,27 +14,13 @@ local *server_socket;
 sub new
 {
 	my $class = shift;
-	my $port = shift;
 
 	my $self = {};
 	bless($self, $class);
 
-	$self->{'port'} = $port;
-
 	return $self;
 }
 
-sub setup
-{
-	my $self = shift;
-	my $tcp = getprotobyname('tcp');
-
-	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
-		or die "no socket";
-	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
-	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
-}
-
 sub port
 {
 	my $self = shift;
@@ -44,140 +31,35 @@ sub port
 sub run
 {
 	my $self = shift;
+	my $port;
 
-	my $server_thread = threads->create(\&_listen, $self);
-	$server_thread->detach();
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
 }
 
-sub _listen
+sub stop
 {
 	my $self = shift;
 
-	listen($self->{'socket'}, SOMAXCONN) or die "fail to listen: $!";
-
-	while (1)
-	{
-		my $fh;
-		my %request;
-		my $remote = accept($fh, $self->{'socket'});
-		binmode $fh;
-
-		my ($method, $object, $prot) = split(/ /, <$fh>);
-		$request{'method'} = $method;
-		$request{'object'} = $object;
-		chomp($request{'object'});
-
-		local $/ = Socket::CRLF;
-		my $c = 0;
-		while(<$fh>)
-		{
-			chomp;
-			# Headers
-			if (/:/)
-			{
-				my ($field, $value) = split(/:/, $_, 2);
-				$value =~ s/^\s+//;
-				$request{'headers'}{lc $field} = $value;
-			}
-			# POST data
-			elsif (/^$/)
-			{
-				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
-					if defined $request{'headers'}{'content-length'};
-				last;
-			}
-		}
-
-		# Debug printing
-		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
-		# foreach my $h (keys(%{$request{'headers'}}))
-		#{
-		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
-		#}
-		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
-
-		my $alternate = 0;
-		if ($request{'object'} =~ qr|^/alternate(/.*)$|)
-		{
-			$alternate = 1;
-			$request{'object'} = $1;
-		}
-
-		if ($request{'object'} eq '/.well-known/openid-configuration')
-		{
-			my $issuer = "http://localhost:$self->{'port'}";
-			if ($alternate)
-			{
-				$issuer .= "/alternate";
-			}
-
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"issuer": "$issuer",
-				"token_endpoint": "$issuer/token",
-				"device_authorization_endpoint": "$issuer/authorize",
-				"response_types_supported": ["token"],
-				"subject_types_supported": ["public"],
-				"id_token_signing_alg_values_supported": ["RS256"],
-				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/authorize')
-		{
-			my $uri = "https://example.com/";
-			if ($alternate)
-			{
-				$uri = "https://example.org/";
-			}
-
-			print ": returning device_code\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"device_code": "postgres",
-				"user_code" : "postgresuser",
-				"interval" : 0,
-				"verification_uri" : "$uri",
-				"expires-in": 5
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/token')
-		{
-			my $token = "9243959234";
-			if ($alternate)
-			{
-				$token .= "-alt";
-			}
-
-			print ": returning token\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"access_token": "$token",
-				"token_type": "bearer"
-			}
-EOR
-		}
-		else
-		{
-			print ": returning default\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: text/html\r\n";
-			print $fh "\r\n";
-			print $fh "Ok\n";
-		}
-
-		close($fh);
-	}
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
 }
 
 1;
-- 
2.34.1

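The patch's `oauth_server.py` serves an OpenID discovery document plus `/authorize` and `/token` endpoints. To illustrate how a device-flow client consumes that layout, here is a hypothetical, heavily trimmed stand-in: the endpoint paths and token value are copied from the patch, but the handler class and client code below are illustrative only, not the actual test suite.

```python
import http.server
import json
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def _send_json(self, obj):
        body = json.dumps(obj).encode("ascii")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        issuer = "http://127.0.0.1:%d" % self.server.socket.getsockname()[1]
        if self.path == "/.well-known/openid-configuration":
            # Minimal discovery document, shaped like the patch's config().
            self._send_json({"issuer": issuer,
                             "token_endpoint": issuer + "/token"})
        else:
            self.send_error(404)

    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if self.path == "/token":
            self._send_json({"access_token": "9243959234",
                             "token_type": "bearer"})
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.socket.getsockname()[1]

# Step 1: discovery -- the client only needs the issuer URL up front.
url = f"http://127.0.0.1:{port}/.well-known/openid-configuration"
with urllib.request.urlopen(url) as r:
    config = json.load(r)

# Step 2: redeem the (fake) device code at the advertised token endpoint.
body = b"grant_type=urn:ietf:params:oauth:grant-type:device_code&device_code=postgres"
req = urllib.request.Request(config["token_endpoint"], data=body)
with urllib.request.urlopen(req) as r:
    token = json.load(r)

server.shutdown()
```

Binding to port 0 and reading the chosen port back from the socket is the same trick the patch uses to avoid the hard-coded port 18080 it removes.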
v20-0008-XXX-temporary-patches-to-build-and-test.patch (application/octet-stream)
From 185f9902fd85d1409d4ea749938991bb61ac4b21 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 20 Feb 2024 11:35:29 -0800
Subject: [PATCH v20 8/9] XXX temporary patches to build and test

- the new pg_combinebackup utility uses JSON in the frontend without
  0001; has something changed?
- construct 2.10.70 has some incompatibilities with the current tests
- temporarily skip the exit check (from Daniel Gustafsson); this needs
  to be turned into an exception for curl rather than a plain exit call
---
 src/bin/pg_combinebackup/Makefile    | 6 ++++--
 src/bin/pg_combinebackup/meson.build | 3 ++-
 src/bin/pg_verifybackup/Makefile     | 2 +-
 src/interfaces/libpq/Makefile        | 2 +-
 src/test/python/requirements.txt     | 4 +++-
 5 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index 4f24b1aff6..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,8 +32,8 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
-	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 1d4b9c218f..6639987a58 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,8 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  # XXX linking against libpq isn't good, but how was JSON working?
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 2618c293af..e86d4803ff 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -124,7 +124,7 @@ libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
 	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
-		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
+		echo 'libpq must not be calling any function which invokes exit'; \
 	fi
 endif
 endif
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
index 57ba1ced94..0dfcffb83e 100644
--- a/src/test/python/requirements.txt
+++ b/src/test/python/requirements.txt
@@ -1,7 +1,9 @@
 black
 # cryptography 35.x and later add many platform/toolchain restrictions, beware
 cryptography~=3.4.8
-construct~=2.10.61
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
 isort~=5.6
 # TODO: update to psycopg[c] 3.1
 psycopg2~=2.9.7
-- 
2.34.1

#100 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#99)
8 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

v21 is a quick rebase over HEAD, which has adopted a few pieces of
v20. I've also fixed a race condition in the tests.

On Mon, Mar 11, 2024 at 3:51 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Andrew asked over at [2] if we could perhaps get 0001 in as well. I
think the main thing to figure out there is, is requiring linkage
against libpq (see 0008) going to be okay for the frontend binaries
that need JSON support? Or do we need to do something like moving
PQExpBuffer into src/common to simplify the dependency tree?

0001 has been pared down to the part that teaches jsonapi.c to use
PQExpBuffer and track out-of-memory conditions; the linkage questions
remain.

Thanks,
--Jacob

Attachments:

v21-0007-WIP-Python-OAuth-provider-implementation.patch (application/octet-stream)
From decc90579a73dd3073ddb479c317d52f31f26d67 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 26 Feb 2024 16:24:32 -0800
Subject: [PATCH v21 7/7] WIP: Python OAuth provider implementation

---
 src/test/modules/oauth_validator/Makefile     |   2 +
 src/test/modules/oauth_validator/meson.build  |   3 +
 .../modules/oauth_validator/t/001_server.pl   |  16 +-
 .../modules/oauth_validator/t/oauth_server.py | 114 ++++++++++++
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 166 +++---------------
 5 files changed, 149 insertions(+), 152 deletions(-)
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py

diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 1f874cd7f2..e93e01455a 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -1,3 +1,5 @@
+export PYTHON
+
 MODULES = validator
 PGFILEDESC = "validator - test OAuth validator module"
 
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index d9c1d1d577..3feba6f826 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -29,5 +29,8 @@ tests += {
     'tests': [
       't/001_server.pl',
     ],
+    'env': {
+      'PYTHON': python.path(),
+    },
   },
 }
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 14c7778298..ea610fcd28 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -19,7 +19,11 @@ $node->start;
 $node->safe_psql('postgres', 'CREATE USER test;');
 $node->safe_psql('postgres', 'CREATE USER testalt;');
 
-my $issuer = "127.0.0.1:18080";
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
 
 unlink($node->data_dir . '/pg_hba.conf');
 $node->append_conf('pg_hba.conf', qq{
@@ -28,15 +32,6 @@ local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
 });
 $node->reload;
 
-my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
-
-my $port = $webserver->port();
-
-is($port, 18080, "Port is 18080");
-
-$webserver->setup();
-$webserver->run();
-
 my ($log_start, $log_end);
 $log_start = $node->wait_for_log(qr/reloading configuration files/);
 
@@ -73,6 +68,7 @@ $node->log_check("user $user: validator sets authenticated identity", $log_start
 				 ]);
 $log_start = $log_end;
 
+$webserver->stop();
 $node->stop;
 
 done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..77e3883a81
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,114 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+
+    def do_GET(self):
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        self._check_issuer()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": uri,
+            "expires-in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index 5c195efb79..d96733f531 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -5,6 +5,7 @@ package PostgreSQL::Test::OAuthServer;
 use warnings;
 use strict;
 use threads;
+use Scalar::Util;
 use Socket;
 use IO::Select;
 
@@ -13,27 +14,13 @@ local *server_socket;
 sub new
 {
 	my $class = shift;
-	my $port = shift;
 
 	my $self = {};
 	bless($self, $class);
 
-	$self->{'port'} = $port;
-
 	return $self;
 }
 
-sub setup
-{
-	my $self = shift;
-	my $tcp = getprotobyname('tcp');
-
-	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
-		or die "no socket";
-	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
-	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
-}
-
 sub port
 {
 	my $self = shift;
@@ -44,140 +31,35 @@ sub port
 sub run
 {
 	my $self = shift;
+	my $port;
 
-	my $server_thread = threads->create(\&_listen, $self);
-	$server_thread->detach();
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
 }
 
-sub _listen
+sub stop
 {
 	my $self = shift;
 
-	listen($self->{'socket'}, SOMAXCONN) or die "fail to listen: $!";
-
-	while (1)
-	{
-		my $fh;
-		my %request;
-		my $remote = accept($fh, $self->{'socket'});
-		binmode $fh;
-
-		my ($method, $object, $prot) = split(/ /, <$fh>);
-		$request{'method'} = $method;
-		$request{'object'} = $object;
-		chomp($request{'object'});
-
-		local $/ = Socket::CRLF;
-		my $c = 0;
-		while(<$fh>)
-		{
-			chomp;
-			# Headers
-			if (/:/)
-			{
-				my ($field, $value) = split(/:/, $_, 2);
-				$value =~ s/^\s+//;
-				$request{'headers'}{lc $field} = $value;
-			}
-			# POST data
-			elsif (/^$/)
-			{
-				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
-					if defined $request{'headers'}{'content-length'};
-				last;
-			}
-		}
-
-		# Debug printing
-		# print ": read ".$request{'method'} . ";" . $request{'object'}.";\n";
-		# foreach my $h (keys(%{$request{'headers'}}))
-		#{
-		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
-		#}
-		#printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
-
-		my $alternate = 0;
-		if ($request{'object'} =~ qr|^/alternate(/.*)$|)
-		{
-			$alternate = 1;
-			$request{'object'} = $1;
-		}
-
-		if ($request{'object'} eq '/.well-known/openid-configuration')
-		{
-			my $issuer = "http://localhost:$self->{'port'}";
-			if ($alternate)
-			{
-				$issuer .= "/alternate";
-			}
-
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"issuer": "$issuer",
-				"token_endpoint": "$issuer/token",
-				"device_authorization_endpoint": "$issuer/authorize",
-				"response_types_supported": ["token"],
-				"subject_types_supported": ["public"],
-				"id_token_signing_alg_values_supported": ["RS256"],
-				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/authorize')
-		{
-			my $uri = "https://example.com/";
-			if ($alternate)
-			{
-				$uri = "https://example.org/";
-			}
-
-			print ": returning device_code\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"device_code": "postgres",
-				"user_code" : "postgresuser",
-				"interval" : 0,
-				"verification_uri" : "$uri",
-				"expires-in": 5
-			}
-EOR
-		}
-		elsif ($request{'object'} eq '/token')
-		{
-			my $token = "9243959234";
-			if ($alternate)
-			{
-				$token .= "-alt";
-			}
-
-			print ": returning token\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: application/json\r\n";
-			print $fh "\r\n";
-			print $fh <<EOR;
-			{
-				"access_token": "$token",
-				"token_type": "bearer"
-			}
-EOR
-		}
-		else
-		{
-			print ": returning default\n";
-			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
-			print $fh "Content-Type: text/html\r\n";
-			print $fh "\r\n";
-			print $fh "Ok\n";
-		}
-
-		close($fh);
-	}
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
 }
 
 1;
-- 
2.34.1

Attachment: since-v20.diff.txt (text/plain; charset=US-ASCII)
 1:  231c6fb165 !  1:  557370eabb common/jsonapi: support FRONTEND clients
    @@ Metadata
     Author: Jacob Champion <pchampion@vmware.com>
     
      ## Commit message ##
    -    common/jsonapi: support FRONTEND clients
    +    common/jsonapi: support libpq as a client
     
         Based on a patch by Michael Paquier.
     
         For frontend code, use PQExpBuffer instead of StringInfo. This requires
         us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
    -    as needed. json_errdetail() now allocates its error message inside
    -    memory owned by the JsonLexContext, so clients don't need to worry about
    -    freeing it.
    +    as needed rather than exit()ing.
     
    -    We can now partially revert b44669b2ca, now that json_errdetail() works
    -    correctly.
    + ## src/bin/pg_combinebackup/Makefile ##
    +@@ src/bin/pg_combinebackup/Makefile: include $(top_builddir)/src/Makefile.global
      
    -    Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    + override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
    + LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
    ++# TODO: fix this properly
    ++LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
      
    - ## src/bin/pg_verifybackup/t/005_bad_manifest.pl ##
    -@@ src/bin/pg_verifybackup/t/005_bad_manifest.pl: use Test::More;
    - my $tempdir = PostgreSQL::Test::Utils::tempdir;
    + OBJS = \
    + 	$(WIN32RES) \
    +@@ src/bin/pg_combinebackup/Makefile: OBJS = \
      
    - test_bad_manifest('input string ended unexpectedly',
    --	qr/could not parse backup manifest: parsing failed/, <<EOM);
    -+	qr/could not parse backup manifest: The input string ended unexpectedly/,
    -+	<<EOM);
    - {
    - EOM
    + all: pg_combinebackup
    + 
    +-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
    ++pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
    + 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
    + 
    + install: all installdirs
    +
    + ## src/bin/pg_combinebackup/meson.build ##
    +@@ src/bin/pg_combinebackup/meson.build: endif
    + 
    + pg_combinebackup = executable('pg_combinebackup',
    +   pg_combinebackup_sources,
    +-  dependencies: [frontend_code],
    ++  dependencies: [frontend_code, libpq],
    +   kwargs: default_bin_args,
    + )
    + bin_targets += pg_combinebackup
     
    + ## src/bin/pg_verifybackup/Makefile ##
    +@@ src/bin/pg_verifybackup/Makefile: top_builddir = ../../..
    + include $(top_builddir)/src/Makefile.global
    + 
    + # We need libpq only because fe_utils does.
    +-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
    ++LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
    + 
    + OBJS = \
    + 	$(WIN32RES) \
     
      ## src/common/Makefile ##
     @@ src/common/Makefile: override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *js
     +	static const JsonLexContext empty = {0};
     +
      	if (lex->flags & JSONLEX_FREE_STRVAL)
    --	{
    --		pfree(lex->strval->data);
    --		pfree(lex->strval);
    --	}
    +-		destroyStringInfo(lex->strval);
     +		destroyStrVal(lex->strval);
    -+
    -+	if (lex->errormsg)
    + 
    + 	if (lex->errormsg)
    +-		destroyStringInfo(lex->errormsg);
     +		destroyStrVal(lex->errormsg);
    -+
    + 
      	if (lex->flags & JSONLEX_FREE_STRUCT)
      		pfree(lex);
     +	else
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      	lex->prev_token_terminator = lex->token_terminator;
      	lex->token_terminator = s + 1;
     @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
    - 	return JSON_SUCCESS;		/* silence stupider compilers */
    - }
    - 
    --
    --#ifndef FRONTEND
    --/*
    -- * Extract the current token from a lexing context, for error reporting.
    -- */
    --static char *
    --extract_token(JsonLexContext *lex)
    --{
    --	int			toklen = lex->token_terminator - lex->token_start;
    --	char	   *token = palloc(toklen + 1);
    --
    --	memcpy(token, lex->token_start, toklen);
    --	token[toklen] = '\0';
    --	return token;
    --}
    --
    - /*
    -  * Construct an (already translated) detail message for a JSON error.
    -  *
    -- * Note that the error message generated by this routine may not be
    -- * palloc'd, making it unsafe for frontend code as there is no way to
    -- * know if this can be safely pfree'd or not.
    -+ * The returned allocation is either static or owned by the JsonLexContext and
    -+ * should not be freed.
    -  */
      char *
      json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
      {
    -+	int			toklen = lex->token_terminator - lex->token_start;
    -+
     +	if (error == JSON_OUT_OF_MEMORY)
     +	{
     +		/* Short circuit. Allocating anything for this case is unhelpful. */
     +		return _("out of memory");
     +	}
     +
    -+	if (lex->errormsg)
    + 	if (lex->errormsg)
    +-		resetStringInfo(lex->errormsg);
     +		resetStrVal(lex->errormsg);
    -+	else
    + 	else
    +-		lex->errormsg = makeStringInfo();
     +		lex->errormsg = createStrVal();
    -+
    + 
    + 	/*
    + 	 * A helper for error messages that should print the current token. The
    + 	 * format must contain exactly one %.*s specifier.
    + 	 */
    + #define token_error(lex, format) \
    +-	appendStringInfo((lex)->errormsg, _(format), \
    +-					 (int) ((lex)->token_terminator - (lex)->token_start), \
    +-					 (lex)->token_start);
    ++	appendStrVal((lex)->errormsg, _(format), \
    ++				 (int) ((lex)->token_terminator - (lex)->token_start), \
    ++				 (lex)->token_start);
    + 
      	switch (error)
      	{
    - 		case JSON_SUCCESS:
    - 			/* fall through to the error code after switch */
    +@@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    + 			token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
      			break;
    - 		case JSON_ESCAPING_INVALID:
    --			return psprintf(_("Escape sequence \"\\%s\" is invalid."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Escape sequence \"\\%.*s\" is invalid."),
    -+						 toklen, lex->token_start);
    -+			break;
      		case JSON_ESCAPING_REQUIRED:
    --			return psprintf(_("Character with value 0x%02x must be escaped."),
    +-			appendStringInfo(lex->errormsg,
    +-							 _("Character with value 0x%02x must be escaped."),
     -							 (unsigned char) *(lex->token_terminator));
     +			appendStrVal(lex->errormsg,
     +						 _("Character with value 0x%02x must be escaped."),
     +						 (unsigned char) *(lex->token_terminator));
    -+			break;
    + 			break;
      		case JSON_EXPECTED_END:
    --			return psprintf(_("Expected end of input, but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected end of input, but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_ARRAY_FIRST:
    --			return psprintf(_("Expected array element or \"]\", but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected array element or \"]\", but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_ARRAY_NEXT:
    --			return psprintf(_("Expected \",\" or \"]\", but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected \",\" or \"]\", but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_COLON:
    --			return psprintf(_("Expected \":\", but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected \":\", but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_JSON:
    --			return psprintf(_("Expected JSON value, but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected JSON value, but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_MORE:
    - 			return _("The input string ended unexpectedly.");
    - 		case JSON_EXPECTED_OBJECT_FIRST:
    --			return psprintf(_("Expected string or \"}\", but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected string or \"}\", but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_OBJECT_NEXT:
    --			return psprintf(_("Expected \",\" or \"}\", but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected \",\" or \"}\", but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    - 		case JSON_EXPECTED_STRING:
    --			return psprintf(_("Expected string, but found \"%s\"."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Expected string, but found \"%.*s\"."),
    -+						 toklen, lex->token_start);
    -+			break;
    + 			token_error(lex, "Expected end of input, but found \"%.*s\".");
    +@@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
      		case JSON_INVALID_TOKEN:
    --			return psprintf(_("Token \"%s\" is invalid."),
    --							extract_token(lex));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Token \"%.*s\" is invalid."),
    -+						 toklen, lex->token_start);
    -+			break;
    + 			token_error(lex, "Token \"%.*s\" is invalid.");
    + 			break;
     +		case JSON_OUT_OF_MEMORY:
     +			/* should have been handled above; use the error path */
     +			break;
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
      			return _("\\u0000 cannot be converted to text.");
      		case JSON_UNICODE_ESCAPE_FORMAT:
     @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    - 			/* note: this case is only reachable in frontend not backend */
    - 			return _("Unicode escape values cannot be used for code point values above 007F when the encoding is not UTF8.");
    - 		case JSON_UNICODE_UNTRANSLATABLE:
    --			/* note: this case is only reachable in backend not frontend */
    -+
    -+			/*
    -+			 * note: this case is only reachable in backend not frontend.
    -+			 * #ifdef it away so the frontend doesn't try to link against
    -+			 * backend functionality.
    -+			 */
    -+#ifndef FRONTEND
    - 			return psprintf(_("Unicode escape value could not be translated to the server's encoding %s."),
    - 							GetDatabaseEncodingName());
    -+#else
    -+			Assert(false);
    -+			break;
    -+#endif
    - 		case JSON_UNICODE_HIGH_SURROGATE:
    - 			return _("Unicode high surrogate must not follow a high surrogate.");
    - 		case JSON_UNICODE_LOW_SURROGATE:
    -@@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    - 			break;
      	}
    + #undef token_error
      
     -	/*
     -	 * We don't use a default: case, so that the compiler will warn about
     -	 * unhandled enum values.  But this needs to be here anyway to cover the
     -	 * possibility of an incorrect input.
     -	 */
    --	elog(ERROR, "unexpected json parse error type: %d", (int) error);
    --	return NULL;
    --}
    +-	if (lex->errormsg->len == 0)
    +-		appendStringInfo(lex->errormsg,
    +-						 _("unexpected json parse error type: %d"),
    +-						 (int) error);
     +	/* Note that lex->errormsg can be NULL in FRONTEND code. */
    -+	if (lex->errormsg && !lex->errormsg->data[0])
    ++	if (lex->errormsg && lex->errormsg->len == 0)
     +	{
     +		/*
     +		 * We don't use a default: case, so that the compiler will warn about
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     +		 * the possibility of an incorrect input.
     +		 */
     +		appendStrVal(lex->errormsg,
    -+					 "unexpected json parse error type: %d", (int) error);
    ++					 _("unexpected json parse error type: %d"),
    ++					 (int) error);
     +	}
     +
     +#ifdef FRONTEND
     +	if (PQExpBufferBroken(lex->errormsg))
     +		return _("out of memory while constructing error description");
    - #endif
    -+
    -+	return lex->errormsg->data;
    -+}
    ++#endif
    + 
    + 	return lex->errormsg->data;
    + }
     
      ## src/common/meson.build ##
     @@ src/common/meson.build: common_sources_frontend_static += files(
    @@ src/common/meson.build: foreach name, opts : pgcommon_variants
              'dependencies': opts['dependencies'] + [ssl],
            }
     
    - ## src/common/parse_manifest.c ##
    -@@ src/common/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    - 	/* Run the actual JSON parser. */
    - 	json_error = pg_parse_json(lex, &sem);
    - 	if (json_error != JSON_SUCCESS)
    --		json_manifest_parse_failure(context, "parsing failed");
    -+		json_manifest_parse_failure(context, json_errdetail(json_error, lex));
    - 	if (parse.state != JM_EXPECT_EOF)
    - 		json_manifest_parse_failure(context, "manifest ended unexpectedly");
    - 
    -
    - ## src/common/stringinfo.c ##
    -@@ src/common/stringinfo.c: enlargeStringInfo(StringInfo str, int needed)
    - 
    - 	str->maxlen = newlen;
    - }
    -+
    -+void
    -+destroyStringInfo(StringInfo str)
    -+{
    -+	pfree(str->data);
    -+	pfree(str);
    -+}
    -
      ## src/include/common/jsonapi.h ##
     @@
      #ifndef JSONAPI_H
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
      	int			line_number;	/* line number, starting from 1 */
      	char	   *line_start;		/* where that line starts within input */
     -	StringInfo	strval;
    +-	StringInfo	errormsg;
     +	bool		parse_strval;
     +	StrValType *strval;			/* only used if parse_strval == true */
     +	StrValType *errormsg;
      } JsonLexContext;
      
      typedef JsonParseErrorType (*json_struct_action) (void *state);
    -
    - ## src/include/lib/stringinfo.h ##
    -@@ src/include/lib/stringinfo.h: extern void appendBinaryStringInfoNT(StringInfo str,
    -  */
    - extern void enlargeStringInfo(StringInfo str, int needed);
    - 
    -+
    -+extern void destroyStringInfo(StringInfo str);
    - #endif							/* STRINGINFO_H */
 2:  f78c79ea68 <  -:  ---------- Refactor SASL exchange to return tri-state status
 3:  10b6d2a6b9 <  -:  ---------- Explicitly require password for SCRAM exchange
 4:  2a55d9c806 !  2:  b754a2f8fb libpq: add OAUTHBEARER SASL mechanism
    @@ src/interfaces/libpq/Makefile: endif
      endif
     
      ## src/interfaces/libpq/exports.txt ##
    -@@ src/interfaces/libpq/exports.txt: PQsendClosePrepared       190
    - PQsendClosePortal         191
    - PQchangePassword          192
    - PQsendPipelineSync        193
    -+PQsetAuthDataHook         194
    -+PQgetAuthDataHook         195
    -+PQdefaultAuthDataHook     196
    +@@ src/interfaces/libpq/exports.txt: PQcancelSocket            199
    + PQcancelErrorMessage      200
    + PQcancelReset             201
    + PQcancelFinish            202
    ++PQsetAuthDataHook         203
    ++PQgetAuthDataHook         204
    ++PQdefaultAuthDataHook     205
     
      ## src/interfaces/libpq/fe-auth-oauth-curl.c (new) ##
     @@
    @@ src/interfaces/libpq/fe-connect.c: static const internalPQconninfoOption PQconni
      	{NULL, NULL, NULL, NULL,
      	NULL, NULL, 0}
     @@ src/interfaces/libpq/fe-connect.c: pqDropServerData(PGconn *conn)
    + 	conn->write_failed = false;
    + 	free(conn->write_err_msg);
      	conn->write_err_msg = NULL;
    - 	conn->be_pid = 0;
    - 	conn->be_key = 0;
     +	/* conn->oauth_want_retry = false; TODO */
    - }
    - 
      
    + 	/*
    + 	 * Cancel connections need to retain their be_pid and be_key across
     @@ src/interfaces/libpq/fe-connect.c: PQconnectPoll(PGconn *conn)
      		case CONNECTION_NEEDED:
      		case CONNECTION_GSS_STARTUP:
    @@ src/interfaces/libpq/libpq-fe.h: extern "C"
      /*
       * Option flags for PQcopyResult
     @@ src/interfaces/libpq/libpq-fe.h: typedef enum
    - 	CONNECTION_CONSUME,			/* Consuming any extra messages. */
    - 	CONNECTION_GSS_STARTUP,		/* Negotiating GSSAPI. */
    - 	CONNECTION_CHECK_TARGET,	/* Checking target server properties. */
    --	CONNECTION_CHECK_STANDBY	/* Checking if server is in standby mode. */
    -+	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
    + 	CONNECTION_CHECK_TARGET,	/* Internal state: checking target server
    + 								 * properties. */
    + 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
    +-	CONNECTION_ALLOCATED		/* Waiting for connection attempt to be
    ++	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
    + 								 * started.  */
     +	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
     +								 * external system. */
      } ConnStatusType;
    @@ src/interfaces/libpq/libpq-int.h: typedef struct pg_conn_host
       * PGconn stores all the state data associated with a single connection
       * to a backend.
     @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
    - 	char	   *require_auth;	/* name of the expected auth method */
    - 	char	   *load_balance_hosts; /* load balance over hosts */
    + 								 * cancel request, instead of being a normal
    + 								 * connection that's used for queries */
      
     +	/* OAuth v2 */
     +	char	   *oauth_issuer;	/* token issuer URL */
 5:  5488ac25f5 !  3:  27251b63c1 backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/utils/misc/guc_tables.c
      #include "nodes/queryjumble.h"
      #include "optimizer/cost.h"
     @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[] =
    - 		check_debug_io_direct, assign_debug_io_direct, NULL
    + 		check_standby_slot_names, assign_standby_slot_names, NULL
      	},
      
     +	{
 6:  fdbad1976a !  4:  78617fa9ba Introduce OAuth validator libraries
    @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[]
      		NULL, NULL, NULL
      	},
     
    - ## src/bin/pg_combinebackup/Makefile ##
    -@@ src/bin/pg_combinebackup/Makefile: OBJS = \
    - all: pg_combinebackup
    - 
    - pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
    --	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
    -+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
    - 
    - install: all installdirs
    - 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
    -
      ## src/common/Makefile ##
     @@ src/common/Makefile: override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
      override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 7:  e1da97fb50 !  5:  1b871542f5 Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +    discovery_uri = issuer + "/.well-known/openid-configuration"
     +    access_token = secrets.token_urlsafe()
     +
    -+    sock, _ = accept(
    ++    sock, client = accept(
     +        oauth_issuer=issuer,
     +        oauth_client_id="some-id",
     +        oauth_scope=scope,
    @@ src/test/python/client/test_oauth.py (new)
     +    assert call.openid_configuration.decode() == discovery_uri
     +    assert call.scope == (None if scope is None else scope.encode())
     +
    -+    # Make sure we cleaned up after ourselves.
    ++    # Make sure we clean up after ourselves when the connection is finished.
    ++    client.check_completed()
     +    assert cleanup_calls == 1
     +
     +
 8:  185f9902fd !  6:  e6c5d94682 XXX temporary patches to build and test
    @@ Metadata
      ## Commit message ##
         XXX temporary patches to build and test
     
    -    - the new pg_combinebackup utility uses JSON in the frontend without
    -      0001; has something changed?
         - construct 2.10.70 has some incompatibilities with the current tests
         - temporarily skip the exit check (from Daniel Gustafsson); this needs
           to be turned into an exception for curl rather than a plain exit call
     
    - ## src/bin/pg_combinebackup/Makefile ##
    -@@ src/bin/pg_combinebackup/Makefile: include $(top_builddir)/src/Makefile.global
    - 
    - override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
    - LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
    -+# TODO: fix this properly
    -+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
    - 
    - OBJS = \
    - 	$(WIN32RES) \
    -@@ src/bin/pg_combinebackup/Makefile: OBJS = \
    - 
    - all: pg_combinebackup
    - 
    --pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
    --	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(libpq_pgport) $(LIBS) -o $@$(X)
    -+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
    -+	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
    - 
    - install: all installdirs
    - 	$(INSTALL_PROGRAM) pg_combinebackup$(X) '$(DESTDIR)$(bindir)/pg_combinebackup$(X)'
    -
    - ## src/bin/pg_combinebackup/meson.build ##
    -@@ src/bin/pg_combinebackup/meson.build: endif
    - 
    - pg_combinebackup = executable('pg_combinebackup',
    -   pg_combinebackup_sources,
    --  dependencies: [frontend_code],
    -+  # XXX linking against libpq isn't good, but how was JSON working?
    -+  dependencies: [frontend_code, libpq],
    -   kwargs: default_bin_args,
    - )
    - bin_targets += pg_combinebackup
    -
    - ## src/bin/pg_verifybackup/Makefile ##
    -@@ src/bin/pg_verifybackup/Makefile: top_builddir = ../../..
    - include $(top_builddir)/src/Makefile.global
    - 
    - # We need libpq only because fe_utils does.
    --LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
    -+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
    - 
    - OBJS = \
    - 	$(WIN32RES) \
    -
      ## src/interfaces/libpq/Makefile ##
     @@ src/interfaces/libpq/Makefile: libpq-refs-stamp: $(shlib)
      ifneq ($(enable_coverage), yes)
 9:  c4d850a7c4 =  7:  decc90579a WIP: Python OAuth provider implementation
Attachment: v21-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 27251b63c1ee4288b6974b16d70085ee558a74b0 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v21 3/7] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external program: the oauth_validator_command.
This command must do the following:

1. Receive the bearer token by reading its contents from a file
   descriptor passed from the server. (The numeric value of this
   descriptor may be inserted into the oauth_validator_command using the
   %f specifier.)

   This MUST be the first action the command performs. The server will
   not begin reading stdout from the command until the token has been
   read in full, so if the command tries to print anything and hits a
   buffer limit, the backend will deadlock and time out.

2. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The command MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the command must exit with a
   non-zero status. Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

3. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The command should print the
      authenticated identity string to stdout, followed by a newline.

      If the user cannot be authenticated, the validator should not
      print anything to stdout. It should also exit with a non-zero
      status, unless the token may be used to authorize the connection
      through some other means (see below).

      On success, the command may then exit with a zero status code.
      By default, the server will then check to make sure the identity
      string matches the role that is being used (or matches a usermap
      entry, if one is in use).

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below), the validator simply
      returns a zero exit code if the client should be allowed to
      connect with its presented role (which can be passed to the
      command using the %r specifier), or a non-zero code otherwise.

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the command may print
      the authenticated ID and then fail with a non-zero exit code.
      (This makes it easier to see what's going on in the Postgres
      logs.)

4. Token validators may optionally log to stderr. This will be printed
   verbatim into the Postgres server logs.
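
As an illustration only (not part of this patch), the steps above could
be sketched as a minimal validator in Python. The token value and
identity match the mock provider used in the test suite; a real
validator would perform cryptographic verification or introspection
instead. The file descriptor is assumed to arrive as the first argument
via the %f specifier:

```python
#!/usr/bin/env python3
"""Hypothetical oauth_validator_command sketch. DO NOT USE IN PRODUCTION."""
import os
import sys


def validate_token(token):
    # Step 2 stand-in: a real validator would check the token's
    # signature or present it to the issuer for introspection.
    if token == "9243959234":  # the mock provider's token from the tests
        return "postgresuser"  # trusted identifier for the end user
    return None


def main():
    # Step 1: read the bearer token from the descriptor passed via the
    # %f specifier, before writing anything to stdout, to avoid the
    # deadlock described above.
    with os.fdopen(int(sys.argv[1]), "r") as f:
        token = f.read().strip()

    identity = validate_token(token)
    if identity is None:
        sys.exit(1)  # untrusted token: further checks are pointless

    # Step 3a: print the authenticated identity, newline-terminated.
    print(identity)
    sys.exit(0)


# Guarded so the module can also be imported without side effects.
if __name__ == "__main__" and len(sys.argv) == 2:
    main()
```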

The oauth method supports the following HBA options (though note that
the first two are required, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
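
Taken together, a pg_hba.conf entry for this method might look like the
following (hypothetical database/address values; the issuer is one of
the real-world examples above):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://accounts.google.com" scope="openid email"
```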

Unlike the client, servers support OAuth without needing to be built
against libiddawc (since the responsibility for "speaking" OAuth/OIDC
correctly is delegated entirely to the oauth_validator_command).

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- overhaul the communication with oauth_validator_command, which is
  currently a bad hack on OpenPipeStream()
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- deal with role names that can't be safely passed to system() without
  shell-escaping
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/backend/libpq/Makefile          |   1 +
 src/backend/libpq/auth-oauth.c      | 883 ++++++++++++++++++++++++++++
 src/backend/libpq/auth-sasl.c       |  10 +-
 src/backend/libpq/auth-scram.c      |   4 +-
 src/backend/libpq/auth.c            |  26 +-
 src/backend/libpq/hba.c             |  31 +-
 src/backend/libpq/meson.build       |   1 +
 src/backend/utils/misc/guc_tables.c |  12 +
 src/include/libpq/auth.h            |  17 +
 src/include/libpq/hba.h             |   6 +-
 src/include/libpq/oauth.h           |  24 +
 src/include/libpq/sasl.h            |  11 +
 src/tools/pgindent/typedefs.list    |   1 +
 13 files changed, 995 insertions(+), 32 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..16596c089a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,883 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *oauth_validator_command;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth, const char **logdetail);
+static bool run_validator_command(Port *port, const char *token);
+static bool check_exit(FILE **fh, const char *command);
+static bool set_cloexec(int fd);
+static bool username_ok_for_shell(const char *username);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth, logdetail))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: note that escaping here should be belt-and-suspenders, since
+	 * escapable characters aren't valid in either the issuer URI or the scope
+	 * list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+static bool
+validate(Port *port, const char *auth, const char **logdetail)
+{
+	static const char *const b64_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	const char *token;
+	size_t		span;
+	int			ret;
+
+	/* TODO: handle logdetail when the test framework can check it */
+
+	/*-----
+	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+	 * 2.1:
+	 *
+	 *      b64token    = 1*( ALPHA / DIGIT /
+	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+	 *      credentials = "Bearer" 1*SP b64token
+	 *
+	 * The "credentials" construction is what we receive in our auth value.
+	 *
+	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
+	 * it's pointed out in RFC 7628 Sec. 4.)
+	 *
+	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+	 */
+	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return false;
+
+	/* Pull the bearer token out of the auth value. */
+	token = auth + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/*
+	 * Before invoking the validator command, sanity-check the token format to
+	 * avoid any injection attacks later in the chain. Invalid formats are
+	 * technically a protocol violation, but don't reflect any information
+	 * about the sensitive Bearer token back to the client; log at COMMERROR
+	 * instead.
+	 */
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is empty.")));
+		return false;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return false;
+	}
+
+	/* Have the validator check the token. */
+	if (!run_validator_command(port, token))
+		return false;
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (!MyClientConnectionInfo.authn_id)
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	ret = check_usermap(port->hba->usermap, port->user_name,
+						MyClientConnectionInfo.authn_id, false);
+	return (ret == STATUS_OK);
+}
+
+static bool
+run_validator_command(Port *port, const char *token)
+{
+	bool		success = false;
+	int			rc;
+	int			pipefd[2];
+	int			rfd = -1;
+	int			wfd = -1;
+
+	StringInfoData command = {0};
+	char	   *p;
+	FILE	   *fh = NULL;
+
+	ssize_t		written;
+	char	   *line = NULL;
+	size_t		size = 0;
+	ssize_t		len;
+
+	Assert(oauth_validator_command);
+
+	if (!oauth_validator_command[0])
+	{
+		ereport(COMMERROR,
+				(errmsg("oauth_validator_command is not set"),
+				 errhint("To allow OAuth authenticated connections, set "
+						 "oauth_validator_command in postgresql.conf.")));
+		return false;
+	}
+
+	/*------
+	 * Since popen() is unidirectional, open up a pipe for the other
+	 * direction. Use CLOEXEC to ensure that our write end doesn't
+	 * accidentally get copied into child processes, which would prevent us
+	 * from closing it cleanly.
+	 *
+	 * XXX this is ugly. We should just read from the child process's stdout,
+	 * but that's a lot more code.
+	 * XXX by bypassing the popen API, we open the potential of process
+	 * deadlock. Clearly document child process requirements (i.e. the child
+	 * MUST read all data off of the pipe before writing anything).
+	 * TODO: port to Windows using _pipe().
+	 */
+	rc = pipe(pipefd);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not create child pipe: %m")));
+		return false;
+	}
+
+	rfd = pipefd[0];
+	wfd = pipefd[1];
+
+	if (!set_cloexec(wfd))
+	{
+		/* error message was already logged */
+		goto cleanup;
+	}
+
+	/*----------
+	 * Construct the command, substituting any recognized %-specifiers:
+	 *
+	 *   %f: the file descriptor of the input pipe
+	 *   %r: the role that the client wants to assume (port->user_name)
+	 *   %%: a literal '%'
+	 */
+	initStringInfo(&command);
+
+	for (p = oauth_validator_command; *p; p++)
+	{
+		if (p[0] == '%')
+		{
+			switch (p[1])
+			{
+				case 'f':
+					appendStringInfo(&command, "%d", rfd);
+					p++;
+					break;
+				case 'r':
+
+					/*
+					 * TODO: decide how this string should be escaped. The
+					 * role is controlled by the client, so if we don't escape
+					 * it, command injections are inevitable.
+					 *
+					 * This is probably an indication that the role name needs
+					 * to be communicated to the validator process in some
+					 * other way. For this proof of concept, just be
+					 * incredibly strict about the characters that are allowed
+					 * in user names.
+					 */
+					if (!username_ok_for_shell(port->user_name))
+						goto cleanup;
+
+					appendStringInfoString(&command, port->user_name);
+					p++;
+					break;
+				case '%':
+					appendStringInfoChar(&command, '%');
+					p++;
+					break;
+				default:
+					appendStringInfoChar(&command, p[0]);
+			}
+		}
+		else
+			appendStringInfoChar(&command, p[0]);
+	}
+
+	/* Execute the command. */
+	fh = OpenPipeStream(command.data, "r");
+	if (!fh)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("opening pipe to OAuth validator: %m")));
+		goto cleanup;
+	}
+
+	/* We don't need the read end of the pipe anymore. */
+	close(rfd);
+	rfd = -1;
+
+	/* Give the command the token to validate. */
+	written = write(wfd, token, strlen(token));
+	if (written != strlen(token))
+	{
+		/* TODO must loop for short writes, EINTR et al */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not write token to child pipe: %m")));
+		goto cleanup;
+	}
+
+	close(wfd);
+	wfd = -1;
+
+	/*-----
+	 * Read the command's response.
+	 *
+	 * TODO: getline() is probably too new to use, unfortunately.
+	 * TODO: loop over all lines
+	 */
+	if ((len = getline(&line, &size, fh)) >= 0)
+	{
+		/* TODO: fail if the authn_id doesn't end with a newline */
+		if (len > 0)
+			line[len - 1] = '\0';
+
+		set_authn_id(port, line);
+	}
+	else if (ferror(fh))
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not read from command \"%s\": %m",
+						command.data)));
+		goto cleanup;
+	}
+
+	/* Make sure the command exits cleanly. */
+	if (!check_exit(&fh, command.data))
+	{
+		/* error message already logged */
+		goto cleanup;
+	}
+
+	/* Done. */
+	success = true;
+
+cleanup:
+	if (line)
+		free(line);
+
+	/*
+	 * In the successful case, the pipe fds are already closed. For the error
+	 * case, always close out the pipe before waiting for the command, to
+	 * prevent deadlock.
+	 */
+	if (rfd >= 0)
+		close(rfd);
+	if (wfd >= 0)
+		close(wfd);
+
+	if (fh)
+	{
+		Assert(!success);
+		check_exit(&fh, command.data);
+	}
+
+	if (command.data)
+		pfree(command.data);
+
+	return success;
+}
+
+static bool
+check_exit(FILE **fh, const char *command)
+{
+	int			rc;
+
+	rc = ClosePipeStream(*fh);
+	*fh = NULL;
+
+	if (rc == -1)
+	{
+		/* pclose() itself failed. */
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not close pipe to command \"%s\": %m",
+						command)));
+	}
+	else if (rc != 0)
+	{
+		char	   *reason = wait_result_to_str(rc);
+
+		ereport(COMMERROR,
+				(errmsg("failed to execute command \"%s\": %s",
+						command, reason)));
+
+		pfree(reason);
+	}
+
+	return (rc == 0);
+}
+
+static bool
+set_cloexec(int fd)
+{
+	int			flags;
+	int			rc;
+
+	flags = fcntl(fd, F_GETFD);
+	if (flags == -1)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not get fd flags for child pipe: %m")));
+		return false;
+	}
+
+	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
+	if (rc < 0)
+	{
+		ereport(COMMERROR,
+				(errcode_for_file_access(),
+				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * XXX This should go away eventually and be replaced with either a proper
+ * escape or a different strategy for communication with the validator command.
+ */
+static bool
+username_ok_for_shell(const char *username)
+{
+	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
+	static const char *const allowed =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-_./:";
+	size_t		span;
+
+	Assert(username && username[0]);	/* should have already been checked */
+
+	span = strspn(username, allowed);
+	if (username[span] != '\0')
+	{
+		ereport(COMMERROR,
+				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
+		return false;
+	}
+
+	return true;
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 4161959914..486a34e719 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index d506c3c0b7..e592bedf9f 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 1e71e7db4a..773a75c96f 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4684,6 +4685,17 @@ struct config_string ConfigureNamesString[] =
 		check_standby_slot_names, assign_standby_slot_names, NULL
 	},
 
+	{
+		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&oauth_validator_command,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..5edab3b25a
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern char *oauth_validator_command;
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than
+ * this, but this limit leaves some headroom.
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f535fbbd5c..86b1cc6e5c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3564,6 +3564,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

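For reference, server-side setup from the patch above might look roughly like
the sketch below. The HBA option names ("issuer", "scope") are assumptions
inferred from the new HbaLine fields (oauth_issuer, oauth_scope,
oauth_skip_usermap), not a documented syntax, and the validator path is
hypothetical:

```
# postgresql.conf -- oauth_validator_command is a SIGHUP-reloadable,
# superuser-only GUC per the guc_tables.c hunk above. The command itself
# is site-specific; this path is purely illustrative.
oauth_validator_command = '/usr/local/bin/pg_oauth_validate'

# pg_hba.conf -- hypothetical line using the new "oauth" method (uaOAuth)
host  all  all  0.0.0.0/0  oauth  issuer="https://oauth.example.org"  scope="openid"
```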
Attachment: v21-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From b754a2f8fb918921e32cbf2b59ac823ae01fc301 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v21 2/7] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 configure                                 |  110 ++
 configure.ac                              |   28 +
 meson.build                               |   29 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   10 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 1982 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 +++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +-
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 23 files changed, 3200 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 36feeafbb2..d785ca846b 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -858,6 +859,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8485,6 +8488,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13037,6 +13086,56 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14062,6 +14161,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 57f734879e..1ec0712ed5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1443,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1638,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index c8fdfeb0ec..34a0226ccf 100644
--- a/meson.build
+++ b/meson.build
@@ -840,6 +840,33 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2845,6 +2872,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3446,6 +3474,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 249ecc5ffd..3248b9cc1c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b3f8c24e0..79b3647834 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 591e1ca3df..399578cebb 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -246,6 +246,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -714,6 +717,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fe2af575c5..2618c293af 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 9fbd3d3407..9946764c4a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -202,3 +202,6 @@ PQcancelSocket            199
 PQcancelErrorMessage      200
 PQcancelReset             201
 PQcancelFinish            202
+PQsetAuthDataHook         203
+PQgetAuthDataHook         204
+PQdefaultAuthDataHook     205
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..0504f96e4e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1982 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	const char *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* TODO: handle default interval of 5 seconds */
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not
+ * needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data	*curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+			/* FALLTHROUGH */
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					if (err->error)
+						appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+					else
+						appendPQExpBufferStr(&actx->errbuf,
+											 "(no error code received)");
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..66ee8ff076
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 81ec08485d..9cd5c8cfb1 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -955,12 +997,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1118,7 +1166,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1135,7 +1183,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1451,3 +1500,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 01e49c6975..714594324e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -359,6 +359,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -616,6 +633,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2595,6 +2613,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3603,6 +3622,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3758,6 +3778,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -3839,7 +3869,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3872,6 +3912,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4377,6 +4452,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4493,6 +4569,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6975,6 +7056,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f2fc78a481..663b1c1acf 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1039,10 +1039,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1059,7 +1062,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 09b485bd2b..454ec8236b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -80,8 +82,10 @@ typedef enum
 	CONNECTION_CHECK_TARGET,	/* Internal state: checking target server
 								 * properties. */
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
-	CONNECTION_ALLOCATED		/* Waiting for connection attempt to be
+	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -163,6 +167,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -684,10 +695,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 9c05f11a6e..5f3a0b00db 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -351,6 +351,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -413,6 +415,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -481,6 +492,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index be6fadaea2..0d4b7ac17d 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index b0f4178b3d..f803c1200b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -231,6 +231,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e2a0525dd4..f535fbbd5c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -354,6 +355,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1669,6 +1672,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1734,6 +1738,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1894,11 +1899,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3364,6 +3372,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v21-0004-Introduce-OAuth-validator-libraries.patch (application/octet-stream)

From 78617fa9ba7af77b64ec9c821d6a7a3cab0e38ef Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 21 Feb 2024 17:04:26 +0100
Subject: [PATCH v21 4/7] Introduce OAuth validator libraries

This replaces the server-side validation code with a module API for
loading extensions that validate bearer tokens. A lot of code is
left to be written.

Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
---
 src/backend/libpq/auth-oauth.c                | 431 +++++-------------
 src/backend/utils/misc/guc_tables.c           |   6 +-
 src/common/Makefile                           |   2 +-
 src/include/libpq/oauth.h                     |  29 +-
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  19 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  33 ++
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  78 ++++
 src/test/modules/oauth_validator/validator.c  |  82 ++++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 183 ++++++++
 src/tools/pgindent/typedefs.list              |   2 +
 15 files changed, 560 insertions(+), 331 deletions(-)
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 16596c089a..024f304e4d 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -6,7 +6,7 @@
  * See the following RFC for more details:
  * - RFC 7628: https://tools.ietf.org/html/rfc7628
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/libpq/auth-oauth.c
@@ -19,22 +19,30 @@
 #include <fcntl.h>
 
 #include "common/oauth-common.h"
+#include "fmgr.h"
 #include "lib/stringinfo.h"
 #include "libpq/auth.h"
 #include "libpq/hba.h"
 #include "libpq/oauth.h"
 #include "libpq/sasl.h"
 #include "storage/fd.h"
+#include "storage/ipc.h"
 #include "utils/json.h"
 
 /* GUC */
-char	   *oauth_validator_command;
+char	   *OAuthValidatorLibrary = "";
 
 static void oauth_get_mechanisms(Port *port, StringInfo buf);
 static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
 static int	oauth_exchange(void *opaq, const char *input, int inputlen,
 						   char **output, int *outputlen, const char **logdetail);
 
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
 /* Mechanism declaration */
 const pg_be_sasl_mech pg_be_oauth_mech = {
 	oauth_get_mechanisms,
@@ -63,11 +71,7 @@ struct oauth_ctx
 static char *sanitize_char(char c);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
-static bool validate(Port *port, const char *auth, const char **logdetail);
-static bool run_validator_command(Port *port, const char *token);
-static bool check_exit(FILE **fh, const char *command);
-static bool set_cloexec(int fd);
-static bool username_ok_for_shell(const char *username);
+static bool validate(Port *port, const char *auth);
 
 #define KVSEP 0x01
 #define AUTH_KEY "auth"
@@ -100,6 +104,8 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 	ctx->issuer = port->hba->oauth_issuer;
 	ctx->scope = port->hba->oauth_scope;
 
+	load_validator_library();
+
 	return ctx;
 }
 
@@ -250,7 +256,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				 errmsg("malformed OAUTHBEARER message"),
 				 errdetail("Message contains additional data after the final terminator.")));
 
-	if (!validate(ctx->port, auth, logdetail))
+	if (!validate(ctx->port, auth))
 	{
 		generate_error_response(ctx, output, outputlen);
 
@@ -489,70 +495,73 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	*outputlen = buf.len;
 }
 
-static bool
-validate(Port *port, const char *auth, const char **logdetail)
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
 {
-	static const char *const b64_set =
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
 		"abcdefghijklmnopqrstuvwxyz"
 		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 		"0123456789-._~+/";
 
-	const char *token;
-	size_t		span;
-	int			ret;
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
 
-	/* TODO: handle logdetail when the test framework can check it */
-
-	/*-----
-	 * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
-	 * 2.1:
-	 *
-	 *      b64token    = 1*( ALPHA / DIGIT /
-	 *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
-	 *      credentials = "Bearer" 1*SP b64token
-	 *
-	 * The "credentials" construction is what we receive in our auth value.
-	 *
-	 * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
-	 * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
-	 * compared case-insensitively. (This is not mentioned in RFC 6750, but
-	 * it's pointed out in RFC 7628 Sec. 4.)
-	 *
-	 * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
-	 */
-	if (pg_strncasecmp(auth, BEARER_SCHEME, strlen(BEARER_SCHEME)))
-		return false;
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
 
 	/* Pull the bearer token out of the auth value. */
-	token = auth + strlen(BEARER_SCHEME);
+	token = header + strlen(BEARER_SCHEME);
 
 	/* Swallow any additional spaces. */
 	while (*token == ' ')
 		token++;
 
-	/*
-	 * Before invoking the validator command, sanity-check the token format to
-	 * avoid any injection attacks later in the chain. Invalid formats are
-	 * technically a protocol violation, but don't reflect any information
-	 * about the sensitive Bearer token back to the client; log at COMMERROR
-	 * instead.
-	 */
-
 	/* Tokens must not be empty. */
 	if (!*token)
 	{
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token"),
 				 errdetail("Bearer token is empty.")));
-		return false;
+		return NULL;
 	}
 
 	/*
 	 * Make sure the token contains only allowed characters. Tokens may end
 	 * with any number of '=' characters.
 	 */
-	span = strspn(token, b64_set);
+	span = strspn(token, b64token_allowed_set);
 	while (token[span] == '=')
 		span++;
 
@@ -565,15 +574,35 @@ validate(Port *port, const char *auth, const char **logdetail)
 		 */
 		ereport(COMMERROR,
 				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
+				 errmsg("malformed OAuth bearer token"),
 				 errdetail("Bearer token is not in the correct format.")));
-		return false;
+		return NULL;
 	}
 
-	/* Have the validator check the token. */
-	if (!run_validator_command(port, token))
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
 		return false;
 
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
 	if (port->hba->oauth_skip_usermap)
 	{
 		/*
@@ -586,7 +615,7 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Make sure the validator authenticated the user. */
-	if (!MyClientConnectionInfo.authn_id)
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
 		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
@@ -596,288 +625,42 @@ validate(Port *port, const char *auth, const char **logdetail)
 	}
 
 	/* Finally, check the user map. */
-	ret = check_usermap(port->hba->usermap, port->user_name,
-						MyClientConnectionInfo.authn_id, false);
-	return (ret == STATUS_OK);
-}
-
-static bool
-run_validator_command(Port *port, const char *token)
-{
-	bool		success = false;
-	int			rc;
-	int			pipefd[2];
-	int			rfd = -1;
-	int			wfd = -1;
-
-	StringInfoData command = {0};
-	char	   *p;
-	FILE	   *fh = NULL;
-
-	ssize_t		written;
-	char	   *line = NULL;
-	size_t		size = 0;
-	ssize_t		len;
-
-	Assert(oauth_validator_command);
-
-	if (!oauth_validator_command[0])
-	{
-		ereport(COMMERROR,
-				(errmsg("oauth_validator_command is not set"),
-				 errhint("To allow OAuth authenticated connections, set "
-						 "oauth_validator_command in postgresql.conf.")));
-		return false;
-	}
-
-	/*------
-	 * Since popen() is unidirectional, open up a pipe for the other
-	 * direction. Use CLOEXEC to ensure that our write end doesn't
-	 * accidentally get copied into child processes, which would prevent us
-	 * from closing it cleanly.
-	 *
-	 * XXX this is ugly. We should just read from the child process's stdout,
-	 * but that's a lot more code.
-	 * XXX by bypassing the popen API, we open the potential of process
-	 * deadlock. Clearly document child process requirements (i.e. the child
-	 * MUST read all data off of the pipe before writing anything).
-	 * TODO: port to Windows using _pipe().
-	 */
-	rc = pipe(pipefd);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not create child pipe: %m")));
-		return false;
-	}
-
-	rfd = pipefd[0];
-	wfd = pipefd[1];
-
-	if (!set_cloexec(wfd))
-	{
-		/* error message was already logged */
-		goto cleanup;
-	}
-
-	/*----------
-	 * Construct the command, substituting any recognized %-specifiers:
-	 *
-	 *   %f: the file descriptor of the input pipe
-	 *   %r: the role that the client wants to assume (port->user_name)
-	 *   %%: a literal '%'
-	 */
-	initStringInfo(&command);
-
-	for (p = oauth_validator_command; *p; p++)
-	{
-		if (p[0] == '%')
-		{
-			switch (p[1])
-			{
-				case 'f':
-					appendStringInfo(&command, "%d", rfd);
-					p++;
-					break;
-				case 'r':
-
-					/*
-					 * TODO: decide how this string should be escaped. The
-					 * role is controlled by the client, so if we don't escape
-					 * it, command injections are inevitable.
-					 *
-					 * This is probably an indication that the role name needs
-					 * to be communicated to the validator process in some
-					 * other way. For this proof of concept, just be
-					 * incredibly strict about the characters that are allowed
-					 * in user names.
-					 */
-					if (!username_ok_for_shell(port->user_name))
-						goto cleanup;
-
-					appendStringInfoString(&command, port->user_name);
-					p++;
-					break;
-				case '%':
-					appendStringInfoChar(&command, '%');
-					p++;
-					break;
-				default:
-					appendStringInfoChar(&command, p[0]);
-			}
-		}
-		else
-			appendStringInfoChar(&command, p[0]);
-	}
-
-	/* Execute the command. */
-	fh = OpenPipeStream(command.data, "r");
-	if (!fh)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("opening pipe to OAuth validator: %m")));
-		goto cleanup;
-	}
-
-	/* We don't need the read end of the pipe anymore. */
-	close(rfd);
-	rfd = -1;
-
-	/* Give the command the token to validate. */
-	written = write(wfd, token, strlen(token));
-	if (written != strlen(token))
-	{
-		/* TODO must loop for short writes, EINTR et al */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not write token to child pipe: %m")));
-		goto cleanup;
-	}
-
-	close(wfd);
-	wfd = -1;
-
-	/*-----
-	 * Read the command's response.
-	 *
-	 * TODO: getline() is probably too new to use, unfortunately.
-	 * TODO: loop over all lines
-	 */
-	if ((len = getline(&line, &size, fh)) >= 0)
-	{
-		/* TODO: fail if the authn_id doesn't end with a newline */
-		if (len > 0)
-			line[len - 1] = '\0';
-
-		set_authn_id(port, line);
-	}
-	else if (ferror(fh))
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not read from command \"%s\": %m",
-						command.data)));
-		goto cleanup;
-	}
-
-	/* Make sure the command exits cleanly. */
-	if (!check_exit(&fh, command.data))
-	{
-		/* error message already logged */
-		goto cleanup;
-	}
-
-	/* Done. */
-	success = true;
-
-cleanup:
-	if (line)
-		free(line);
-
-	/*
-	 * In the successful case, the pipe fds are already closed. For the error
-	 * case, always close out the pipe before waiting for the command, to
-	 * prevent deadlock.
-	 */
-	if (rfd >= 0)
-		close(rfd);
-	if (wfd >= 0)
-		close(wfd);
-
-	if (fh)
-	{
-		Assert(!success);
-		check_exit(&fh, command.data);
-	}
-
-	if (command.data)
-		pfree(command.data);
-
-	return success;
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
 }
 
-static bool
-check_exit(FILE **fh, const char *command)
+static void
+load_validator_library(void)
 {
-	int			rc;
-
-	rc = ClosePipeStream(*fh);
-	*fh = NULL;
-
-	if (rc == -1)
-	{
-		/* pclose() itself failed. */
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not close pipe to command \"%s\": %m",
-						command)));
-	}
-	else if (rc != 0)
-	{
-		char	   *reason = wait_result_to_str(rc);
+	OAuthValidatorModuleInit validator_init;
 
-		ereport(COMMERROR,
-				(errmsg("failed to execute command \"%s\": %s",
-						command, reason)));
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
 
-		pfree(reason);
-	}
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
 
-	return (rc == 0);
-}
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
 
-static bool
-set_cloexec(int fd)
-{
-	int			flags;
-	int			rc;
+	ValidatorCallbacks = (*validator_init) ();
 
-	flags = fcntl(fd, F_GETFD);
-	if (flags == -1)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not get fd flags for child pipe: %m")));
-		return false;
-	}
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
 
-	rc = fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
-	if (rc < 0)
-	{
-		ereport(COMMERROR,
-				(errcode_for_file_access(),
-				 errmsg("could not set FD_CLOEXEC for child pipe: %m")));
-		return false;
-	}
-
-	return true;
+	before_shmem_exit(shutdown_validator_library, 0);
 }
 
-/*
- * XXX This should go away eventually and be replaced with either a proper
- * escape or a different strategy for communication with the validator command.
- */
-static bool
-username_ok_for_shell(const char *username)
+static void
+shutdown_validator_library(int code, Datum arg)
 {
-	/* This set is borrowed from fe_utils' appendShellStringNoError(). */
-	static const char *const allowed =
-		"abcdefghijklmnopqrstuvwxyz"
-		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
-		"0123456789-_./:";
-	size_t		span;
-
-	Assert(username && username[0]);	/* should have already been checked */
-
-	span = strspn(username, allowed);
-	if (username[span] != '\0')
-	{
-		ereport(COMMERROR,
-				(errmsg("PostgreSQL user name contains unsafe characters and cannot be passed to the OAuth validator")));
-		return false;
-	}
-
-	return true;
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
 }
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 773a75c96f..62d4f15d2b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -4686,12 +4686,12 @@ struct config_string ConfigureNamesString[] =
 	},
 
 	{
-		{"oauth_validator_command", PGC_SIGHUP, CONN_AUTH_AUTH,
-			gettext_noop("Command to validate OAuth v2 bearer tokens."),
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
 			NULL,
 			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
 		},
-		&oauth_validator_command,
+		&OAuthValidatorLibrary,
 		"",
 		NULL, NULL, NULL
 	},
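For reference, a server running this patch set would be configured roughly as follows. This is a sketch based on the test setup further down in the series: 'validator' is the test module name, and the issuer/scope values are the ones used by the TAP test, not requirements.

```
# postgresql.conf (sketch; 'validator' is the name of the test module)
shared_preload_libraries = 'validator'
oauth_validator_library = 'validator'

# pg_hba.conf (issuer and scope values are illustrative)
local all test oauth issuer="127.0.0.1:18080" scope="openid postgres"
```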
diff --git a/src/common/Makefile b/src/common/Makefile
index c4f4448e2f..98b8dac0b4 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 5edab3b25a..6f98e84cc9 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -3,7 +3,7 @@
  * oauth.h
  *	  Interface to libpq/auth-oauth.c
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/oauth.h
@@ -16,7 +16,32 @@
 #include "libpq/libpq-be.h"
 #include "libpq/sasl.h"
 
-extern char *oauth_validator_command;
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
 /* Implementation */
 extern const pg_be_sasl_mech pg_be_oauth_mech;
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 7c11fb97f2..a08105e0c2 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..1f874cd7f2
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,19 @@
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..d9c1d1d577
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..14c7778298
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,78 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+
+my $issuer = "127.0.0.1:18080";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test    oauth issuer="$issuer"           scope="openid postgres"
+local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->reload;
+
+my $webserver = PostgreSQL::Test::OAuthServer->new(18080);
+
+my $port = $webserver->port();
+
+is($port, 18080, "Port is 18080");
+
+$webserver->setup();
+$webserver->run();
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+my $user = "test";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234", role="$user"/,
+					 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="test" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+				  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+					 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="testalt" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..09a4bf61d2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+				state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index b08296605c..4b7c93f19d 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2299,6 +2299,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2342,7 +2347,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..5c195efb79
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,183 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+	my $port = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	$self->{'port'} = $port;
+
+	return $self;
+}
+
+sub setup
+{
+	my $self = shift;
+	my $tcp = getprotobyname('tcp');
+
+	socket($self->{'socket'}, PF_INET, SOCK_STREAM, $tcp)
+		or die "no socket";
+	setsockopt($self->{'socket'}, SOL_SOCKET, SO_REUSEADDR, pack("l", 1));
+	bind($self->{'socket'}, sockaddr_in($self->{'port'}, INADDR_ANY));
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+
+	my $server_thread = threads->create(\&_listen, $self);
+	$server_thread->detach();
+}
+
+sub _listen
+{
+	my $self = shift;
+
+	listen($self->{'socket'}, SOMAXCONN) or die "failed to listen: $!";
+
+	while (1)
+	{
+		my $fh;
+		my %request;
+		my $remote = accept($fh, $self->{'socket'});
+		binmode $fh;
+
+		my ($method, $object, $prot) = split(/ /, <$fh>);
+		$request{'method'} = $method;
+		$request{'object'} = $object;
+		chomp($request{'object'});
+
+		local $/ = Socket::CRLF;
+		my $c = 0;
+		while(<$fh>)
+		{
+			chomp;
+			# Headers
+			if (/:/)
+			{
+				my ($field, $value) = split(/:/, $_, 2);
+				$value =~ s/^\s+//;
+				$request{'headers'}{lc $field} = $value;
+			}
+			# POST data
+			elsif (/^$/)
+			{
+				read($fh, $request{'content'}, $request{'headers'}{'content-length'})
+					if defined $request{'headers'}{'content-length'};
+				last;
+			}
+		}
+
+		# Debug printing
+		# print ": read " . $request{'method'} . ";" . $request{'object'} . ";\n";
+		# foreach my $h (keys(%{$request{'headers'}}))
+		# {
+		#	printf ": headers: " . $request{'headers'}{$h} . "\n";
+		# }
+		# printf ": POST: " . $request{'content'} . "\n" if defined $request{'content'};
+
+		my $alternate = 0;
+		if ($request{'object'} =~ qr|^/alternate(/.*)$|)
+		{
+			$alternate = 1;
+			$request{'object'} = $1;
+		}
+
+		if ($request{'object'} eq '/.well-known/openid-configuration')
+		{
+			my $issuer = "http://localhost:$self->{'port'}";
+			if ($alternate)
+			{
+				$issuer .= "/alternate";
+			}
+
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"issuer": "$issuer",
+				"token_endpoint": "$issuer/token",
+				"device_authorization_endpoint": "$issuer/authorize",
+				"response_types_supported": ["token"],
+				"subject_types_supported": ["public"],
+				"id_token_signing_alg_values_supported": ["RS256"],
+				"grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"]
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/authorize')
+		{
+			my $uri = "https://example.com/";
+			if ($alternate)
+			{
+				$uri = "https://example.org/";
+			}
+
+			print ": returning device_code\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"device_code": "postgres",
+				"user_code" : "postgresuser",
+				"interval" : 0,
+				"verification_uri" : "$uri",
+				"expires_in": 5
+			}
+EOR
+		}
+		elsif ($request{'object'} eq '/token')
+		{
+			my $token = "9243959234";
+			if ($alternate)
+			{
+				$token .= "-alt";
+			}
+
+			print ": returning token\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: application/json\r\n";
+			print $fh "\r\n";
+			print $fh <<EOR;
+			{
+				"access_token": "$token",
+				"token_type": "bearer"
+			}
+EOR
+		}
+		else
+		{
+			print ": returning default\n";
+			print $fh "HTTP/1.0 200 OK\r\nServer: Postgres Regress\r\n";
+			print $fh "Content-Type: text/html\r\n";
+			print $fh "\r\n";
+			print $fh "Ok\n";
+		}
+
+		close($fh);
+	}
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 86b1cc6e5c..0eaa079261 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1673,6 +1673,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3000,6 +3001,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
-- 
2.34.1

Attachment: v21-0001-common-jsonapi-support-libpq-as-a-client.patch (application/octet-stream)
From 557370eabb9e7c8004267dd617ac8936f499b2cf Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v21 1/7] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed rather than exit()ing.
---
 src/bin/pg_combinebackup/Makefile    |   4 +-
 src/bin/pg_combinebackup/meson.build |   2 +-
 src/bin/pg_verifybackup/Makefile     |   2 +-
 src/common/Makefile                  |   2 +-
 src/common/jsonapi.c                 | 173 ++++++++++++++++++++-------
 src/common/meson.build               |   8 +-
 src/include/common/jsonapi.h         |  19 ++-
 7 files changed, 158 insertions(+), 52 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,7 +32,7 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 1d4b9c218f..cab677b574 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,7 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/common/Makefile b/src/common/Makefile
index 3d83299432..c4f4448e2f 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 98d6e66a21..4aeedc0bc5 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,43 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendBinaryStrVal  appendBinaryPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+#define destroyStrVal		destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendBinaryStrVal  appendBinaryStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+#define destroyStrVal		destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -168,9 +201,16 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+	lex->errormsg = NULL;
 
 	return lex;
 }
@@ -185,14 +225,18 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		destroyStrVal(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		destroyStrVal(lex->errormsg);
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -258,7 +302,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -320,14 +364,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -361,8 +412,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -418,6 +473,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -766,8 +826,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -804,7 +871,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -861,19 +928,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -883,22 +950,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -933,7 +1000,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -957,8 +1024,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -974,6 +1041,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1158,19 +1230,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		resetStrVal(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = createStrVal();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	appendStrVal((lex)->errormsg, _(format), \
+				 (int) ((lex)->token_terminator - (lex)->token_start), \
+				 (lex)->token_start);
 
 	switch (error)
 	{
@@ -1181,9 +1259,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -1214,6 +1292,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1245,15 +1326,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 _("unexpected json parse error type: %d"),
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 _("unexpected json parse error type: %d"),
+					 (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index de68e408fa..379b228d86 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -125,13 +125,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -140,6 +145,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -157,7 +163,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -170,7 +175,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 86a0fc2d00..75d444c17a 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -57,6 +56,17 @@ typedef enum JsonParseErrorType
 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -88,8 +98,9 @@ typedef struct JsonLexContext
 	bits32		flags;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
-- 
2.34.1

Attachment: v21-0005-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 1b871542f59a447cd04d2e60e6a4628315a4c94f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v21 5/7] Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |   22 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  137 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1721 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 +++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |    9 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 +++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 +++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5441 insertions(+), 7 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 1adfdfdd45..b7dc66bd14 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl load_balance python
 
 
 # What files to preserve in case tests fail
@@ -163,7 +163,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -223,6 +224,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -235,6 +237,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -310,8 +313,11 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -366,6 +372,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -376,7 +384,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.32-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
@@ -674,8 +682,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/meson.build b/meson.build
index 34a0226ccf..feb159abbf 100644
--- a/meson.build
+++ b/meson.build
@@ -3185,6 +3185,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3346,6 +3349,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all you need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..94f3620af3
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,137 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            self._pump_async(conn)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
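
[Aside, not part of the patch: the `h_i()` helper above can be cross-checked
without the `cryptography` package. RFC 5802 observes that Hi(str, salt, i) is
just PBKDF2 (RFC 2898) with HMAC-SHA-256, so the stdlib's `hashlib` can serve
as an independent oracle. A minimal stdlib-only sketch, using the same salt
and iteration count as test_scram below:]

```python
import hashlib
import hmac


def h_i(data, salt, i):
    """Hi(str, salt, i) from RFC 5802 Section 2.2, stdlib-only."""
    assert i > 0

    # U1 = HMAC(str, salt + INT(1))
    acc = last = hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()

    # Hi = U1 XOR U2 XOR ... XOR Ui
    for _ in range(i - 1):
        last = hmac.new(data, last, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, last))

    return acc


# RFC 5802 notes Hi() is PBKDF2 with HMAC-SHA-256; hashlib agrees:
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```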
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..14fe139e73
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1721 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
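
[Aside, not part of the patch: the wire format that get_auth_value() above
takes apart is the RFC 7628 Section 3.1 initial client response: a GS2 header,
then \x01-separated key/value pairs, terminated by a double \x01. A minimal
round-trip sketch; the token is a made-up placeholder, not a real credential:]

```python
token = b"Bearer abc123"  # placeholder, not a real bearer token

# gs2 header ("n,," = no channel binding, no authzid), one kvpair, \x01\x01 end
initial = b"n,,\x01auth=" + token + b"\x01\x01"

# Splitting on \x01 yields the header, the kvpair, and two empty trailers.
kvpairs = initial.split(b"\x01")
assert kvpairs == [b"n,,", b"auth=" + token, b"", b""]

# maxsplit=1 keeps any "=" inside the token value intact.
key, value = kvpairs[1].split(b"=", 1)
assert key == b"auth"
assert value == token
```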
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider thread did not shut down within the timeout")
+        elif self.exception:
+            raise self.exception
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": "application/json"}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
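+
+# For reference, a C declaration these ctypes definitions are assumed to
+# mirror (paraphrased; consult the patch's libpq headers for the
+# authoritative names, field order, and types, which must match exactly):
+#
+#   typedef struct PQoauthBearerRequest
+#   {
+#       const char *openid_configuration; /* IN: discovery document URL */
+#       const char *scope;                /* IN: requested scopes, if any */
+#       int (*async)(void *conn, struct PQoauthBearerRequest *request,
+#                    int *altsock);       /* returns a PGRES_POLLING_* code */
+#       void (*cleanup)(void *conn, struct PQoauthBearerRequest *request);
+#       char *token;                      /* OUT: the bearer token */
+#       void *user;                       /* opaque user data */
+#   } PQoauthBearerRequest;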
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the return value when no test impl is installed
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept, openid_provider, asynchronous, retries, scope, secret, auth_data_cb
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+def test_oauth_retry_interval(accept, openid_provider, retries, error_code):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": expected_retry_interval,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one, e.g.
+    alt_patterns("foo", "bar") == "(foo)|(bar)". It's not very efficient,
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json_schema()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
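(An aside for reviewers less familiar with the device flow: the `token_endpoint` callbacks above play the provider's half of the RFC 8628 polling loop, returning `authorization_pending` until the final success or error. A minimal sketch of the client side being tested, for orientation only; `post_json` is a hypothetical helper standing in for an HTTP POST returning `(status, parsed_body)`, and the error formatting merely approximates the libpq messages asserted above.)

```python
import time

def poll_for_token(post_json, token_endpoint, device_code, interval=5):
    # post_json is a hypothetical (url, params) -> (status, body) helper.
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": device_code,
    }
    while True:
        status, body = post_json(token_endpoint, params)
        if status == 200:
            return body["access_token"]

        error = body.get("error")
        if error == "authorization_pending":
            # The user hasn't approved the device yet; wait and retry.
            time.sleep(interval)
        elif error == "slow_down":
            # RFC 8628 Sec. 3.5: increase the interval by 5 seconds.
            interval += 5
            time.sleep(interval)
        else:
            # Any other error ends the exchange.
            desc = body.get("error_description", "")
            raise RuntimeError(
                f"failed to obtain access token: {desc} ({error})"
            )
```
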
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to pytest. We add one that requests
+    creation of a temporary Postgres instance for the server tests.
+
+    Per the pytest documentation, this must live in the top-level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
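(For context, the gate above follows the same convention as the TAP tests: PG_TEST_EXTRA is a space-separated list of opt-in test keywords. A standalone sketch of the check, just for illustration:)

```python
import os

def python_tests_enabled(environ=os.environ):
    # Mirrors the conftest.py fixture above: the suite runs only when the
    # 'python' keyword appears in the space-separated PG_TEST_EXTRA list.
    return "python" in environ.get("PG_TEST_EXTRA", "").split()
```
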
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length, by not enforcing
+        # a FixedSized during build. (The len calculation above defaults to the
+        # correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds a translation map for hexdumps. Any unprintable or non-ASCII byte
+    is mapped to '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
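A note on the framing that the Pq3 Construct above encodes: each regular v3 message is one type byte, then a four-byte big-endian length that counts itself plus the payload (but not the type byte), then the payload. A minimal sketch in plain Python, with no Construct dependency (the helper names are illustrative, not part of the patch):

```python
import struct

def build_packet(msg_type: bytes, payload: bytes) -> bytes:
    # The length field covers itself (4 bytes) plus the payload, but not
    # the leading type byte.
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

def parse_packet(buf: bytes):
    msg_type = buf[:1]
    (length,) = struct.unpack("!I", buf[1:5])
    payload = buf[5 : 1 + length]
    return msg_type, payload

# A simple Query message: tag 'Q', NUL-terminated query string.
pkt = build_packet(b"Q", b"SELECT 1;\x00")
assert parse_packet(pkt) == (b"Q", b"SELECT 1;\x00")
```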
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..57ba1ced94
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,9 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+construct~=2.10.61
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
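The conn_factory fixture above ties every per-call socket to the fixture's lifetime via contextlib.ExitStack, so all connections opened during a test are closed together at teardown. The pattern in isolation (Resource is a stand-in for the socket and pq3 wrapper; names here are illustrative only):

```python
import contextlib

class Resource:
    """A minimal context manager standing in for a socket."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.closed = True

def run():
    created = []

    with contextlib.ExitStack() as stack:

        def factory():
            r = Resource()
            stack.enter_context(r)  # cleanup deferred to stack exit
            created.append(r)
            return r

        factory()
        factory()
        # Nothing is closed while the stack is still open.
        assert not any(r.closed for r in created)

    # Leaving the with-block closes everything that was entered.
    return all(r.closed for r in created)

assert run()
```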
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
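The tests below build the OAUTHBEARER client initial response by hand. Per RFC 7628, it is the GS2 header "n,," followed by 0x01-delimited key/value pairs and a terminating double 0x01. A sketch of that framing (the helper name is ours, not the patch's):

```python
def oauthbearer_initial_response(token: str) -> bytes:
    # GS2 header: no channel binding ("n"), empty authzid, then key/value
    # pairs delimited by 0x01 and terminated by an extra 0x01.
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"

resp = oauthbearer_initial_response("abcd1234")
assert resp == b"n,,\x01auth=Bearer abcd1234\x01\x01"
```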
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("the auth and bearer kwargs may not both be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    Any changed settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
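For readers following along, the tests above all exercise the RFC 7628 client initial response: a GS2 header ("n,,"), then key/value pairs delimited by ^A (0x01), terminated by a double ^A. A minimal standalone sketch of that framing (illustrative only, not the patch's server-side parser):

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator (^A)

def build_initial_response(token, authzid=b""):
    """Build an OAUTHBEARER client-first message for a bearer token:
    GS2 header, then an auth key/value pair, then the final terminator."""
    gs2 = b"n," + authzid + b","  # no channel binding, optional authzid
    return gs2 + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP

def parse_initial_response(msg):
    """Split a client-first message back into its GS2 header and a dict
    of key/value pairs, enforcing the final double-^A terminator."""
    gs2, rest = msg.split(KVSEP, 1)
    if not rest.endswith(KVSEP + KVSEP):
        raise ValueError("message did not contain a final terminator")
    kv = {}
    for pair in rest[:-2].split(KVSEP):
        if not pair:
            continue
        key, _, value = pair.partition(b"=")
        kv[key] = value
    return gs2, kv
```

This mirrors why, e.g., a message missing the trailing `\x01\x01` draws the "did not contain a final terminator" error in the cases above.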
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
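For reference, the startup packets that pq3 frames during handshake() use a simple length-prefixed layout: a 4-byte big-endian length that counts itself, a 4-byte protocol word, then the payload. A parser sketch (an assumed reimplementation matching the Startup parse fixtures later in this patch, not pq3's own code):

```python
import struct

def parse_startup(raw):
    """Parse one startup packet from a byte buffer. Returns the declared
    length, the protocol word, the (length - 8)-byte payload, and any
    trailing bytes left over in the buffer."""
    length, proto = struct.unpack_from("!II", raw)  # big-endian u32 pair
    payload = raw[8:length]
    return length, proto, payload, raw[length:]
```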
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
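The expected strings in the tests above follow a fixed hex-dump layout. A standalone sketch of one dump line (an assumed illustration of the format, not the _DebugStream implementation itself):

```python
def hexdump_line(data, offset=0, direction="<"):
    """Render up to 16 bytes as one debug-dump line: a direction marker,
    a 4-digit hex offset, space-separated hex bytes padded out to 16
    columns, and a printable-ASCII rendering ('.' for non-printables)."""
    hexpart = " ".join(f"{b:02x}" for b in data).ljust(16 * 3 - 1)
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in data)
    return f"{direction} {offset:04x}:\t{hexpart}\t{text}\n"
```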
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

Attachment: v21-0006-XXX-temporary-patches-to-build-and-test.patch (application/octet-stream)
From e6c5d94682d601002e029cd28fbdb9d94f2dac0d Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 20 Feb 2024 11:35:29 -0800
Subject: [PATCH v21 6/7] XXX temporary patches to build and test

- construct 2.10.70 has some incompatibilities with the current tests
- temporarily skip the exit check (from Daniel Gustafsson); this needs
  to be turned into an exception for curl rather than a plain exit call
---
 src/interfaces/libpq/Makefile    | 2 +-
 src/test/python/requirements.txt | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 2618c293af..e86d4803ff 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -124,7 +124,7 @@ libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
 	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
-		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
+		echo 'libpq must not be calling any function which invokes exit'; \
 	fi
 endif
 endif
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
index 57ba1ced94..0dfcffb83e 100644
--- a/src/test/python/requirements.txt
+++ b/src/test/python/requirements.txt
@@ -1,7 +1,9 @@
 black
 # cryptography 35.x and later add many platform/toolchain restrictions, beware
 cryptography~=3.4.8
-construct~=2.10.61
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
 isort~=5.6
 # TODO: update to psycopg[c] 3.1
 psycopg2~=2.9.7
-- 
2.34.1

#101Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#100)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 22 Mar 2024, at 19:21, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

v21 is a quick rebase over HEAD, which has adopted a few pieces of
v20. I've also fixed a race condition in the tests.

Thanks for the rebase, I have a few comments from working with it a bit:

In jsonapi.c, makeJsonLexContextCstringLen initializes a JsonLexContext with
palloc0, which would need to be ported over to use ALLOC for frontend code. On
that note, the error handling in parse_oauth_json() for content-type checks
attempts to free the JsonLexContext even before it has been created. Here we
can just return false.

-  echo 'libpq must not be calling any function which invokes exit'; exit 1; \
+  echo 'libpq must not be calling any function which invokes exit'; \
The offending codepath in libcurl was in the NTLM_WB module, a very old and
obscure form of NTLM support which was replaced (yet remained in the tree) a
long time ago by a full NTLM implementation.  Based on the findings in this
thread it was deprecated with a removal date set to April 2024 [0].  A bug in
the 8.4.0 release however disconnected NTLM_WB from the build, and given the
lack of complaints it was decided to leave it as is, so we can base our libcurl
requirement on 8.4.0 while keeping the exit() check intact.

+ else if (strcasecmp(content_type, "application/json") != 0)
This needs to handle parameters as well, since it will now fail if a charset
parameter is appended (which will undoubtedly be pretty common). The easiest
way is probably to just verify the media type and skip the parameters, since we
know the only parameter can be charset?
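For illustration, the comparison being suggested could be sketched like this (Python pseudocode for what would be a C check in the patch; the function name is mine, not from the patchset):

```python
def is_json_media_type(content_type: str) -> bool:
    """Return True if a Content-Type header value names application/json,
    ignoring any parameters such as '; charset=utf-8'."""
    # Parameters follow the media type after a ';', so compare only the
    # part before it, case-insensitively.
    media_type = content_type.split(";", 1)[0].strip()
    return media_type.lower() == "application/json"

# A bare media type and one carrying a charset parameter both match:
assert is_json_media_type("application/json")
assert is_json_media_type("Application/JSON; charset=utf-8")
# Other media types are still rejected:
assert not is_json_media_type("text/html")
```

The strcasecmp() against the full header value fails on the parameterized form, which is what the comment above is pointing out.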

+  /* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+  CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+  CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
CURLOPT_ERRORBUFFER is the old and finicky way of extracting error messages, we
should absolutely move to using CURLOPT_DEBUGFUNCTION instead.

+ /* && response_code != 401 TODO */ )
Why is this marked with a TODO, do you remember?

+ print("# OAuth provider (PID $pid) is listening on port $port\n");
Code running under Test::More needs to use diag() for printing non-test output
like this.

Another issue I have is the sheer size, and the fact that so much code is
replaced by subsequent commits, so I took the liberty to squash some of this
down into something less daunting. The attached v22 retains 0001 and then
condenses the rest into two commits for the frontend and backend parts. I did
drop the Python pytest patch, since I feel it's unlikely to go in from this
thread (adding pytest seems worthy of its own thread and discussion), and its
weight makes this seem scarier than it is. For anyone who wants to use it, it
can easily be applied from the v21 patchset independently. I did tweak the
commit message to match reality a bit better, but there is a lot of work left
there.

The final patch contains fixes for all of the above review comments, as well as
some refactoring, smaller clean-ups, and TODO fixing. If these fixes are
accepted I'll incorporate them into the two commits.

Next I intend to work on writing documentation for this.

--
Daniel Gustafsson

[0]: https://curl.se/dev/deprecate.html
[1]: https://github.com/curl/curl/pull/12479

Attachments:

v22-0004-Review-comments.patch (application/octet-stream)
From 136930c0d55469a92f9d34a25a6d3fd5b195c6b2 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v22 4/4] Review comments

Fixes and tidy-ups following a review of v21, a few items
are (listed in no specific order):

* Implement a version check for libcurl in autoconf, the
  equivalent check for Meson is still a TODO.
* Address a few TODOs in the code
* libpq JSON support memory management fixups
---
 config/programs.m4                           |  21 ++
 configure                                    |  34 +++
 configure.ac                                 |   1 +
 src/backend/libpq/auth-oauth.c               |  28 +-
 src/common/jsonapi.c                         |  20 +-
 src/common/parse_manifest.c                  |   2 +
 src/include/common/oauth-common.h            |   2 +-
 src/interfaces/libpq/Makefile                |   4 +-
 src/interfaces/libpq/fe-auth-oauth-curl.c    | 275 +++++++++++--------
 src/interfaces/libpq/fe-auth-oauth.c         |   9 +-
 src/interfaces/libpq/fe-auth-oauth.h         |   2 +-
 src/test/modules/oauth_validator/validator.c |   2 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm |  10 +-
 src/tools/pgindent/typedefs.list             |   1 +
 14 files changed, 274 insertions(+), 137 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..157da7eec5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,28 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 8.4.0 or higher since earlier versions can be compiled
+# with a codepath containing exit(), and PostgreSQL does not allow any lib
+# linked to libpq which can call exit.
 
+# PGAC_CHECK_LIBCURL
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 8 || (LIBCURL_VERSION_MAJOR == 8 && LIBCURL_VERSION_MINOR < 4)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 8.4.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index d785ca846b..d2585e8ed0 100755
--- a/configure
+++ b/configure
@@ -13134,6 +13134,40 @@ else
   as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
 fi
 
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 8 || (LIBCURL_VERSION_MAJOR == 8 && LIBCURL_VERSION_MINOR < 4)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 8.4.0 or later is required." "$LINENO" 5
+fi
 fi
 
 # for contrib/sepgsql
diff --git a/configure.ac b/configure.ac
index 1ec0712ed5..ab0949b8c4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1445,6 +1445,7 @@ AC_SUBST(LDAP_LIBS_BE)
 
 if test "$with_oauth" = curl ; then
   AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
 fi
 
 # for contrib/sepgsql
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 024f304e4d..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -476,9 +476,9 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	initStringInfo(&buf);
 
 	/*
-	 * TODO: note that escaping here should be belt-and-suspenders, since
-	 * escapable characters aren't valid in either the issuer URI or the scope
-	 * list, but the HBA doesn't enforce that yet.
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
 	 */
 	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
 
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 4aeedc0bc5..758be913be 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -29,13 +29,15 @@
 #endif
 
 /*
- * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
- * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
  */
 #ifdef FRONTEND
 
 #define STRDUP(s) strdup(s)
 #define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define FREE(s) free(s)
 
 #define appendStrVal		appendPQExpBuffer
 #define appendBinaryStrVal  appendBinaryPQExpBuffer
@@ -48,6 +50,8 @@
 
 #define STRDUP(s) pstrdup(s)
 #define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define FREE(s) pfree(s)
 
 #define appendStrVal		appendStringInfo
 #define appendBinaryStrVal  appendBinaryStringInfo
@@ -181,6 +185,9 @@ IsValidJsonNumber(const char *str, int len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
@@ -188,7 +195,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -227,6 +236,9 @@ freeJsonLexContext(JsonLexContext *lex)
 {
 	static const JsonLexContext empty = {0};
 
+	if (!lex)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
 		destroyStrVal(lex->strval);
 
@@ -234,7 +246,7 @@ freeJsonLexContext(JsonLexContext *lex)
 		destroyStrVal(lex->errormsg);
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
 	else
 		*lex = empty;
 }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 40ec3b4f58..3d08e3177f 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -136,6 +136,8 @@ json_parse_manifest(JsonManifestParseContext *context, char *buffer,
 
 	/* Create a JSON lexing context. */
 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
+	if (!lex)
+		json_manifest_parse_failure(context, "out of memory");
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
index 5ff3488bfb..8fe5626778 100644
--- a/src/include/common/oauth-common.h
+++ b/src/include/common/oauth-common.h
@@ -3,7 +3,7 @@
  * oauth-common.h
  *		Declarations for helper functions used for OAuth/OIDC authentication
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/oauth-common.h
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 2618c293af..c532cef4d7 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -116,6 +116,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -123,7 +125,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 0504f96e4e..9dd8454cac 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -3,7 +3,7 @@
  * fe-auth-oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication.
  *
- * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -143,7 +145,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -151,8 +153,9 @@ typedef enum
 } OAuthStep;
 
 /*
- * The async_ctx holds onto state that needs to persist across multiple calls to
- * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
  */
 struct async_ctx
 {
@@ -162,9 +165,10 @@ struct async_ctx
 	int			timerfd;		/* a timerfd for signaling async timeouts */
 #endif
 	pgsocket	mux;			/* the multiplexer socket containing all
-								 * descriptors tracked by cURL, plus the
+								 * descriptors tracked by libcurl, plus the
 								 * timerfd */
-	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
 	CURL	   *curl;			/* the (single) easy handle for serial
 								 * requests */
 
@@ -183,7 +187,7 @@ struct async_ctx
 	 *				actx_error[_str] to manipulate this. This must be filled
 	 *				with something useful on an error.
 	 *
-	 * - curl_err:	an optional static error buffer used by cURL to put
+	 * - curl_err:	an optional static error buffer used by libcurl to put
 	 *				detailed information about failures. Unfortunately
 	 *				untranslatable.
 	 *
@@ -195,7 +199,7 @@ struct async_ctx
 	 */
 	const char *errctx;			/* not freed; must point to static allocation */
 	PQExpBufferData errbuf;
-	char		curl_err[CURL_ERROR_SIZE];
+	PQExpBufferData curl_err;
 
 	/*
 	 * These documents need to survive over multiple calls, and are therefore
@@ -205,6 +209,8 @@ struct async_ctx
 	struct device_authz authz;
 
 	bool		user_prompted;	/* have we already sent the authz prompt? */
+
+	int			running;
 };
 
 /*
@@ -238,7 +244,7 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 
 		if (err)
 			libpq_append_conn_error(conn,
-									"cURL easy handle removal failed: %s",
+									"libcurl easy handle removal failed: %s",
 									curl_multi_strerror(err));
 	}
 
@@ -258,7 +264,7 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 
 		if (err)
 			libpq_append_conn_error(conn,
-									"cURL multi handle cleanup failed: %s",
+									"libcurl multi handle cleanup failed: %s",
 									curl_multi_strerror(err));
 	}
 
@@ -292,8 +298,8 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 	appendPQExpBufferStr(&(ACTX)->errbuf, S)
 
 /*
- * Macros for getting and setting state for the connection's two cURL handles,
- * so you don't have to write out the error handling every time.
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
  */
 
 #define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
@@ -622,19 +628,28 @@ parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
 		actx_error(actx, "no content type was provided");
 		goto cleanup;
 	}
-	else if (strcasecmp(content_type, "application/json") != 0)
+
+	/*
+	 * We only check the media type and not the parameters, so we need to
+	 * perform a length-limited comparison rather than compare the whole string.
+	 */
+	if (pg_strncasecmp(content_type, "application/json", strlen("application/json")) != 0)
 	{
-		actx_error(actx, "unexpected content type \"%s\"", content_type);
-		goto cleanup;
+		actx_error(actx, "unexpected content type: \"%s\"", content_type);
+		return false;
 	}
 
 	if (strlen(resp->data) != resp->len)
 	{
 		actx_error(actx, "response contains embedded NULLs");
-		goto cleanup;
+		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	ctx.errbuf = &actx->errbuf;
 	ctx.fields = fields;
@@ -787,7 +802,11 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		authz->interval = parse_interval(authz->interval_str);
 	else
 	{
-		/* TODO: handle default interval of 5 seconds */
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
 	}
 
 	return true;
@@ -838,7 +857,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 }
 
 /*
- * cURL Multi Setup/Callbacks
+ * libcurl Multi Setup/Callbacks
  */
 
 /*
@@ -894,7 +913,7 @@ setup_multiplexer(struct async_ctx *actx)
 
 /*
  * Adds and removes sockets from the multiplexer set, as directed by the
- * cURL multi handle.
+ * libcurl multi handle.
  */
 static int
 register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
@@ -925,7 +944,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 			break;
 
 		default:
-			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
 			return -1;
 	}
 
@@ -997,7 +1016,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 			break;
 
 		default:
-			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
 			return -1;
 	}
 
@@ -1018,7 +1037,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 		/*
 		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
 		 * whether successful or not. Failed entries contain a non-zero errno
-		 * in the `data` field.
+		 * in the data field.
 		 */
 		Assert(ev_out[i].flags & EV_ERROR);
 
@@ -1043,9 +1062,8 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 
 /*
  * Adds or removes timeouts from the multiplexer set, as directed by the
- * cURL multi handle. Rather than continually adding and removing the timer,
- * we keep it in the set at all times and just disarm it when it's not
- * needed.
+ * libcurl multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not needed.
  */
 static int
 register_timer(CURLM *curlm, long timeout, void *ctx)
@@ -1061,9 +1079,9 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 	else if (timeout == 0)
 	{
 		/*
-		 * A zero timeout means cURL wants us to call back immediately. That's
-		 * not technically an option for timerfd, but we can make the timeout
-		 * ridiculously short.
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
 		 *
 		 * TODO: maybe just signal drive_request() to immediately call back in
 		 * this case?
@@ -1098,8 +1116,21 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 	return 0;
 }
 
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	struct async_ctx *actx = (struct async_ctx *) clientp;
+
+	/* For now we only store TEXT debug information; extending this is a TODO */
+	if (type == CURLINFO_TEXT)
+		appendBinaryPQExpBuffer(&actx->curl_err, data, size);
+
+	return 0;
+}
+
 /*
- * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
  * actx->curlm, is what drives the asynchronous engine and tells us what to do
  * next. The easy handle, actx->curl, encapsulates the state for a single
  * request/response. It's added to the multi handle as needed, during
@@ -1108,17 +1139,17 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 static bool
 setup_curl_handles(struct async_ctx *actx)
 {
-	curl_version_info_data	*curl_info;
+	curl_version_info_data *curl_info;
 
 	/*
 	 * Create our multi handle. This encapsulates the entire conversation with
-	 * cURL for this connection.
+	 * libcurl for this connection.
 	 */
 	actx->curlm = curl_multi_init();
 	if (!actx->curlm)
 	{
 		/* We don't get a lot of feedback on the failure reason. */
-		actx_error(actx, "failed to create cURL multi handle");
+		actx_error(actx, "failed to create libcurl multi handle");
 		return false;
 	}
 
@@ -1143,7 +1174,7 @@ setup_curl_handles(struct async_ctx *actx)
 	actx->curl = curl_easy_init();
 	if (!actx->curl)
 	{
-		actx_error(actx, "failed to create cURL handle");
+		actx_error(actx, "failed to create libcurl handle");
 		return false;
 	}
 
@@ -1160,9 +1191,14 @@ setup_curl_handles(struct async_ctx *actx)
 		/* No alternative resolver, TODO: warn about timeouts */
 	}
 
-	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	/*
+	 * Set a callback for retrieving error information from libcurl. The
+	 * callback only takes effect when CURLOPT_VERBOSE has been set, so make
+	 * sure that option is kept.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_DEBUGDATA, actx, return false);
+	CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
 	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
-	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
 
 	/*
 	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
@@ -1175,7 +1211,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1186,8 +1227,10 @@ setup_curl_handles(struct async_ctx *actx)
  */
 
 /*
- * Response callback from cURL; appends the response body into actx->work_data.
- * See start_request().
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of a single chunk
+ * passed to this callback is defined by CURL_MAX_WRITE_SIZE, which defaults
+ * to 16kB (and can only be changed by recompiling libcurl).
  */
 static size_t
 append_data(char *buf, size_t size, size_t nmemb, void *userdata)
@@ -1195,9 +1238,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error to abort the transfer in case we ran out of memory
+	 * while accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1214,7 +1267,6 @@ static bool
 start_request(struct async_ctx *actx)
 {
 	CURLMcode	err;
-	int			running;
 
 	resetPQExpBuffer(&actx->work_data);
 	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
@@ -1228,7 +1280,7 @@ start_request(struct async_ctx *actx)
 		return false;
 	}
 
-	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
 	if (err)
 	{
 		actx_error(actx, "asynchronous HTTP request failed: %s",
@@ -1237,19 +1289,11 @@ start_request(struct async_ctx *actx)
 	}
 
 	/*
-	 * Sanity check.
-	 *
-	 * TODO: even though this is nominally an asynchronous process, there are
-	 * apparently operations that can synchronously fail by this point, such
-	 * as connections to closed local ports. Maybe we need to let this case
-	 * fall through to drive_request instead, or else perform a
-	 * curl_multi_info_read immediately.
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point like connections
+	 * to closed local ports. Fall through and leave the sanity check for the
+	 * next state consuming actx.
 	 */
-	if (running != 1)
-	{
-		actx_error(actx, "failed to queue HTTP request");
-		return false;
-	}
 
 	return true;
 }
@@ -1262,12 +1306,18 @@ static PostgresPollingStatusType
 drive_request(struct async_ctx *actx)
 {
 	CURLMcode	err;
-	int			running;
 	CURLMsg    *msg;
 	int			msgs_left;
 	bool		done;
 
-	err = curl_multi_socket_all(actx->curlm, &running);
+	/* Sanity check the previous operation */
+	if (actx->running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	err = curl_multi_socket_all(actx->curlm, &actx->running);
 	if (err)
 	{
 		actx_error(actx, "asynchronous HTTP request failed: %s",
@@ -1275,7 +1325,7 @@ drive_request(struct async_ctx *actx)
 		return PGRES_POLLING_FAILED;
 	}
 
-	if (running)
+	if (actx->running)
 	{
 		/* We'll come back again. */
 		return PGRES_POLLING_READING;
@@ -1287,7 +1337,7 @@ drive_request(struct async_ctx *actx)
 		if (msg->msg != CURLMSG_DONE)
 		{
 			/*
-			 * Future cURL versions may define new message types; we don't
+			 * Future libcurl versions may define new message types; we don't
 			 * know how to handle them, so we'll ignore them.
 			 */
 			continue;
@@ -1304,7 +1354,7 @@ drive_request(struct async_ctx *actx)
 		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
 		if (err)
 		{
-			actx_error(actx, "cURL easy handle removal failed: %s",
+			actx_error(actx, "libcurl easy handle removal failed: %s",
 					   curl_multi_strerror(err));
 			return PGRES_POLLING_FAILED;
 		}
@@ -1489,7 +1539,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1631,37 +1686,40 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
+
 /*
- * The top-level, nonblocking entry point for the cURL implementation. This will
- * be called several times to pump the async engine.
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
  *
  * The architecture is based on PQconnectPoll(). The first half drives the
  * connection state forward as necessary, returning if we're not ready to
@@ -1682,7 +1740,7 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 	struct token tok = {0};
 
 	/*
-	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
 	 * context in which you call curl_global_init(), because it's going to try
 	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
 	 * probably need to consider both the TLS backend libcurl is compiled
@@ -1691,16 +1749,16 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 	 * Recent versions of libcurl have improved the thread-safety situation,
 	 * but you apparently can't check at compile time whether the
 	 * implementation is thread-safe, and there's a chicken-and-egg problem
-	 * where you can't check the thread safety until you've initialized cURL,
-	 * which you can't do before you've made sure it's thread-safe...
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
 	 *
 	 * We know we've already initialized Winsock by this point, so we should
-	 * be able to safely skip that bit. But we have to tell cURL to initialize
-	 * everything else, because other pieces of our client executable may
-	 * already be using cURL for their own purposes. If we initialize libcurl
-	 * first, with only a subset of its features, we could break those other
-	 * clients nondeterministically, and that would probably be a nightmare to
-	 * debug.
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
 	 */
 	curl_global_init(CURL_GLOBAL_ALL
 					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
@@ -1729,6 +1787,7 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 
 		initPQExpBuffer(&actx->work_data);
 		initPQExpBuffer(&actx->errbuf);
+		initPQExpBuffer(&actx->curl_err);
 
 		if (!setup_multiplexer(actx))
 			goto error_return;
@@ -1873,16 +1932,20 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				 * errors; anything else and we bail.
 				 */
 				err = &tok.err;
-				if (!err->error || (strcmp(err->error, "authorization_pending")
-									&& strcmp(err->error, "slow_down")))
+				if (!err->error)
+				{
+					actx_error(actx, "unknown error");
+					goto error_return;
+				}
+
+				if (strcmp(err->error, "authorization_pending") != 0 &&
+					strcmp(err->error, "slow_down") != 0)
 				{
-					/* TODO handle !err->error */
 					if (err->error_description)
 						appendPQExpBuffer(&actx->errbuf, "%s ",
 										  err->error_description);
 
 					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
-
 					goto error_return;
 				}
 
@@ -1892,7 +1955,14 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				 */
 				if (strcmp(err->error, "slow_down") == 0)
 				{
-					actx->authz.interval += 5;	/* TODO check for overflow? */
+					int			prev_interval = actx->authz.interval;
+
+					actx->authz.interval += 5;
+					if (actx->authz.interval < prev_interval)
+					{
+						actx_error(actx, "slow_down interval overflow");
+						goto error_return;
+					}
 				}
 
 				/*
@@ -1959,21 +2029,8 @@ error_return:
 	else
 		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
 
-	if (actx->curl_err[0])
-	{
-		size_t		len;
-
-		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
-
-		/* Sometimes libcurl adds a newline to the error buffer. :( */
-		len = conn->errorMessage.len;
-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
-		{
-			conn->errorMessage.data[len - 2] = ')';
-			conn->errorMessage.data[len - 1] = '\0';
-			conn->errorMessage.len--;
-		}
-	}
+	if (actx->curl_err.len > 0)
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err.data);
 
 	appendPQExpBufferStr(&conn->errorMessage, "\n");
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 66ee8ff076..61de9ac451 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -3,7 +3,7 @@
  * fe-auth-oauth.c
  *	   The front-end (client) implementation of OAuth/OIDC authentication.
  *
- * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
@@ -247,7 +247,12 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory\n"));
+		return false;
+	}
 
 	initPQExpBuffer(&ctx.errbuf);
 	sem.semstate = &ctx;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 8d4ea45aa8..6e5e946364 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -4,7 +4,7 @@
  *
  *	  Definitions for OAuth authentication implementations
  *
- * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
  *
  * src/interfaces/libpq/fe-auth-oauth.h
  *
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index 09a4bf61d2..7b4dc9c494 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -66,7 +66,7 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
 		elog(ERROR, "oauth_validator: private state cookie changed to %p",
-				state->private_data);
+			 state->private_data);
 
 	res = palloc(sizeof(ValidatorModuleResult));
 
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index d96733f531..9e18186f23 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -4,10 +4,10 @@ package PostgreSQL::Test::OAuthServer;
 
 use warnings;
 use strict;
-use threads;
 use Scalar::Util;
 use Socket;
 use IO::Select;
+use Test::More;
 
 local *server_socket;
 
@@ -34,9 +34,9 @@ sub run
 	my $port;
 
 	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
-		// die "failed to start OAuth server: $!";
+		or die "failed to start OAuth server: $!";
 
-	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	read($read_fh, $port, 7) or die "failed to read port number: $!";
 	chomp $port;
 	die "server did not advertise a valid port"
 		unless Scalar::Util::looks_like_number($port);
@@ -45,14 +45,14 @@ sub run
 	$self->{'port'} = $port;
 	$self->{'child'} = $read_fh;
 
-	print("# OAuth provider (PID $pid) is listening on port $port\n");
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
 }
 
 sub stop
 {
 	my $self = shift;
 
-	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
 
 	kill(15, $self->{'pid'});
 	$self->{'pid'} = undef;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0eaa079261..0e95e7a0e0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3002,6 +3002,7 @@ VacuumStmt
 ValidIOData
 ValidateIndexState
 ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
-- 
2.32.1 (Apple Git-133)

Attachment: v22-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 96a896b37a1b4716e1ef2e788df5da7073af0887 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v22 3/4] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, decide whether the token permits the
      requested role, in combination with the HBA option
      trust_validator_authz=1 (see below).

      The hard part is determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)
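
The responsibilities above can be sketched as a minimal validator module.
This is a rough, untested skeleton: the callback and struct names come
from this patch's libpq/oauth.h, while introspect_token() and
token_subject() are hypothetical, issuer-specific helpers that a real
module would have to supply.

```
#include "postgres.h"

#include "fmgr.h"
#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static ValidatorModuleResult *
sketch_validate(ValidatorModuleState *state, const char *token,
				const char *role)
{
	ValidatorModuleResult *res = palloc0(sizeof(ValidatorModuleResult));

	/* Keep the token confidential: never log or echo it. */
	res->authorized = introspect_token(token, role);	/* hypothetical */
	if (res->authorized)
		res->authn_id = token_subject(token);	/* hypothetical; for audits */

	return res;
}

static const OAuthValidatorCallbacks callbacks = {
	.validate_cb = sketch_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &callbacks;
}
```

Returning a non-NULL authn_id alongside authorized = false corresponds to
case (c): authentication succeeded, but authorization did not.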

The oauth method supports the following HBA options (though the first two
are required, since we have no way of choosing sensible defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
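
Putting those options together, a deployment might look roughly like the
following (the issuer, scope, and library name are illustrative only):

```
# pg_hba.conf: delegate authorization entirely to the validator module
host    all    all    samenet    oauth    issuer="https://accounts.google.com" scope="openid email" trust_validator_authz=1

# postgresql.conf: the module that implements token validation
oauth_validator_library = 'my_validator'
```

With trust_validator_authz=1 the usual user mapping is skipped; drop that
option (and optionally add map=...) to fall back to pg_ident-style checks.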

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/common/Makefile                           |   2 +-
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  21 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  36 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  74 ++
 .../modules/oauth_validator/t/oauth_server.py | 114 +++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   3 +
 25 files changed, 1223 insertions(+), 34 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..024f304e4d
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: note that escaping here should be belt-and-suspenders, since
+	 * escapable characters aren't valid in either the issuer URI or the scope
+	 * list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 4161959914..486a34e719 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index d506c3c0b7..e592bedf9f 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 1e71e7db4a..62d4f15d2b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4684,6 +4685,17 @@ struct config_string ConfigureNamesString[] =
 		check_standby_slot_names, assign_standby_slot_names, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth 2.0 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/common/Makefile b/src/common/Makefile
index c4f4448e2f..98b8dac0b4 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but this limit allows some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 7c11fb97f2..a08105e0c2 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..e93e01455a
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,21 @@
+export PYTHON
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3feba6f826
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,36 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..ea610fcd28
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,74 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test    oauth issuer="$issuer"           scope="openid postgres"
+local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+my $user = "test";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234", role="$user"/,
+					 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="test" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+				  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+					 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="testalt" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+$webserver->stop();
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..77e3883a81
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,114 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+
+    def do_GET(self):
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        self._check_issuer()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": uri,
+            "expires_in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..09a4bf61d2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+				state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index b08296605c..4b7c93f19d 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2299,6 +2299,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2342,7 +2347,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..d96733f531
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index f535fbbd5c..0eaa079261 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1673,6 +1673,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3000,6 +3001,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
@@ -3564,6 +3566,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.32.1 (Apple Git-133)

Attachment: v22-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 01f9bbae380abc2385f8425680d6815a4375fa5f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v22 2/4] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to a OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 configure                                 |  110 ++
 configure.ac                              |   28 +
 meson.build                               |   29 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   10 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 1982 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 +++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +-
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 23 files changed, 3200 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 36feeafbb2..d785ca846b 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -858,6 +859,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8485,6 +8488,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13037,6 +13086,56 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14062,6 +14161,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 57f734879e..1ec0712ed5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1443,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1638,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index 18b5be842e..99e29513e4 100644
--- a/meson.build
+++ b/meson.build
@@ -840,6 +840,33 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2845,6 +2872,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3511,6 +3539,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 249ecc5ffd..3248b9cc1c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 8b3f8c24e0..79b3647834 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 591e1ca3df..399578cebb 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -246,6 +246,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -714,6 +717,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index fe2af575c5..2618c293af 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 9fbd3d3407..9946764c4a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -202,3 +202,6 @@ PQcancelSocket            199
 PQcancelErrorMessage      200
 PQcancelReset             201
 PQcancelFinish            202
+PQsetAuthDataHook         203
+PQgetAuthDataHook         204
+PQdefaultAuthDataHook     205
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..0504f96e4e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1982 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* TODO: handle default interval of 5 seconds */
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
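The EEXIST fallback above (try EPOLL_CTL_ADD first, downgrade to EPOLL_CTL_MOD if the descriptor is already registered) can be exercised on its own. This is a Linux-only sketch; `upsert_socket` is a hypothetical helper for illustration, not part of the patch:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <unistd.h>

/*
 * Hypothetical helper mirroring register_socket()'s add-then-modify dance:
 * EPOLL_CTL_ADD fails with EEXIST when the descriptor is already in the
 * set, in which case we retry the same change as EPOLL_CTL_MOD.
 */
static int
upsert_socket(int epfd, int fd, uint32_t events)
{
	struct epoll_event ev = {0};

	ev.events = events;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) == 0)
		return 0;
	if (errno == EEXIST)
		return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
	return -1;
}
```

The fallback is needed because cURL's CURL_POLL_* callbacks don't distinguish a first registration from a re-registration with different events.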
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not
+ * needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data	*curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+			/* FALLTHROUGH */
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					if (err->error)
+						appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+					else
+						appendPQExpBufferStr(&actx->errbuf,
+											 "(no error code received)");
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
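The polling rules implemented above boil down to a small decision: `authorization_pending` and `slow_down` keep the loop alive, `slow_down` additionally adds five seconds to the interval for good, and anything else is fatal. A standalone sketch, with `handle_token_error` as a hypothetical name:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical sketch of the retry decision, not part of the patch: per
 * RFC 8628, "authorization_pending" and "slow_down" are the only
 * token-endpoint errors that keep the polling loop alive, and "slow_down"
 * permanently adds five seconds to the polling interval (Sec. 3.5).
 * Returns 1 to keep polling, 0 for a fatal error.
 */
static int
handle_token_error(const char *error, int *interval)
{
	if (error == NULL)
		return 0;				/* malformed error response */
	if (strcmp(error, "authorization_pending") == 0)
		return 1;
	if (strcmp(error, "slow_down") == 0)
	{
		*interval += 5;
		return 1;
	}
	return 0;					/* access_denied, expired_token, ... */
}
```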
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..66ee8ff076
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
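For reference, the OAUTHBEARER client initial response assembled above has a fixed shape per RFC 7628: a GS2 header (`n,,`, since the mechanism does not support channel binding), a 0x01 delimiter, the `auth` key/value pair carrying the Bearer token, and a terminating double delimiter. A minimal sketch, with `build_initial_response` as a hypothetical name:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define KVSEP "\x01"			/* RFC 7628 key/value delimiter */

/*
 * Hypothetical sketch (not the patch's implementation) of the OAUTHBEARER
 * client initial response: GS2 header "n,,", then "auth=<token>" framed
 * by 0x01 delimiters, ending with a double delimiter. Returns the message
 * length, or -1 if it did not fit in the output buffer.
 */
static int
build_initial_response(char *out, size_t outlen, const char *token)
{
	int			n = snprintf(out, outlen, "n,," KVSEP "auth=%s" KVSEP KVSEP,
							 token);

	return (n >= 0 && (size_t) n < outlen) ? n : -1;
}
```

Note the split string literals: writing `"\x01auth"` directly would be parsed as the single hex escape `\x1a` followed by `uth`, which is why the patch defines `kvsep` as a separate literal too.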
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn, "server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn, "failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn, "server sent error response without a status");
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
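For context, the error challenge parsed above is the JSON body defined by RFC 7628 sec. 3.2.2. An illustrative example (the field values are made up; of the three fields this code extracts, only `status` is required for the retry logic):

```json
{
  "status": "invalid_token",
  "scope": "openid email",
  "openid-configuration": "https://issuer.example.org/.well-known/openid-configuration"
}
```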
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		goto cleanup;
+	}
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+	if (!conn->oauth_discovery_uri)
+		libpq_append_conn_error(conn, "out of memory");
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
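The derivation above simply appends the OIDC well-known suffix to the issuer URL. A standalone sketch of the same construction (plain C; the issuer URL is illustrative). Note that, as in the PoC, the suffix is appended verbatim, so a trailing slash on `oauth_issuer` would produce a double slash:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define WELLKNOWN_SUFFIX "/.well-known/openid-configuration"

/*
 * Append the OIDC discovery suffix to an issuer URL.  Returns 1 on
 * success, 0 if the result would not fit in buf.
 */
static int
derive_uri(char *buf, size_t buflen, const char *issuer)
{
	int			n = snprintf(buf, buflen, "%s" WELLKNOWN_SUFFIX, issuer);

	return (n > 0 && (size_t) n < buflen);
}
```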
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * The user implementation may have already given us the
+					 * token (e.g. if an unexpired copy was already cached),
+					 * in which case we can just fall through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn, "server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 81ec08485d..9cd5c8cfb1 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -955,12 +997,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1118,7 +1166,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1135,7 +1183,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1451,3 +1500,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 01e49c6975..714594324e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -359,6 +359,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -616,6 +633,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2595,6 +2613,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3603,6 +3622,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3758,6 +3778,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 #ifdef ENABLE_GSS
 
 					/*
@@ -3839,7 +3869,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3872,6 +3912,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4377,6 +4452,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4493,6 +4569,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -6975,6 +7056,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f2fc78a481..663b1c1acf 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1039,10 +1039,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1059,7 +1062,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 09b485bd2b..454ec8236b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -80,8 +82,10 @@ typedef enum
 	CONNECTION_CHECK_TARGET,	/* Internal state: checking target server
 								 * properties. */
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
-	CONNECTION_ALLOCATED		/* Waiting for connection attempt to be
+	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -163,6 +167,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -684,10 +695,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 9c05f11a6e..5f3a0b00db 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -351,6 +351,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -413,6 +415,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -481,6 +492,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index be6fadaea2..0d4b7ac17d 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index b0f4178b3d..f803c1200b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -231,6 +231,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e2a0525dd4..f535fbbd5c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -354,6 +355,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1669,6 +1672,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1734,6 +1738,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1894,11 +1899,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3364,6 +3372,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.32.1 (Apple Git-133)

Attachment: v22-0001-common-jsonapi-support-libpq-as-a-client.patch
From 95ed868179a09ffe374316d0356a128adcebfe71 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v22 1/4] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed rather than exit()ing.
---
 src/bin/pg_combinebackup/Makefile    |   4 +-
 src/bin/pg_combinebackup/meson.build |   2 +-
 src/bin/pg_verifybackup/Makefile     |   2 +-
 src/common/Makefile                  |   2 +-
 src/common/jsonapi.c                 | 173 ++++++++++++++++++++-------
 src/common/meson.build               |   8 +-
 src/include/common/jsonapi.h         |  19 ++-
 7 files changed, 158 insertions(+), 52 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,7 +32,7 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 1d4b9c218f..cab677b574 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,7 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/common/Makefile b/src/common/Makefile
index 3d83299432..c4f4448e2f 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 98d6e66a21..4aeedc0bc5 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,43 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+
+#define appendStrVal		appendPQExpBuffer
+#define appendBinaryStrVal  appendBinaryPQExpBuffer
+#define appendStrValChar	appendPQExpBufferChar
+#define createStrVal		createPQExpBuffer
+#define resetStrVal			resetPQExpBuffer
+#define destroyStrVal		destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+
+#define appendStrVal		appendStringInfo
+#define appendBinaryStrVal  appendBinaryStringInfo
+#define appendStrValChar	appendStringInfoChar
+#define createStrVal		makeStringInfo
+#define resetStrVal			resetStringInfo
+#define destroyStrVal		destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -168,9 +201,16 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+	lex->errormsg = NULL;
 
 	return lex;
 }
@@ -185,14 +225,18 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		destroyStrVal(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		destroyStrVal(lex->errormsg);
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
 		pfree(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -258,7 +302,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -320,14 +364,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -361,8 +412,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -418,6 +473,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -766,8 +826,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -804,7 +871,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -861,19 +928,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -883,22 +950,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -933,7 +1000,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -957,8 +1024,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -974,6 +1041,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -1158,19 +1230,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		resetStrVal(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = createStrVal();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	appendStrVal((lex)->errormsg, _(format), \
+				 (int) ((lex)->token_terminator - (lex)->token_start), \
+				 (lex)->token_start);
 
 	switch (error)
 	{
@@ -1181,9 +1259,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -1214,6 +1292,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -1245,15 +1326,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 _("unexpected json parse error type: %d"),
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 _("unexpected json parse error type: %d"),
+					 (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index de68e408fa..379b228d86 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -125,13 +125,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -140,6 +145,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -157,7 +163,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -170,7 +175,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 86a0fc2d00..75d444c17a 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -48,6 +46,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -57,6 +56,17 @@ typedef enum JsonParseErrorType
 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
 } JsonParseErrorType;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
 
 /*
  * All the fields in this structure should be treated as read-only.
@@ -88,8 +98,9 @@ typedef struct JsonLexContext
 	bits32		flags;
 	int			line_number;	/* line number, starting from 1 */
 	char	   *line_start;		/* where that line starts within input */
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
-- 
2.32.1 (Apple Git-133)

#102 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#101)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 28, 2024 at 3:34 PM Daniel Gustafsson <daniel@yesql.se> wrote:

> In jsonapi.c, makeJsonLexContextCstringLen initializes a JsonLexContext with
> palloc0 which would need to be ported over to use ALLOC for frontend code.

Seems reasonable (but see below, too).

> On that note, the error handling in parse_oauth_json() for content-type checks
> attempts to free the JsonLexContext even before it has been created. Here we
> can just return false.

Agreed. They're zero-initialized, so freeJsonLexContext() is safe
IIUC, but it's clearer not to call the free function at all.

But for these additions:

> -   makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
> +   if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
> +   {
> +       actx_error(actx, "out of memory");
> +       return false;
> +   }

...since we're using the stack-based API as opposed to the heap-based
API, they shouldn't be possible to hit. Any failures in createStrVal()
are deferred to parse time on purpose.

> -  echo 'libpq must not be calling any function which invokes exit'; exit 1; \
> +  echo 'libpq must not be calling any function which invokes exit'; \
> The offending codepath in libcurl was in the NTLM_WB module, a very old and
> obscure form of NTLM support which was replaced (yet remained in the tree) a
> long time ago by a full NTLM implementation.  Based on the findings in this
> thread it was deprecated with a removal date set to April 2024 [0].  A bug in
> the 8.4.0 release however disconnected NTLM_WB from the build and given the
> lack of complaints it was decided to leave as is, so we can base our libcurl
> requirements on 8.4.0 while keeping the exit() check intact.

Of the Cirrus machines, it looks like only FreeBSD has a new enough
libcurl for that. Ubuntu won't until 24.04, Debian Bookworm doesn't
have it unless you're running backports, RHEL 9 is still on 7.x... I
think requiring libcurl 8 is effectively saying no one will be able to
use this for a long time. Is there an alternative?

> + else if (strcasecmp(content_type, "application/json") != 0)
> This needs to handle parameters as well since it will now fail if the charset
> parameter is appended (which undoubtedly will be pretty common). The easiest
> way is probably to just verify the mediatype and skip the parameters since we
> know it can only be charset?

Good catch. application/json no longer defines charsets officially [1].
strncasecmp needs to handle a spurious prefix, too; I have that on my
TODO list.

> +  /* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
> +  CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
> +  CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
> CURLOPT_ERRORBUFFER is the old and finicky way of extracting error messages, we
> should absolutely move to using CURLOPT_DEBUGFUNCTION instead.

This new way doesn't do the same thing. Here's a sample error:

connection to server at "127.0.0.1", port 56619 failed: failed to
fetch OpenID discovery document: Weird server reply ( Trying
127.0.0.1:36647...
Connected to localhost (127.0.0.1) port 36647 (#0)
Mark bundle as not supporting multiuse
HTTP 1.0, assume close after body
Invalid Content-Length: value
Closing connection 0
)

IMO that's too much noise. Prior to the change, the same error would have been

connection to server at "127.0.0.1", port 56619 failed: failed to
fetch OpenID discovery document: Weird server reply (Invalid
Content-Length: value)

The error buffer is finicky for sure, but it's also a generic one-line
explanation of what went wrong... Is there an alternative API for that
I'm missing?

> + /* && response_code != 401 TODO */ )
> Why is this marked with a TODO, do you remember?

Yeah -- I have a feeling that 401s coming back are going to need more
helpful hints to the user, since it implies that libpq itself hasn't
authenticated correctly as opposed to some user-related auth failure.
I was hoping to find some sample behaviors in the wild and record
those into the suite.

> + print("# OAuth provider (PID $pid) is listening on port $port\n");
> Code running under Test::More needs to use diag() for printing non-test output
> like this.

Ah, thanks.

> +#if LIBCURL_VERSION_MAJOR <= 8 && LIBCURL_VERSION_MINOR < 4

I don't think this catches versions like 7.76, does it? Maybe
`LIBCURL_VERSION_MAJOR < 8 || (LIBCURL_VERSION_MAJOR == 8 &&
LIBCURL_VERSION_MINOR < 4)`, or else `LIBCURL_VERSION_NUM < 0x080400`?

> my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
> -       // die "failed to start OAuth server: $!";
> +       or die "failed to start OAuth server: $!";
> -   read($read_fh, $port, 7) // die "failed to read port number: $!";
> +   read($read_fh, $port, 7) or die "failed to read port number: $!";

The first hunk here looks good (thanks for the catch!) but I think the
second is not correct behavior. $! doesn't get set unless undef is
returned, if I'm reading the docs correctly. Yay Perl.

> +   /* Sanity check the previous operation */
> +   if (actx->running != 1)
> +   {
> +       actx_error(actx, "failed to queue HTTP request");
> +       return false;
> +   }

`running` can be set to zero on success, too. I'm having trouble
forcing that code path in a test so far, but we're going to have to do
something special in that case.

> Another issue I have is the sheer size and the fact that so much code is
> replaced by subsequent commits, so I took the liberty to squash some of this
> down into something less daunting. The attached v22 retains the 0001 and then
> condenses the rest into two commits for frontend and backend parts.

Looks good.

> I did drop
> the Python pytest patch since I feel that it's unlikely to go in from this
> thread (adding pytest seems worthy of its own thread and discussion), and the
> weight of it makes this seem scarier than it is.

Until its coverage gets ported over, can we keep it as a `DO NOT
MERGE` patch? Otherwise there's not much to run in Cirrus.

> The final patch contains fixes for all of the above review comments as well as
> some refactoring, smaller clean-ups and TODO fixing. If these fixes are
> accepted I'll incorporate them into the two commits.
>
> Next I intend to work on writing documentation for this.

Awesome, thank you! I will start adding coverage to the new code paths.

--Jacob

[1]: https://datatracker.ietf.org/doc/html/rfc7159#section-11

#103 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#102)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 1, 2024 at 3:07 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

> Awesome, thank you! I will start adding coverage to the new code paths.

This patchset rotted more than I thought it would with the new
incremental JSON, and I got stuck in rebase hell. Rather than chip
away at that while the cfbot is red, here's a rebase of v22 to get the
CI up again, and I will port what I've been working on over that. (So,
for prior reviewers: recent upthread and offline feedback is not yet
incorporated, sorry, come back later.)

The big change in v23 is that I've removed fe_memutils.c from
libpgcommon_shlib completely, to try to reduce my own hair-pulling
when it comes to keeping exit() out of libpq. (It snuck in several
ways with incremental JSON.)

As far as I can tell, removing fe_memutils causes only one problem,
which is that Informix ECPG is relying on pnstrdup(). And I think that
may be a bug in itself? There's code in deccvasc() right after the
pnstrdup() call that takes care of a failed allocation, but the
frontend pnstrdup() is going to call exit() on failure. So my 0001
patch reverts that change, which was made in 0b9466fce. If that can go
in, and I'm not missing something that makes that call okay, maybe
0002 can be peeled off as well.

Thanks,
--Jacob

Attachments:

since-v22.diff.txt
-:  ---------- > 1:  aa553b7700 Revert ECPG's use of pnstrdup()
-:  ---------- > 2:  d6cae9157e Remove fe_memutils from libpgcommon_shlib
1:  9efb0d64e5 ! 3:  5543539169 common/jsonapi: support libpq as a client
    @@ Commit message
         us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
         as needed rather than exit()ing.
     
    +    Co-authored-by: Michael Paquier <michael@paquier.xyz>
    +    Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    +
      ## src/bin/pg_combinebackup/Makefile ##
     @@ src/bin/pg_combinebackup/Makefile: include $(top_builddir)/src/Makefile.global
      
    @@ src/common/jsonapi.c
      #endif
      
     +/*
    -+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
    -+ * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
    ++ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
    ++ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
     + */
     +#ifdef FRONTEND
     +
     +#define STRDUP(s) strdup(s)
     +#define ALLOC(size) malloc(size)
    ++#define ALLOC0(size) calloc(1, size)
    ++#define REALLOC realloc
    ++#define FREE(s) free(s)
     +
    -+#define appendStrVal		appendPQExpBuffer
    -+#define appendBinaryStrVal  appendBinaryPQExpBuffer
    -+#define appendStrValChar	appendPQExpBufferChar
    -+#define createStrVal		createPQExpBuffer
    -+#define resetStrVal			resetPQExpBuffer
    -+#define destroyStrVal		destroyPQExpBuffer
    ++#define appendStrVal			appendPQExpBuffer
    ++#define appendBinaryStrVal		appendBinaryPQExpBuffer
    ++#define appendStrValChar		appendPQExpBufferChar
    ++/* XXX should we add a macro version to PQExpBuffer? */
    ++#define appendStrValCharMacro	appendPQExpBufferChar
    ++#define createStrVal			createPQExpBuffer
    ++#define initStrVal				initPQExpBuffer
    ++#define resetStrVal				resetPQExpBuffer
    ++#define termStrVal				termPQExpBuffer
    ++#define destroyStrVal			destroyPQExpBuffer
     +
     +#else							/* !FRONTEND */
     +
     +#define STRDUP(s) pstrdup(s)
     +#define ALLOC(size) palloc(size)
    ++#define ALLOC0(size) palloc0(size)
    ++#define REALLOC repalloc
     +
    -+#define appendStrVal		appendStringInfo
    -+#define appendBinaryStrVal  appendBinaryStringInfo
    -+#define appendStrValChar	appendStringInfoChar
    -+#define createStrVal		makeStringInfo
    -+#define resetStrVal			resetStringInfo
    -+#define destroyStrVal		destroyStringInfo
    ++/*
    ++ * Backend pfree() doesn't handle NULL pointers like the frontend's does; smooth
    ++ * that over to reduce mental gymnastics. Avoid multiple evaluation of the macro
    ++ * argument to avoid future hair-pulling.
    ++ */
    ++#define FREE(s) do {	\
    ++	void *__v = (s);	\
    ++	if (__v)			\
    ++		pfree(__v);		\
    ++} while (0)
    ++
    ++#define appendStrVal			appendStringInfo
    ++#define appendBinaryStrVal		appendBinaryStringInfo
    ++#define appendStrValChar		appendStringInfoChar
    ++#define appendStrValCharMacro	appendStringInfoCharMacro
    ++#define createStrVal			makeStringInfo
    ++#define initStrVal				initStringInfo
    ++#define resetStrVal				resetStringInfo
    ++#define termStrVal(s)			pfree((s)->data)
    ++#define destroyStrVal			destroyStringInfo
     +
     +#endif
     +
      /*
       * The context of the parser is maintained by the recursive descent
       * mechanism, but is passed explicitly to the error reporting routine
    -@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
    +@@ src/common/jsonapi.c: struct JsonIncrementalState
    + {
    + 	bool		is_last_chunk;
    + 	bool		partial_completed;
    +-	StringInfoData partial_token;
    ++	StrValType	partial_token;
    + };
    + 
    + /*
    +@@ src/common/jsonapi.c: static JsonParseErrorType parse_object(JsonLexContext *lex, JsonSemAction *sem);
    + static JsonParseErrorType parse_array_element(JsonLexContext *lex, JsonSemAction *sem);
    + static JsonParseErrorType parse_array(JsonLexContext *lex, JsonSemAction *sem);
    + static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
    ++static bool allocate_incremental_state(JsonLexContext *lex);
    + 
    + /* the null action object used for pure validation */
    + JsonSemAction nullSemAction =
    +@@ src/common/jsonapi.c: IsValidJsonNumber(const char *str, size_t len)
    + {
    + 	bool		numeric_error;
    + 	size_t		total_len;
    +-	JsonLexContext dummy_lex;
    ++	JsonLexContext dummy_lex = {0};
    + 
    + 	if (len <= 0)
    + 		return false;
    + 
    +-	dummy_lex.incremental = false;
    +-	dummy_lex.inc_state = NULL;
    +-	dummy_lex.pstack = NULL;
    +-
    + 	/*
    + 	 * json_lex_number expects a leading  '-' to have been eaten already.
    + 	 *
    +@@ src/common/jsonapi.c: IsValidJsonNumber(const char *str, size_t len)
    +  * responsible for freeing the returned struct, either by calling
    +  * freeJsonLexContext() or (in backend environment) via memory context
    +  * cleanup.
    ++ *
    ++ * In frontend code this can return NULL on OOM, so callers must inspect the
    ++ * returned pointer.
    +  */
    + JsonLexContext *
    + makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
    +@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
    + {
    + 	if (lex == NULL)
    + 	{
    +-		lex = palloc0(sizeof(JsonLexContext));
    ++		lex = ALLOC0(sizeof(JsonLexContext));
    ++		if (!lex)
    ++			return NULL;
    + 		lex->flags |= JSONLEX_FREE_STRUCT;
    + 	}
    + 	else
    +@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
      	lex->input_encoding = encoding;
      	if (need_escapes)
      	{
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *js
      		lex->flags |= JSONLEX_FREE_STRVAL;
     +		lex->parse_strval = true;
      	}
    -+	lex->errormsg = NULL;
      
      	return lex;
      }
    -@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
    + 
    ++/*
    ++ * Allocates the internal bookkeeping structures for incremental parsing. This
    ++ * can only fail in-band with FRONTEND code.
    ++ */
    ++#define JS_STACK_CHUNK_SIZE 64
    ++#define JS_MAX_PROD_LEN 10		/* more than we need */
    ++#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
    ++								 * number */
    ++static bool
    ++allocate_incremental_state(JsonLexContext *lex)
    ++{
    ++	void	   *pstack,
    ++			   *prediction,
    ++			   *fnames,
    ++			   *fnull;
    ++
    ++	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
    ++	pstack = ALLOC(sizeof(JsonParserStack));
    ++	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
    ++	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
    ++	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
    ++
    ++#ifdef FRONTEND
    ++	if (!lex->inc_state
    ++		|| !pstack
    ++		|| !prediction
    ++		|| !fnames
    ++		|| !fnull)
    ++	{
    ++		FREE(lex->inc_state);
    ++		FREE(pstack);
    ++		FREE(prediction);
    ++		FREE(fnames);
    ++		FREE(fnull);
    ++
    ++		return false;
    ++	}
    ++#endif
    ++
    ++	initStrVal(&(lex->inc_state->partial_token));
    ++	lex->pstack = pstack;
    ++	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
    ++	lex->pstack->prediction = prediction;
    ++	lex->pstack->pred_index = 0;
    ++	lex->pstack->fnames = fnames;
    ++	lex->pstack->fnull = fnull;
    ++
    ++	lex->incremental = true;
    ++	return true;
    ++}
    ++
    + 
    + /*
    +  * makeJsonLexContextIncremental
    +@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
    +  * we don't need the input, that will be handed in bit by bit to the
    +  * parse routine. We also need an accumulator for partial tokens in case
    +  * the boundary between chunks happens to fall in the middle of a token.
    ++ *
    ++ * In frontend code this can return NULL on OOM, so callers must inspect the
    ++ * returned pointer.
    +  */
    +-#define JS_STACK_CHUNK_SIZE 64
    +-#define JS_MAX_PROD_LEN 10		/* more than we need */
    +-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
    +-								 * number */
    +-
    + JsonLexContext *
    + makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
    + 							  bool need_escapes)
    + {
    + 	if (lex == NULL)
    + 	{
    +-		lex = palloc0(sizeof(JsonLexContext));
    ++		lex = ALLOC0(sizeof(JsonLexContext));
    ++		if (!lex)
    ++			return NULL;
    ++
    + 		lex->flags |= JSONLEX_FREE_STRUCT;
    + 	}
    + 	else
    +@@ src/common/jsonapi.c: makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
    + 
    + 	lex->line_number = 1;
    + 	lex->input_encoding = encoding;
    +-	lex->incremental = true;
    +-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
    +-	initStringInfo(&(lex->inc_state->partial_token));
    +-	lex->pstack = palloc(sizeof(JsonParserStack));
    +-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
    +-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
    +-	lex->pstack->pred_index = 0;
    +-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
    +-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
    ++
    ++	if (!allocate_incremental_state(lex))
    ++	{
    ++		if (lex->flags & JSONLEX_FREE_STRUCT)
    ++			FREE(lex);
    ++		return NULL;
    ++	}
    ++
    + 	if (need_escapes)
    + 	{
    +-		lex->strval = makeStringInfo();
    ++		/*
    ++		 * This call can fail in FRONTEND code. We defer error handling to
    ++		 * time of use (json_lex_string()) since we might not need to parse
    ++		 * any strings anyway.
    ++		 */
    ++		lex->strval = createStrVal();
    + 		lex->flags |= JSONLEX_FREE_STRVAL;
    ++		lex->parse_strval = true;
    + 	}
    ++
    + 	return lex;
    + }
    + 
    +-static inline void
    ++static inline bool
    + inc_lex_level(JsonLexContext *lex)
    + {
    +-	lex->lex_level += 1;
    +-
    +-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
    ++	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
    + 	{
    +-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
    +-		lex->pstack->prediction =
    +-			repalloc(lex->pstack->prediction,
    +-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
    +-		if (lex->pstack->fnames)
    +-			lex->pstack->fnames =
    +-				repalloc(lex->pstack->fnames,
    +-						 lex->pstack->stack_size * sizeof(char *));
    +-		if (lex->pstack->fnull)
    +-			lex->pstack->fnull =
    +-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
    ++		size_t		new_stack_size;
    ++		char	   *new_prediction;
    ++		char	  **new_fnames;
    ++		bool	   *new_fnull;
    ++
    ++		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
    ++
    ++		new_prediction = REALLOC(lex->pstack->prediction,
    ++								 new_stack_size * JS_MAX_PROD_LEN);
    ++		new_fnames = REALLOC(lex->pstack->fnames,
    ++							 new_stack_size * sizeof(char *));
    ++		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
    ++
    ++#ifdef FRONTEND
    ++		if (!new_prediction || !new_fnames || !new_fnull)
    ++			return false;
    ++#endif
    ++
    ++		lex->pstack->stack_size = new_stack_size;
    ++		lex->pstack->prediction = new_prediction;
    ++		lex->pstack->fnames = new_fnames;
    ++		lex->pstack->fnull = new_fnull;
    + 	}
    ++
    ++	lex->lex_level += 1;
    ++	return true;
    + }
    + 
    + static inline void
    +@@ src/common/jsonapi.c: get_fnull(JsonLexContext *lex)
      void
      freeJsonLexContext(JsonLexContext *lex)
      {
     +	static const JsonLexContext empty = {0};
    ++
    ++	if (!lex)
    ++		return;
     +
      	if (lex->flags & JSONLEX_FREE_STRVAL)
     -		destroyStringInfo(lex->strval);
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *js
     -		destroyStringInfo(lex->errormsg);
     +		destroyStrVal(lex->errormsg);
      
    + 	if (lex->incremental)
    + 	{
    +-		pfree(lex->inc_state->partial_token.data);
    +-		pfree(lex->inc_state);
    +-		pfree(lex->pstack->prediction);
    +-		pfree(lex->pstack->fnames);
    +-		pfree(lex->pstack->fnull);
    +-		pfree(lex->pstack);
    ++		termStrVal(&lex->inc_state->partial_token);
    ++		FREE(lex->inc_state);
    ++		FREE(lex->pstack->prediction);
    ++		FREE(lex->pstack->fnames);
    ++		FREE(lex->pstack->fnull);
    ++		FREE(lex->pstack);
    + 	}
    + 
      	if (lex->flags & JSONLEX_FREE_STRUCT)
    - 		pfree(lex);
    +-		pfree(lex);
    ++		FREE(lex);
     +	else
     +		*lex = empty;
      }
      
      /*
    +@@ src/common/jsonapi.c: JsonParseErrorType
    + pg_parse_json(JsonLexContext *lex, JsonSemAction *sem)
    + {
    + #ifdef FORCE_JSON_PSTACK
    +-
    +-	lex->incremental = true;
    +-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
    +-
    + 	/*
    + 	 * We don't need partial token processing, there is only one chunk. But we
    + 	 * still need to init the partial token string so that freeJsonLexContext
    +-	 * works.
    ++	 * works, so perform the full incremental initialization.
    + 	 */
    +-	initStringInfo(&(lex->inc_state->partial_token));
    +-	lex->pstack = palloc(sizeof(JsonParserStack));
    +-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
    +-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
    +-	lex->pstack->pred_index = 0;
    +-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
    +-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
    ++	if (!allocate_incremental_state(lex))
    ++		return JSON_OUT_OF_MEMORY;
    + 
    + 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
    + 
     @@ src/common/jsonapi.c: json_count_array_elements(JsonLexContext *lex, int *elements)
      	 * etc, so doing this with a copy makes that safe.
      	 */
    @@ src/common/jsonapi.c: json_count_array_elements(JsonLexContext *lex, int *elemen
      	copylex.lex_level++;
      
      	count = 0;
    +@@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
    + 							if (result != JSON_SUCCESS)
    + 								return result;
    + 						}
    +-						inc_lex_level(lex);
    ++
    ++						if (!inc_lex_level(lex))
    ++							return JSON_OUT_OF_MEMORY;
    + 					}
    + 					break;
    + 				case JSON_SEM_OEND:
    +@@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
    + 							if (result != JSON_SUCCESS)
    + 								return result;
    + 						}
    +-						inc_lex_level(lex);
    ++
    ++						if (!inc_lex_level(lex))
    ++							return JSON_OUT_OF_MEMORY;
    + 					}
    + 					break;
    + 				case JSON_SEM_AEND:
    +@@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
    + 						json_ofield_action ostart = sem->object_field_start;
    + 						json_ofield_action oend = sem->object_field_end;
    + 
    +-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
    ++						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
    + 						{
    +-							fname = pstrdup(lex->strval->data);
    ++							fname = STRDUP(lex->strval->data);
    ++							if (fname == NULL)
    ++								return JSON_OUT_OF_MEMORY;
    + 						}
    + 						set_fname(lex, fname);
    + 					}
    +@@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
    + 							 */
    + 							if (tok == JSON_TOKEN_STRING)
    + 							{
    +-								if (lex->strval != NULL)
    +-									pstack->scalar_val = pstrdup(lex->strval->data);
    ++								if (lex->parse_strval)
    ++								{
    ++									pstack->scalar_val = STRDUP(lex->strval->data);
    ++									if (pstack->scalar_val == NULL)
    ++										return JSON_OUT_OF_MEMORY;
    ++								}
    + 							}
    + 							else
    + 							{
    + 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
    + 
    +-								pstack->scalar_val = palloc(tlen + 1);
    ++								pstack->scalar_val = ALLOC(tlen + 1);
    ++								if (pstack->scalar_val == NULL)
    ++									return JSON_OUT_OF_MEMORY;
    ++
    + 								memcpy(pstack->scalar_val, lex->token_start, tlen);
    + 								pstack->scalar_val[tlen] = '\0';
    + 							}
     @@ src/common/jsonapi.c: parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
      	/* extract the de-escaped string value, or the raw lexeme */
      	if (lex_peek(lex) == JSON_TOKEN_STRING)
    @@ src/common/jsonapi.c: parse_object(JsonLexContext *lex, JsonSemAction *sem)
      	check_stack_depth();
      #endif
      
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 	const char *const end = lex->input + lex->input_length;
    + 	JsonParseErrorType result;
    + 
    +-	if (lex->incremental && lex->inc_state->partial_completed)
    ++	if (lex->incremental)
    + 	{
    +-		/*
    +-		 * We just lexed a completed partial token on the last call, so reset
    +-		 * everything
    +-		 */
    +-		resetStringInfo(&(lex->inc_state->partial_token));
    +-		lex->token_terminator = lex->input;
    +-		lex->inc_state->partial_completed = false;
    ++		if (lex->inc_state->partial_completed)
    ++		{
    ++			/*
    ++			 * We just lexed a completed partial token on the last call, so
    ++			 * reset everything
    ++			 */
    ++			resetStrVal(&(lex->inc_state->partial_token));
    ++			lex->token_terminator = lex->input;
    ++			lex->inc_state->partial_completed = false;
    ++		}
    ++
    ++#ifdef FRONTEND
    ++		/* Make sure our partial token buffer is valid before using it below. */
    ++		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
    ++			return JSON_OUT_OF_MEMORY;
    ++#endif
    + 	}
    + 
    + 	s = lex->token_terminator;
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 		 * We have a partial token. Extend it and if completed lex it by a
    + 		 * recursive call
    + 		 */
    +-		StringInfo	ptok = &(lex->inc_state->partial_token);
    ++		StrValType *ptok = &(lex->inc_state->partial_token);
    + 		size_t		added = 0;
    + 		bool		tok_done = false;
    + 		JsonLexContext dummy_lex;
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 			{
    + 				char		c = lex->input[i];
    + 
    +-				appendStringInfoCharMacro(ptok, c);
    ++				appendStrValCharMacro(ptok, c);
    + 				added++;
    + 				if (c == '"' && escapes % 2 == 0)
    + 				{
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 						case '8':
    + 						case '9':
    + 							{
    +-								appendStringInfoCharMacro(ptok, cc);
    ++								appendStrValCharMacro(ptok, cc);
    + 								added++;
    + 							}
    + 							break;
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 
    + 				if (JSON_ALPHANUMERIC_CHAR(cc))
    + 				{
    +-					appendStringInfoCharMacro(ptok, cc);
    ++					appendStrValCharMacro(ptok, cc);
    + 					added++;
    + 				}
    + 				else
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 		dummy_lex.input_length = ptok->len;
    + 		dummy_lex.input_encoding = lex->input_encoding;
    + 		dummy_lex.incremental = false;
    ++		dummy_lex.parse_strval = lex->parse_strval;
    + 		dummy_lex.strval = lex->strval;
    + 
    + 		partial_result = json_lex(&dummy_lex);
    +@@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
    + 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
    + 						p == lex->input + lex->input_length)
    + 					{
    +-						appendBinaryStringInfo(
    +-											   &(lex->inc_state->partial_token), s, end - s);
    ++						appendBinaryStrVal(
    ++										   &(lex->inc_state->partial_token), s, end - s);
    + 						return JSON_INCOMPLETE;
    + 					}
    + 
    +@@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
    + 	do { \
    + 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
    + 		{ \
    +-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
    +-								   lex->token_start, end - lex->token_start); \
    ++			appendBinaryStrVal(&lex->inc_state->partial_token, \
    ++							   lex->token_start, end - lex->token_start); \
    + 			return JSON_INCOMPLETE; \
    + 		} \
    + 		lex->token_terminator = s; \
     @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      		return code; \
      	} while (0)
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      	/* Hooray, we found the end of the string! */
      	lex->prev_token_terminator = lex->token_terminator;
      	lex->token_terminator = s + 1;
    +@@ src/common/jsonapi.c: json_lex_number(JsonLexContext *lex, const char *s,
    + 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
    + 		len >= lex->input_length)
    + 	{
    +-		appendBinaryStringInfo(&lex->inc_state->partial_token,
    +-							   lex->token_start, s - lex->token_start);
    ++		appendBinaryStrVal(&lex->inc_state->partial_token,
    ++						   lex->token_start, s - lex->token_start);
    + 		if (num_err != NULL)
    + 			*num_err = error;
    + 
     @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
      char *
      json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
      	 * A helper for error messages that should print the current token. The
      	 * format must contain exactly one %.*s specifier.
      	 */
    - #define token_error(lex, format) \
    + #define json_token_error(lex, format) \
     -	appendStringInfo((lex)->errormsg, _(format), \
     -					 (int) ((lex)->token_terminator - (lex)->token_start), \
     -					 (lex)->token_start);
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
      	switch (error)
      	{
     @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
    - 			token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
    + 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
      			break;
      		case JSON_ESCAPING_REQUIRED:
     -			appendStringInfo(lex->errormsg,
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     +						 (unsigned char) *(lex->token_terminator));
      			break;
      		case JSON_EXPECTED_END:
    - 			token_error(lex, "Expected end of input, but found \"%.*s\".");
    + 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
     @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
      		case JSON_INVALID_TOKEN:
    - 			token_error(lex, "Token \"%.*s\" is invalid.");
    + 			json_token_error(lex, "Token \"%.*s\" is invalid.");
      			break;
     +		case JSON_OUT_OF_MEMORY:
     +			/* should have been handled above; use the error path */
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
      		case JSON_UNICODE_ESCAPE_FORMAT:
     @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
      	}
    - #undef token_error
    + #undef json_token_error
      
     -	/*
     -	 * We don't use a default: case, so that the compiler will warn about
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     -	 */
     -	if (lex->errormsg->len == 0)
     -		appendStringInfo(lex->errormsg,
    --						 _("unexpected json parse error type: %d"),
    +-						 "unexpected json parse error type: %d",
     -						 (int) error);
     +	/* Note that lex->errormsg can be NULL in FRONTEND code. */
     +	if (lex->errormsg && lex->errormsg->len == 0)
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     +		 * the possibility of an incorrect input.
     +		 */
     +		appendStrVal(lex->errormsg,
    -+					 _("unexpected json parse error type: %d"),
    ++					 "unexpected json parse error type: %d",
     +					 (int) error);
     +	}
     +
    @@ src/common/meson.build: foreach name, opts : pgcommon_variants
              'dependencies': opts['dependencies'] + [ssl],
            }
     
    + ## src/common/parse_manifest.c ##
    +@@ src/common/parse_manifest.c: json_parse_manifest_incremental_init(JsonManifestParseContext *context)
    + 	parse->state = JM_EXPECT_TOPLEVEL_START;
    + 	parse->saw_version_field = false;
    + 
    +-	makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true);
    ++	if (!makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true))
    ++		context->error_cb(context, "out of memory");
    + 
    + 	incstate->sem.semstate = parse;
    + 	incstate->sem.object_start = json_manifest_object_start;
    +@@ src/common/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, const char *buffer,
    + 
    + 	/* Create a JSON lexing context. */
    + 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
    ++	if (!lex)
    ++		json_manifest_parse_failure(context, "out of memory");
    + 
    + 	/* Set up semantic actions. */
    + 	sem.semstate = &parse;
    +
      ## src/include/common/jsonapi.h ##
     @@
      #ifndef JSONAPI_H
    @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
      	JSON_UNICODE_ESCAPE_FORMAT,
      	JSON_UNICODE_HIGH_ESCAPE,
     @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
    - 	JSON_SEM_ACTION_FAILED,		/* error should already be reported */
    - } JsonParseErrorType;
    + typedef struct JsonParserStack JsonParserStack;
    + typedef struct JsonIncrementalState JsonIncrementalState;
      
     +/*
     + * Don't depend on the internal type header for strval; if callers need access
    @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
     +#endif
     +
     +typedef struct StrValType StrValType;
    - 
    ++
      /*
       * All the fields in this structure should be treated as read-only.
    +  *
     @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
    - 	bits32		flags;
    - 	int			line_number;	/* line number, starting from 1 */
    - 	char	   *line_start;		/* where that line starts within input */
    + 	const char *line_start;		/* where that line starts within input */
    + 	JsonParserStack *pstack;
    + 	JsonIncrementalState *inc_state;
     -	StringInfo	strval;
     -	StringInfo	errormsg;
     +	bool		parse_strval;
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
      } JsonLexContext;
      
      typedef JsonParseErrorType (*json_struct_action) (void *state);
    +
    + ## src/test/modules/test_json_parser/Makefile ##
    +@@ src/test/modules/test_json_parser/Makefile: include $(top_builddir)/src/Makefile.global
    + include $(top_srcdir)/contrib/contrib-global.mk
    + endif
    + 
    ++# TODO: fix this properly
    ++LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
    ++
    + all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
    + 
    + %.o: $(top_srcdir)/$(subdir)/%.c
    +
    + ## src/test/modules/test_json_parser/meson.build ##
    +@@ src/test/modules/test_json_parser/meson.build: endif
    + 
    + test_json_parser_incremental = executable('test_json_parser_incremental',
    +   test_json_parser_incremental_sources,
    +-  dependencies: [frontend_code],
    ++  dependencies: [frontend_code, libpq],
    +   kwargs: default_bin_args + {
    +     'install': false,
    +   },
    +@@ src/test/modules/test_json_parser/meson.build: endif
    + 
    + test_json_parser_perf = executable('test_json_parser_perf',
    +   test_json_parser_perf_sources,
    +-  dependencies: [frontend_code],
    ++  dependencies: [frontend_code, libpq],
    +   kwargs: default_bin_args + {
    +     'install': false,
    +   },
2:  31f2ffdf0b ! 4:  1639f7eb9a libpq: add OAUTHBEARER SASL mechanism
    @@ src/interfaces/libpq/Makefile: endif
      endif
     
      ## src/interfaces/libpq/exports.txt ##
    -@@ src/interfaces/libpq/exports.txt: PQcancelSocket            199
    - PQcancelErrorMessage      200
    - PQcancelReset             201
    - PQcancelFinish            202
    -+PQsetAuthDataHook         203
    -+PQgetAuthDataHook         204
    -+PQdefaultAuthDataHook     205
    +@@ src/interfaces/libpq/exports.txt: PQcancelFinish            202
    + PQsocketPoll              203
    + PQsetChunkedRowsMode      204
    + PQgetCurrentTimeUSec      205
    ++PQsetAuthDataHook         206
    ++PQgetAuthDataHook         207
    ++PQdefaultAuthDataHook     208
     
      ## src/interfaces/libpq/fe-auth-oauth-curl.c (new) ##
     @@
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +					}
     +#endif
     +
    - #ifdef ENABLE_GSS
    - 
    - 					/*
    + 					CONNECTION_FAILED();
    + 				}
    + 				else if (beresp == PqMsg_NegotiateProtocolVersion)
     @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
      				 * Note that conn->pghost must be non-NULL if we are going to
      				 * avoid the Kerberos code doing a hostname look-up.
    @@ src/interfaces/libpq/fe-connect.c: PQsocket(const PGconn *conn)
     
      ## src/interfaces/libpq/fe-misc.c ##
     @@ src/interfaces/libpq/fe-misc.c: static int
    - pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
    + pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
      {
      	int			result;
     +	pgsocket	sock;
    @@ src/interfaces/libpq/fe-misc.c: static int
      	{
      		libpq_append_conn_error(conn, "invalid socket");
      		return -1;
    -@@ src/interfaces/libpq/fe-misc.c: pqSocketCheck(PGconn *conn, int forRead, int forWrite, time_t end_time)
    +@@ src/interfaces/libpq/fe-misc.c: pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
      
      	/* We will retry as long as we get EINTR */
      	do
    --		result = pqSocketPoll(conn->sock, forRead, forWrite, end_time);
    -+		result = pqSocketPoll(sock, forRead, forWrite, end_time);
    +-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
    ++		result = PQsocketPoll(sock, forRead, forWrite, end_time);
      	while (result < 0 && SOCK_ERRNO == EINTR);
      
      	if (result < 0)
    @@ src/interfaces/libpq/libpq-fe.h: extern "C"
      /*
       * Option flags for PQcopyResult
     @@ src/interfaces/libpq/libpq-fe.h: typedef enum
    - 	CONNECTION_CHECK_TARGET,	/* Internal state: checking target server
    - 								 * properties. */
      	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
    --	CONNECTION_ALLOCATED		/* Waiting for connection attempt to be
    -+	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
    + 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
      								 * started.  */
     +	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
     +								 * external system. */
3:  96892c6249 ! 5:  044d8b08e9 backend: add OAUTHBEARER SASL mechanism
    @@ Commit message
     
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
     
    + ## .cirrus.tasks.yml ##
    +@@ .cirrus.tasks.yml: task:
    +     chown root:postgres /tmp/cores
    +     sysctl kern.corefile='/tmp/cores/%N.%P.core'
    +   setup_additional_packages_script: |
    +-    #pkg install -y ...
    ++    pkg install -y curl
    + 
    +   # NB: Intentionally build without -Dllvm. The freebsd image size is already
    +   # large enough to make VM startup slow, and even without llvm freebsd
    +@@ .cirrus.tasks.yml: task:
    +         -Dcassert=true -Dinjection_points=true \
    +         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
    +         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    ++        -Doauth=curl \
    +         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
    +         build
    +     EOF
    +@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    +   --with-libxslt
    +   --with-llvm
    +   --with-lz4
    ++  --with-oauth=curl
    +   --with-pam
    +   --with-perl
    +   --with-python
    +@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    + 
    + LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
    +   -Dllvm=enabled
    ++  -Doauth=curl
    +   -Duuid=e2fs
    + 
    + 
    +@@ .cirrus.tasks.yml: task:
    +     EOF
    + 
    +   setup_additional_packages_script: |
    +-    #apt-get update
    +-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
    ++    apt-get update
    ++    DEBIAN_FRONTEND=noninteractive apt-get -y install \
    ++      libcurl4-openssl-dev \
    ++      libcurl4-openssl-dev:i386 \
    + 
    +   matrix:
    +     - name: Linux - Debian Bullseye - Autoconf
    +@@ .cirrus.tasks.yml: task:
    +     folder: $CCACHE_DIR
    + 
    +   setup_additional_packages_script: |
    +-    #apt-get update
    +-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
    ++    apt-get update
    ++    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
    + 
    +   ###
    +   # Test that code can be built with gcc/clang without warnings
    +
      ## src/backend/libpq/Makefile ##
     @@ src/backend/libpq/Makefile: include $(top_builddir)/src/Makefile.global
      # be-fsstubs is here for historical reasons, probably belongs elsewhere
    @@ src/backend/utils/misc/guc_tables.c
      #include "nodes/queryjumble.h"
      #include "optimizer/cost.h"
     @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[] =
    - 		check_standby_slot_names, assign_standby_slot_names, NULL
    + 		check_synchronized_standby_slots, assign_synchronized_standby_slots, NULL
      	},
      
     +	{
    @@ src/include/libpq/sasl.h: typedef struct pg_be_sasl_mech
      
      /* Common implementation for auth.c */
     
    + ## src/test/modules/Makefile ##
    +@@ src/test/modules/Makefile: SUBDIRS = \
    + 		  dummy_index_am \
    + 		  dummy_seclabel \
    + 		  libpq_pipeline \
    ++		  oauth_validator \
    + 		  plsample \
    + 		  spgist_name_ops \
    + 		  test_bloomfilter \
    +
      ## src/test/modules/meson.build ##
     @@ src/test/modules/meson.build: subdir('gin')
      subdir('injection_points')
    @@ src/test/modules/oauth_validator/.gitignore (new)
      ## src/test/modules/oauth_validator/Makefile (new) ##
     @@
     +export PYTHON
    ++export with_oauth
     +
     +MODULES = validator
     +PGFILEDESC = "validator - test OAuth validator module"
    @@ src/test/modules/oauth_validator/meson.build (new)
     +    ],
     +    'env': {
     +      'PYTHON': python.path(),
    ++      'with_oauth': oauth_library,
     +    },
     +  },
     +}
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +use PostgreSQL::Test::OAuthServer;
     +use Test::More;
     +
    ++if ($ENV{with_oauth} ne 'curl')
    ++{
    ++	plan skip_all => 'client-side OAuth not supported by this build';
    ++}
    ++
     +my $node = PostgreSQL::Test::Cluster->new('primary');
     +$node->init;
     +$node->append_conf('postgresql.conf', "log_connections = on\n");
4:  5677e59152 ! 6:  84c6893325 Review comments
    @@ Commit message
         * Implement a version check for libcurl in autoconf, the
           equivalent check for Meson is still a TODO.
         * Address a few TODOs in the code
    -    * libpq JSON support memory management fixups
    +    * libpq JSON support memory management fixups [ed: these have been moved
    +      to an earlier commit]
     
      ## config/programs.m4 ##
     @@ config/programs.m4: if test "$pgac_cv_ldap_safe" != yes; then
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth)
      	}
      
     
    - ## src/common/jsonapi.c ##
    -@@
    - #endif
    - 
    - /*
    -- * In backend, we will use palloc/pfree along with StringInfo.  In frontend, use
    -- * malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
    -+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
    -+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
    -  */
    - #ifdef FRONTEND
    - 
    - #define STRDUP(s) strdup(s)
    - #define ALLOC(size) malloc(size)
    -+#define ALLOC0(size) calloc(1, size)
    -+#define FREE(s) free(s)
    - 
    - #define appendStrVal		appendPQExpBuffer
    - #define appendBinaryStrVal  appendBinaryPQExpBuffer
    -@@
    - 
    - #define STRDUP(s) pstrdup(s)
    - #define ALLOC(size) palloc(size)
    -+#define ALLOC0(size) palloc0(size)
    -+#define FREE(s) pfree(s)
    - 
    - #define appendStrVal		appendStringInfo
    - #define appendBinaryStrVal  appendBinaryStringInfo
    -@@ src/common/jsonapi.c: IsValidJsonNumber(const char *str, int len)
    -  * responsible for freeing the returned struct, either by calling
    -  * freeJsonLexContext() or (in backend environment) via memory context
    -  * cleanup.
    -+ *
    -+ * In frontend code this can return NULL on OOM, so callers must inspect the
    -+ * returned pointer.
    -  */
    - JsonLexContext *
    - makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
    -@@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, char *json,
    - {
    - 	if (lex == NULL)
    - 	{
    --		lex = palloc0(sizeof(JsonLexContext));
    -+		lex = ALLOC0(sizeof(JsonLexContext));
    -+		if (!lex)
    -+			return NULL;
    - 		lex->flags |= JSONLEX_FREE_STRUCT;
    - 	}
    - 	else
    -@@ src/common/jsonapi.c: freeJsonLexContext(JsonLexContext *lex)
    - {
    - 	static const JsonLexContext empty = {0};
    - 
    -+	if (!lex)
    -+		return;
    -+
    - 	if (lex->flags & JSONLEX_FREE_STRVAL)
    - 		destroyStrVal(lex->strval);
    - 
    -@@ src/common/jsonapi.c: freeJsonLexContext(JsonLexContext *lex)
    - 		destroyStrVal(lex->errormsg);
    - 
    - 	if (lex->flags & JSONLEX_FREE_STRUCT)
    --		pfree(lex);
    -+		FREE(lex);
    - 	else
    - 		*lex = empty;
    - }
    -
    - ## src/common/parse_manifest.c ##
    -@@ src/common/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, char *buffer,
    - 
    - 	/* Create a JSON lexing context. */
    - 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
    -+	if (!lex)
    -+		json_manifest_parse_failure(context, "out of memory");
    - 
    - 	/* Set up semantic actions. */
    - 	sem.semstate = &parse;
    -
      ## src/include/common/oauth-common.h ##
     @@
       * oauth-common.h
Attachment: v23-0002-Remove-fe_memutils-from-libpgcommon_shlib.patch (application/octet-stream)
From d6cae9157e1571f5afb514d39494a3c6cb5d5fa8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 1 Jul 2024 14:18:33 -0700
Subject: [PATCH v23 2/6] Remove fe_memutils from libpgcommon_shlib

libpq appears to have no need for this, and the exit() references cause
our libpq-refs-stamp test to fail if the linker doesn't strip out the
unused code.
---
 src/common/Makefile    | 2 +-
 src/common/meson.build | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/common/Makefile b/src/common/Makefile
index 3d83299432..5712078dae 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -105,11 +105,11 @@ endif
 # libraries such as libpq to report errors directly.
 OBJS_FRONTEND_SHLIB = \
 	$(OBJS_COMMON) \
-	fe_memutils.o \
 	restricted_token.o \
 	sprompt.o
 OBJS_FRONTEND = \
 	$(OBJS_FRONTEND_SHLIB) \
+	fe_memutils.o \
 	logging.o
 
 # foo.o, foo_shlib.o, and foo_srv.o are all built from foo.c
diff --git a/src/common/meson.build b/src/common/meson.build
index de68e408fa..7eb604c608 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -105,13 +105,13 @@ common_sources_cflags = {
 
 common_sources_frontend_shlib = common_sources
 common_sources_frontend_shlib += files(
-  'fe_memutils.c',
   'restricted_token.c',
   'sprompt.c',
 )
 
 common_sources_frontend_static = common_sources_frontend_shlib
 common_sources_frontend_static += files(
+  'fe_memutils.c',
   'logging.c',
 )
 
-- 
2.34.1

Attachment: v23-0003-common-jsonapi-support-libpq-as-a-client.patch (application/octet-stream)
From 5543539169dbe3a9e5cef4f0151cff4f8498c8cf Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v23 3/6] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed rather than exit()ing.

Co-authored-by: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_combinebackup/Makefile             |   4 +-
 src/bin/pg_combinebackup/meson.build          |   2 +-
 src/bin/pg_verifybackup/Makefile              |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 448 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   5 +-
 src/include/common/jsonapi.h                  |  20 +-
 src/test/modules/test_json_parser/Makefile    |   3 +
 src/test/modules/test_json_parser/meson.build |   4 +-
 10 files changed, 361 insertions(+), 137 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,7 +32,7 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 1d4b9c218f..cab677b574 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,7 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/common/Makefile b/src/common/Makefile
index 5712078dae..f1da2ed13d 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 0c6374b0fc..e5fe512b8b 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,66 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define REALLOC realloc
+#define FREE(s) free(s)
+
+#define appendStrVal			appendPQExpBuffer
+#define appendBinaryStrVal		appendBinaryPQExpBuffer
+#define appendStrValChar		appendPQExpBufferChar
+/* XXX should we add a macro version to PQExpBuffer? */
+#define appendStrValCharMacro	appendPQExpBufferChar
+#define createStrVal			createPQExpBuffer
+#define initStrVal				initPQExpBuffer
+#define resetStrVal				resetPQExpBuffer
+#define termStrVal				termPQExpBuffer
+#define destroyStrVal			destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define REALLOC repalloc
+
+/*
+ * Backend pfree() doesn't handle NULL pointers like the frontend's does; smooth
+ * that over to reduce mental gymnastics. Avoid multiple evaluation of the macro
+ * argument to avoid future hair-pulling.
+ */
+#define FREE(s) do {	\
+	void *__v = (s);	\
+	if (__v)			\
+		pfree(__v);		\
+} while (0)
+
+#define appendStrVal			appendStringInfo
+#define appendBinaryStrVal		appendBinaryStringInfo
+#define appendStrValChar		appendStringInfoChar
+#define appendStrValCharMacro	appendStringInfoCharMacro
+#define createStrVal			makeStringInfo
+#define initStrVal				initStringInfo
+#define resetStrVal				resetStringInfo
+#define termStrVal(s)			pfree((s)->data)
+#define destroyStrVal			destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -103,7 +159,7 @@ struct JsonIncrementalState
 {
 	bool		is_last_chunk;
 	bool		partial_completed;
-	StringInfoData partial_token;
+	StrValType	partial_token;
 };
 
 /*
@@ -219,6 +275,7 @@ static JsonParseErrorType parse_object(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType parse_array_element(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType parse_array(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
+static bool allocate_incremental_state(JsonLexContext *lex);
 
 /* the null action object used for pure validation */
 JsonSemAction nullSemAction =
@@ -273,15 +330,11 @@ IsValidJsonNumber(const char *str, size_t len)
 {
 	bool		numeric_error;
 	size_t		total_len;
-	JsonLexContext dummy_lex;
+	JsonLexContext dummy_lex = {0};
 
 	if (len <= 0)
 		return false;
 
-	dummy_lex.incremental = false;
-	dummy_lex.inc_state = NULL;
-	dummy_lex.pstack = NULL;
-
 	/*
 	 * json_lex_number expects a leading  '-' to have been eaten already.
 	 *
@@ -321,6 +374,9 @@ IsValidJsonNumber(const char *str, size_t len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
@@ -328,7 +384,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -341,13 +399,70 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
 
 	return lex;
 }
 
+/*
+ * Allocates the internal bookkeeping structures for incremental parsing. This
+ * can only fail in-band with FRONTEND code.
+ */
+#define JS_STACK_CHUNK_SIZE 64
+#define JS_MAX_PROD_LEN 10		/* more than we need */
+#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
+								 * number */
+static bool
+allocate_incremental_state(JsonLexContext *lex)
+{
+	void	   *pstack,
+			   *prediction,
+			   *fnames,
+			   *fnull;
+
+	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
+	pstack = ALLOC(sizeof(JsonParserStack));
+	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
+	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
+	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+#ifdef FRONTEND
+	if (!lex->inc_state
+		|| !pstack
+		|| !prediction
+		|| !fnames
+		|| !fnull)
+	{
+		FREE(lex->inc_state);
+		FREE(pstack);
+		FREE(prediction);
+		FREE(fnames);
+		FREE(fnull);
+
+		return false;
+	}
+#endif
+
+	initStrVal(&(lex->inc_state->partial_token));
+	lex->pstack = pstack;
+	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
+	lex->pstack->prediction = prediction;
+	lex->pstack->pred_index = 0;
+	lex->pstack->fnames = fnames;
+	lex->pstack->fnull = fnull;
+
+	lex->incremental = true;
+	return true;
+}
+
 
 /*
  * makeJsonLexContextIncremental
@@ -357,19 +472,20 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
  * we don't need the input, that will be handed in bit by bit to the
  * parse routine. We also need an accumulator for partial tokens in case
  * the boundary between chunks happens to fall in the middle of a token.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
-#define JS_STACK_CHUNK_SIZE 64
-#define JS_MAX_PROD_LEN 10		/* more than we need */
-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
-								 * number */
-
 JsonLexContext *
 makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 							  bool need_escapes)
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
+
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -377,42 +493,60 @@ makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 
 	lex->line_number = 1;
 	lex->input_encoding = encoding;
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+	if (!allocate_incremental_state(lex))
+	{
+		if (lex->flags & JSONLEX_FREE_STRUCT)
+			FREE(lex);
+		return NULL;
+	}
+
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+
 	return lex;
 }
 
-static inline void
+static inline bool
 inc_lex_level(JsonLexContext *lex)
 {
-	lex->lex_level += 1;
-
-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
+	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
 	{
-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
-		lex->pstack->prediction =
-			repalloc(lex->pstack->prediction,
-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
-		if (lex->pstack->fnames)
-			lex->pstack->fnames =
-				repalloc(lex->pstack->fnames,
-						 lex->pstack->stack_size * sizeof(char *));
-		if (lex->pstack->fnull)
-			lex->pstack->fnull =
-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
+		size_t		new_stack_size;
+		char	   *new_prediction;
+		char	  **new_fnames;
+		bool	   *new_fnull;
+
+		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
+
+		new_prediction = REALLOC(lex->pstack->prediction,
+								 new_stack_size * JS_MAX_PROD_LEN);
+		new_fnames = REALLOC(lex->pstack->fnames,
+							 new_stack_size * sizeof(char *));
+		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
+
+#ifdef FRONTEND
+		if (!new_prediction || !new_fnames || !new_fnull)
+			return false;
+#endif
+
+		lex->pstack->stack_size = new_stack_size;
+		lex->pstack->prediction = new_prediction;
+		lex->pstack->fnames = new_fnames;
+		lex->pstack->fnull = new_fnull;
 	}
+
+	lex->lex_level += 1;
+	return true;
 }
 
 static inline void
@@ -482,24 +616,31 @@ get_fnull(JsonLexContext *lex)
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
+	if (!lex)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		destroyStrVal(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		destroyStrVal(lex->errormsg);
 
 	if (lex->incremental)
 	{
-		pfree(lex->inc_state->partial_token.data);
-		pfree(lex->inc_state);
-		pfree(lex->pstack->prediction);
-		pfree(lex->pstack->fnames);
-		pfree(lex->pstack->fnull);
-		pfree(lex->pstack);
+		termStrVal(&lex->inc_state->partial_token);
+		FREE(lex->inc_state);
+		FREE(lex->pstack->prediction);
+		FREE(lex->pstack->fnames);
+		FREE(lex->pstack->fnull);
+		FREE(lex->pstack);
 	}
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -522,22 +663,13 @@ JsonParseErrorType
 pg_parse_json(JsonLexContext *lex, JsonSemAction *sem)
 {
 #ifdef FORCE_JSON_PSTACK
-
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-
 	/*
 	 * We don't need partial token processing, there is only one chunk. But we
 	 * still need to init the partial token string so that freeJsonLexContext
-	 * works.
+	 * works, so perform the full incremental initialization.
 	 */
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+	if (!allocate_incremental_state(lex))
+		return JSON_OUT_OF_MEMORY;
 
 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
 
@@ -597,7 +729,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -737,7 +869,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_OEND:
@@ -766,7 +900,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_AEND:
@@ -793,9 +929,11 @@ pg_parse_json_incremental(JsonLexContext *lex,
 						json_ofield_action ostart = sem->object_field_start;
 						json_ofield_action oend = sem->object_field_end;
 
-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
+						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
 						{
-							fname = pstrdup(lex->strval->data);
+							fname = STRDUP(lex->strval->data);
+							if (fname == NULL)
+								return JSON_OUT_OF_MEMORY;
 						}
 						set_fname(lex, fname);
 					}
@@ -883,14 +1021,21 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							 */
 							if (tok == JSON_TOKEN_STRING)
 							{
-								if (lex->strval != NULL)
-									pstack->scalar_val = pstrdup(lex->strval->data);
+								if (lex->parse_strval)
+								{
+									pstack->scalar_val = STRDUP(lex->strval->data);
+									if (pstack->scalar_val == NULL)
+										return JSON_OUT_OF_MEMORY;
+								}
 							}
 							else
 							{
 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
 
-								pstack->scalar_val = palloc(tlen + 1);
+								pstack->scalar_val = ALLOC(tlen + 1);
+								if (pstack->scalar_val == NULL)
+									return JSON_OUT_OF_MEMORY;
+
 								memcpy(pstack->scalar_val, lex->token_start, tlen);
 								pstack->scalar_val[tlen] = '\0';
 							}
@@ -1025,14 +1170,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -1066,8 +1218,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -1123,6 +1279,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -1312,15 +1473,24 @@ json_lex(JsonLexContext *lex)
 	const char *const end = lex->input + lex->input_length;
 	JsonParseErrorType result;
 
-	if (lex->incremental && lex->inc_state->partial_completed)
+	if (lex->incremental)
 	{
-		/*
-		 * We just lexed a completed partial token on the last call, so reset
-		 * everything
-		 */
-		resetStringInfo(&(lex->inc_state->partial_token));
-		lex->token_terminator = lex->input;
-		lex->inc_state->partial_completed = false;
+		if (lex->inc_state->partial_completed)
+		{
+			/*
+			 * We just lexed a completed partial token on the last call, so
+			 * reset everything
+			 */
+			resetStrVal(&(lex->inc_state->partial_token));
+			lex->token_terminator = lex->input;
+			lex->inc_state->partial_completed = false;
+		}
+
+#ifdef FRONTEND
+		/* Make sure our partial token buffer is valid before using it below. */
+		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
+			return JSON_OUT_OF_MEMORY;
+#endif
 	}
 
 	s = lex->token_terminator;
@@ -1331,7 +1501,7 @@ json_lex(JsonLexContext *lex)
 		 * We have a partial token. Extend it and if completed lex it by a
 		 * recursive call
 		 */
-		StringInfo	ptok = &(lex->inc_state->partial_token);
+		StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
 		JsonLexContext dummy_lex;
@@ -1358,7 +1528,7 @@ json_lex(JsonLexContext *lex)
 			{
 				char		c = lex->input[i];
 
-				appendStringInfoCharMacro(ptok, c);
+				appendStrValCharMacro(ptok, c);
 				added++;
 				if (c == '"' && escapes % 2 == 0)
 				{
@@ -1403,7 +1573,7 @@ json_lex(JsonLexContext *lex)
 						case '8':
 						case '9':
 							{
-								appendStringInfoCharMacro(ptok, cc);
+								appendStrValCharMacro(ptok, cc);
 								added++;
 							}
 							break;
@@ -1424,7 +1594,7 @@ json_lex(JsonLexContext *lex)
 
 				if (JSON_ALPHANUMERIC_CHAR(cc))
 				{
-					appendStringInfoCharMacro(ptok, cc);
+					appendStrValCharMacro(ptok, cc);
 					added++;
 				}
 				else
@@ -1467,6 +1637,7 @@ json_lex(JsonLexContext *lex)
 		dummy_lex.input_length = ptok->len;
 		dummy_lex.input_encoding = lex->input_encoding;
 		dummy_lex.incremental = false;
+		dummy_lex.parse_strval = lex->parse_strval;
 		dummy_lex.strval = lex->strval;
 
 		partial_result = json_lex(&dummy_lex);
@@ -1622,8 +1793,8 @@ json_lex(JsonLexContext *lex)
 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
 						p == lex->input + lex->input_length)
 					{
-						appendBinaryStringInfo(
-											   &(lex->inc_state->partial_token), s, end - s);
+						appendBinaryStrVal(
+										   &(lex->inc_state->partial_token), s, end - s);
 						return JSON_INCOMPLETE;
 					}
 
@@ -1680,8 +1851,8 @@ json_lex_string(JsonLexContext *lex)
 	do { \
 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
 		{ \
-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
-								   lex->token_start, end - lex->token_start); \
+			appendBinaryStrVal(&lex->inc_state->partial_token, \
+							   lex->token_start, end - lex->token_start); \
 			return JSON_INCOMPLETE; \
 		} \
 		lex->token_terminator = s; \
@@ -1694,8 +1865,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -1732,7 +1910,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -1789,19 +1967,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -1811,22 +1989,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -1861,7 +2039,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -1885,8 +2063,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -1902,6 +2080,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -2019,8 +2202,8 @@ json_lex_number(JsonLexContext *lex, const char *s,
 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
 		len >= lex->input_length)
 	{
-		appendBinaryStringInfo(&lex->inc_state->partial_token,
-							   lex->token_start, s - lex->token_start);
+		appendBinaryStrVal(&lex->inc_state->partial_token,
+						   lex->token_start, s - lex->token_start);
 		if (num_err != NULL)
 			*num_err = error;
 
@@ -2096,19 +2279,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		resetStrVal(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = createStrVal();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define json_token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	appendStrVal((lex)->errormsg, _(format), \
+				 (int) ((lex)->token_terminator - (lex)->token_start), \
+				 (lex)->token_start);
 
 	switch (error)
 	{
@@ -2127,9 +2316,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -2160,6 +2349,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			json_token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -2191,15 +2383,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef json_token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 "unexpected json parse error type: %d",
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d",
+					 (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index 7eb604c608..ea5f19e89e 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -125,13 +125,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -140,6 +145,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -157,7 +163,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -170,7 +175,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 612e120b17..0da6272336 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -139,7 +139,8 @@ json_parse_manifest_incremental_init(JsonManifestParseContext *context)
 	parse->state = JM_EXPECT_TOPLEVEL_START;
 	parse->saw_version_field = false;
 
-	makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true);
+	if (!makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true))
+		context->error_cb(context, "out of memory");
 
 	incstate->sem.semstate = parse;
 	incstate->sem.object_start = json_manifest_object_start;
@@ -240,6 +241,8 @@ json_parse_manifest(JsonManifestParseContext *context, const char *buffer,
 
 	/* Create a JSON lexing context. */
 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
+	if (!lex)
+		json_manifest_parse_failure(context, "out of memory");
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 71a491d72d..d03a61fcd6 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -51,6 +49,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -64,6 +63,18 @@ typedef enum JsonParseErrorType
 typedef struct JsonParserStack JsonParserStack;
 typedef struct JsonIncrementalState JsonIncrementalState;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
+
 /*
  * All the fields in this structure should be treated as read-only.
  *
@@ -102,8 +113,9 @@ typedef struct JsonLexContext
 	const char *line_start;		/* where that line starts within input */
 	JsonParserStack *pstack;
 	JsonIncrementalState *inc_state;
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/test/modules/test_json_parser/Makefile b/src/test/modules/test_json_parser/Makefile
index 2dc7175b7c..f410e04cf1 100644
--- a/src/test/modules/test_json_parser/Makefile
+++ b/src/test/modules/test_json_parser/Makefile
@@ -19,6 +19,9 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
 
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
+
 all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
 
 %.o: $(top_srcdir)/$(subdir)/%.c
diff --git a/src/test/modules/test_json_parser/meson.build b/src/test/modules/test_json_parser/meson.build
index b224f3e07e..8136070233 100644
--- a/src/test/modules/test_json_parser/meson.build
+++ b/src/test/modules/test_json_parser/meson.build
@@ -13,7 +13,7 @@ endif
 
 test_json_parser_incremental = executable('test_json_parser_incremental',
   test_json_parser_incremental_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
@@ -32,7 +32,7 @@ endif
 
 test_json_parser_perf = executable('test_json_parser_perf',
   test_json_parser_perf_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
-- 
2.34.1

v23-0005-backend-add-OAUTHBEARER-SASL-mechanism.patch
From 044d8b08e99ddcac3e9680ee50c53672009bf95b Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v23 5/6] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, when used in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

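The authentication/authorization split above can be illustrated with a toy
validator. This is a hedged sketch only: ToyValidatorResult, the hard-coded
token, and the identity string are illustrative stand-ins for the real
ValidatorModuleResult API and for actual issuer-side verification.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for the ValidatorModuleResult described above. */
typedef struct
{
	bool		authorized;		/* may the client connect as its role? */
	const char *authn_id;		/* authenticated identity, or NULL */
} ToyValidatorResult;

/*
 * Toy validator: accepts one hard-coded token and maps it to a fixed
 * identity. A real module would instead verify the token's signature or
 * present it to the issuer's introspection endpoint.
 */
ToyValidatorResult
toy_validate(const char *token, const char *role)
{
	ToyValidatorResult res = {false, NULL};

	/* Step 1: reject any token not issued by a trusted party. */
	if (token == NULL || strcmp(token, "secret-demo-token") != 0)
		return res;

	/* Step 2a: authentication succeeded; report who the user is. */
	res.authn_id = "alice@example.org";

	/* Step 2b: authorize only the role this token may assume. */
	res.authorized = (strcmp(role, "alice") == 0);

	return res;
}
```

Note how case c above falls out naturally: a trusted token presented with the
wrong role yields a non-NULL authn_id together with authorized=false, so the
failure stays attributable in the logs.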
The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.

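Putting the options together, hypothetical pg_hba.conf entries might look like
the following (the issuer, scope, and map values are examples only, not
defaults):

```
# Authentication only: the validator's authn_id must map to the requested
# role through the "oauthmap" pg_ident mapping.
host all all samehost oauth issuer="https://accounts.google.com" scope="openid email" map=oauthmap

# Authorization only: the validator decides which roles the token may
# assume; pg_ident mapping is bypassed.
host all all samehost oauth issuer="https://login.microsoft.com/common/v2.0" scope="openid email" trust_validator_authz=1
```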
Several TODOs:
- port to platforms other than "modern Linux/BSD"
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/common/Makefile                           |   2 +-
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   |  79 +++
 .../modules/oauth_validator/t/oauth_server.py | 114 +++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   3 +
 27 files changed, 1241 insertions(+), 39 deletions(-)
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 33646faead..95f131baa9 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -163,7 +163,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -223,6 +224,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -235,6 +237,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -310,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -676,8 +681,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..024f304e4d
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * TODO: note that escaping here should be belt-and-suspenders, since
+	 * escapable characters aren't valid in either the issuer URI or the scope
+	 * list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Reject headers too short to contain "Bearer " plus a nonempty token. */
+	if (!header || strlen(header) <= strlen(BEARER_SCHEME))
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is missing or too short.")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 4161959914..486a34e719 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 18271def2e..aabe0b0e68 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d28b0bcb40..461094f288 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4707,6 +4708,17 @@ struct config_string ConfigureNamesString[] =
 		check_synchronized_standby_slots, assign_synchronized_standby_slots, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/common/Makefile b/src/common/Makefile
index f1da2ed13d..beb9830030 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this
+ * limit, which leaves some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..e3cf3ac7f2
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,79 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test    oauth issuer="$issuer"           scope="openid postgres"
+local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+my $user = "test";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234", role="$user"/,
+					 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="test" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+				  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+$node->log_check("user $user: validator receives correct parameters", $log_start,
+				 log_like => [
+					 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+					 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+				 ]);
+$node->log_check("user $user: validator sets authenticated identity", $log_start,
+				 log_like => [
+					 qr/connection authenticated: identity="testalt" method=oauth/,
+				 ]);
+$log_start = $log_end;
+
+$webserver->stop();
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..77e3883a81
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,114 @@
+#! /usr/bin/env python3
+
+import http.server
+import json
+import os
+import sys
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+
+    def do_GET(self):
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def do_POST(self):
+        self._check_issuer()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        """
+
+        resp = json.dumps(js).encode("ascii")
+
+        self.send_response(200, "OK")
+        self.send_header("Content-Type", "application/json")
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        return {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "interval": 0,
+            "verification_uri": uri,
+            "expires_in": 5,
+        }
+
+    def token(self) -> JsonObject:
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..09a4bf61d2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+				state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 0135c5a795..f14839f4c5 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2388,6 +2388,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2431,7 +2436,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..d96733f531
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use threads;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		// die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	print("# OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 8f323da558..b02ee48898 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1715,6 +1715,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3055,6 +3056,7 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
@@ -3647,6 +3649,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

v23-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 1639f7eb9a9184ae58dcd20f8184dd01dbd820b3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v23 4/6] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.
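For example, with either of the build systems touched by this patch (option
names taken from the configure.ac and meson_options.txt hunks below):

```shell
# autoconf
./configure --with-oauth=curl

# meson
meson setup build -Doauth=curl
```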

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 configure                                 |  110 ++
 configure.ac                              |   28 +
 meson.build                               |   29 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   10 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 1982 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 +++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 23 files changed, 3199 insertions(+), 26 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/configure b/configure
index 76f06bd8fd..864bb9b7c3 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -861,6 +862,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1571,6 +1573,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8488,6 +8491,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13040,6 +13089,56 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14065,6 +14164,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index ab2d51c21c..6aceed898c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1443,10 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1638,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/meson.build b/meson.build
index 5387bb6d5f..b01bc91c09 100644
--- a/meson.build
+++ b/meson.build
@@ -840,6 +840,33 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2974,6 +3001,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3645,6 +3673,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 246cecf382..3ffe1f52c2 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -124,6 +124,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 83b91fe916..6f6174811d 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..5ff3488bfb
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index f8d3e3b6b8..1ee7f3731d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -246,6 +246,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -726,6 +729,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index b36a765764..0e0369cb63 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..0504f96e4e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,1982 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls to
+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by cURL, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by cURL to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"cURL multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two cURL handles,
+ * so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	char	   *content_type;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	/* Make sure the server thinks it's given us JSON. */
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		goto cleanup;
+	}
+	else if (strcasecmp(content_type, "application/json") != 0)
+	{
+		actx_error(actx, "unexpected content type \"%s\"", content_type);
+		goto cleanup;
+	}
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		goto cleanup;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/* Default to five seconds, per RFC 8628, Sec. 3.2. */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * cURL Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * cURL multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the `data` field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * cURL multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not
+ * needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means cURL wants us to call back immediately. That's
+		 * not technically an option for timerfd, but we can make the timeout
+		 * ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data	*curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * cURL for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create cURL multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create cURL handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (!actx->headers)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from cURL; appends the response body into actx->work_data.
+ * See start_request().
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/* Signal an error to cURL if the buffer ran out of memory. */
+	if (PQExpBufferBroken(resp))
+		return 0;
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "expected one running cURL handle, found %d", running);
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future cURL versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "cURL easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or a (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	/* start_request() resets work_buffer and registers it as the sink. */
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the cURL implementation. This will
+ * be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized cURL,
+	 * which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell cURL to initialize
+	 * everything else, because other pieces of our client executable may
+	 * already be using cURL for their own purposes. If we initialize libcurl
+	 * first, with only a subset of its features, we could break those other
+	 * clients nondeterministically, and that would probably be a nightmare to
+	 * debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+
+		/* FALLTHROUGH */
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					if (err->error)
+						appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+					else
+						appendPQExpBufferStr(&actx->errbuf,
+											 libpq_gettext("(no error code received)"));
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					actx->authz.interval += 5;	/* TODO check for overflow? */
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..66ee8ff076
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded null byte, and was discarded\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..8d4ea45aa8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent the final message of the
+	 *			   exchange, so the mechanism must not generate a response
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 3b25d8afda..d02424e11b 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -956,12 +998,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1119,7 +1167,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1136,7 +1184,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1452,3 +1501,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 071b1b34aa..44f38f1836 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -364,6 +364,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -627,6 +644,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2644,6 +2662,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3680,6 +3699,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3835,6 +3855,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3868,7 +3898,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3901,6 +3941,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4582,6 +4657,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4699,6 +4775,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7181,6 +7262,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f235bfbb41..aa1fee38c8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1041,10 +1041,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1061,7 +1064,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 87a6f3df07..25f216afcf 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -82,6 +84,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -163,6 +167,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -695,10 +706,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f36d76bf3f..c9d9213cf3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -357,6 +357,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -420,6 +422,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -491,6 +502,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 5618050b30..830da57994 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -233,6 +233,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e6c1caf649..8f323da558 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -366,6 +367,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1711,6 +1714,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1775,6 +1779,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1935,11 +1940,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3438,6 +3446,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

v23-0006-Review-comments.patchapplication/octet-stream; name=v23-0006-Review-comments.patchDownload
From 84c6893325491fc860bcb903741e894b60478f9a Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v23 6/6] Review comments

Fixes and tidy-ups following a review of v21, a few items
are (listed in no specific order):

* Implement a version check for libcurl in autoconf, the
  equivalent check for Meson is still a TODO.
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: these have been moved
  to an earlier commit]
---
 config/programs.m4                           |  21 ++
 configure                                    |  34 +++
 configure.ac                                 |   1 +
 src/backend/libpq/auth-oauth.c               |  28 +-
 src/include/common/oauth-common.h            |   2 +-
 src/interfaces/libpq/Makefile                |   4 +-
 src/interfaces/libpq/fe-auth-oauth-curl.c    | 275 +++++++++++--------
 src/interfaces/libpq/fe-auth-oauth.c         |   9 +-
 src/interfaces/libpq/fe-auth-oauth.h         |   2 +-
 src/test/modules/oauth_validator/validator.c |   2 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm |  10 +-
 src/tools/pgindent/typedefs.list             |   1 +
 12 files changed, 256 insertions(+), 133 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..157da7eec5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,27 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 8.4.0 or higher, since earlier versions can be compiled
+# with a code path containing exit(), and PostgreSQL does not allow libpq to
+# link against any library which may call exit().
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 8 || (LIBCURL_VERSION_MAJOR == 8 && LIBCURL_VERSION_MINOR < 4)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 8.4.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 864bb9b7c3..1a785006b9 100755
--- a/configure
+++ b/configure
@@ -13137,6 +13137,40 @@ else
   as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
 fi
 
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 8 || (LIBCURL_VERSION_MAJOR == 8 && LIBCURL_VERSION_MINOR < 4)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 8.4.0 or later is required." "$LINENO" 5
+fi
 fi
 
 # for contrib/sepgsql
diff --git a/configure.ac b/configure.ac
index 6aceed898c..cdc6bea660 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1445,6 +1445,7 @@ AC_SUBST(LDAP_LIBS_BE)
 
 if test "$with_oauth" = curl ; then
   AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
 fi
 
 # for contrib/sepgsql
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 024f304e4d..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -476,9 +476,9 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	initStringInfo(&buf);
 
 	/*
-	 * TODO: note that escaping here should be belt-and-suspenders, since
-	 * escapable characters aren't valid in either the issuer URI or the scope
-	 * list, but the HBA doesn't enforce that yet.
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
 	 */
 	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
 
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
index 5ff3488bfb..8fe5626778 100644
--- a/src/include/common/oauth-common.h
+++ b/src/include/common/oauth-common.h
@@ -3,7 +3,7 @@
  * oauth-common.h
  *		Declarations for helper functions used for OAuth/OIDC authentication
  *
- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/oauth-common.h
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 0e0369cb63..9a290782f2 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -116,6 +116,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -123,7 +125,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 0504f96e4e..9dd8454cac 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -3,7 +3,7 @@
  * fe-auth-oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication.
  *
- * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -143,7 +145,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -151,8 +153,9 @@ typedef enum
 } OAuthStep;
 
 /*
- * The async_ctx holds onto state that needs to persist across multiple calls to
- * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
  */
 struct async_ctx
 {
@@ -162,9 +165,10 @@ struct async_ctx
 	int			timerfd;		/* a timerfd for signaling async timeouts */
 #endif
 	pgsocket	mux;			/* the multiplexer socket containing all
-								 * descriptors tracked by cURL, plus the
+								 * descriptors tracked by libcurl, plus the
 								 * timerfd */
-	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
 	CURL	   *curl;			/* the (single) easy handle for serial
 								 * requests */
 
@@ -183,7 +187,7 @@ struct async_ctx
 	 *				actx_error[_str] to manipulate this. This must be filled
 	 *				with something useful on an error.
 	 *
-	 * - curl_err:	an optional static error buffer used by cURL to put
+	 * - curl_err:	an optional static error buffer used by libcurl to put
 	 *				detailed information about failures. Unfortunately
 	 *				untranslatable.
 	 *
@@ -195,7 +199,7 @@ struct async_ctx
 	 */
 	const char *errctx;			/* not freed; must point to static allocation */
 	PQExpBufferData errbuf;
-	char		curl_err[CURL_ERROR_SIZE];
+	PQExpBufferData curl_err;
 
 	/*
 	 * These documents need to survive over multiple calls, and are therefore
@@ -205,6 +209,8 @@ struct async_ctx
 	struct device_authz authz;
 
 	bool		user_prompted;	/* have we already sent the authz prompt? */
+
+	int			running;		/* number of easy handles still running */
 };
 
 /*
@@ -238,7 +244,7 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 
 		if (err)
 			libpq_append_conn_error(conn,
-									"cURL easy handle removal failed: %s",
+									"libcurl easy handle removal failed: %s",
 									curl_multi_strerror(err));
 	}
 
@@ -258,7 +264,7 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 
 		if (err)
 			libpq_append_conn_error(conn,
-									"cURL multi handle cleanup failed: %s",
+									"libcurl multi handle cleanup failed: %s",
 									curl_multi_strerror(err));
 	}
 
@@ -292,8 +298,8 @@ free_curl_async_ctx(PGconn *conn, void *ctx)
 	appendPQExpBufferStr(&(ACTX)->errbuf, S)
 
 /*
- * Macros for getting and setting state for the connection's two cURL handles,
- * so you don't have to write out the error handling every time.
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
  */
 
 #define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
@@ -622,19 +628,28 @@ parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
 		actx_error(actx, "no content type was provided");
 		goto cleanup;
 	}
-	else if (strcasecmp(content_type, "application/json") != 0)
+
+	/*
+	 * We only check the media-type and not the parameters, so we need to
+	 * perform a length-limited comparison and not compare the whole string.
+	 */
+	if (pg_strncasecmp(content_type, "application/json", strlen("application/json")) != 0)
 	{
-		actx_error(actx, "unexpected content type \"%s\"", content_type);
-		goto cleanup;
+		actx_error(actx, "unexpected content type: \"%s\"", content_type);
+		return false;
 	}
 
 	if (strlen(resp->data) != resp->len)
 	{
 		actx_error(actx, "response contains embedded NULLs");
-		goto cleanup;
+		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	ctx.errbuf = &actx->errbuf;
 	ctx.fields = fields;
@@ -787,7 +802,11 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		authz->interval = parse_interval(authz->interval_str);
 	else
 	{
-		/* TODO: handle default interval of 5 seconds */
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
 	}
 
 	return true;
@@ -838,7 +857,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 }
 
 /*
- * cURL Multi Setup/Callbacks
+ * libcurl Multi Setup/Callbacks
  */
 
 /*
@@ -894,7 +913,7 @@ setup_multiplexer(struct async_ctx *actx)
 
 /*
  * Adds and removes sockets from the multiplexer set, as directed by the
- * cURL multi handle.
+ * libcurl multi handle.
  */
 static int
 register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
@@ -925,7 +944,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 			break;
 
 		default:
-			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
 			return -1;
 	}
 
@@ -997,7 +1016,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 			break;
 
 		default:
-			actx_error(actx, "unknown cURL socket operation (%d)", what);
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
 			return -1;
 	}
 
@@ -1018,7 +1037,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 		/*
 		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
 		 * whether successful or not. Failed entries contain a non-zero errno
-		 * in the `data` field.
+		 * in the data field.
 		 */
 		Assert(ev_out[i].flags & EV_ERROR);
 
@@ -1043,9 +1062,8 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 
 /*
  * Adds or removes timeouts from the multiplexer set, as directed by the
- * cURL multi handle. Rather than continually adding and removing the timer,
- * we keep it in the set at all times and just disarm it when it's not
- * needed.
+ * libcurl multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not needed.
  */
 static int
 register_timer(CURLM *curlm, long timeout, void *ctx)
@@ -1061,9 +1079,9 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 	else if (timeout == 0)
 	{
 		/*
-		 * A zero timeout means cURL wants us to call back immediately. That's
-		 * not technically an option for timerfd, but we can make the timeout
-		 * ridiculously short.
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
 		 *
 		 * TODO: maybe just signal drive_request() to immediately call back in
 		 * this case?
@@ -1098,8 +1116,21 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 	return 0;
 }
 
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	struct async_ctx *actx = (struct async_ctx *) clientp;
+
+	/* For now we only store TEXT debug information; extending this is a TODO */
+	if (type == CURLINFO_TEXT)
+		appendBinaryPQExpBuffer(&actx->curl_err, data, size);
+
+	return 0;
+}
+
 /*
- * Initializes the two cURL handles in the async_ctx. The multi handle,
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
  * actx->curlm, is what drives the asynchronous engine and tells us what to do
  * next. The easy handle, actx->curl, encapsulates the state for a single
  * request/response. It's added to the multi handle as needed, during
@@ -1108,17 +1139,17 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 static bool
 setup_curl_handles(struct async_ctx *actx)
 {
-	curl_version_info_data	*curl_info;
+	curl_version_info_data *curl_info;
 
 	/*
 	 * Create our multi handle. This encapsulates the entire conversation with
-	 * cURL for this connection.
+	 * libcurl for this connection.
 	 */
 	actx->curlm = curl_multi_init();
 	if (!actx->curlm)
 	{
 		/* We don't get a lot of feedback on the failure reason. */
-		actx_error(actx, "failed to create cURL multi handle");
+		actx_error(actx, "failed to create libcurl multi handle");
 		return false;
 	}
 
@@ -1143,7 +1174,7 @@ setup_curl_handles(struct async_ctx *actx)
 	actx->curl = curl_easy_init();
 	if (!actx->curl)
 	{
-		actx_error(actx, "failed to create cURL handle");
+		actx_error(actx, "failed to create libcurl handle");
 		return false;
 	}
 
@@ -1160,9 +1191,14 @@ setup_curl_handles(struct async_ctx *actx)
 		/* No alternative resolver, TODO: warn about timeouts */
 	}
 
-	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	/*
+	 * Set a callback for retrieving error information from libcurl. The
+	 * callback only takes effect when CURLOPT_VERBOSE has been set, so make
+	 * sure that option remains enabled.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_DEBUGDATA, actx, return false);
+	CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
 	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
-	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
 
 	/*
 	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
@@ -1175,7 +1211,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1186,8 +1227,10 @@ setup_curl_handles(struct async_ctx *actx)
  */
 
 /*
- * Response callback from cURL; appends the response body into actx->work_data.
- * See start_request().
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
  */
 static size_t
 append_data(char *buf, size_t size, size_t nmemb, void *userdata)
@@ -1195,9 +1238,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * If we ran out of memory while accepting the data, signal an error to
+	 * abort the transfer.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1214,7 +1267,6 @@ static bool
 start_request(struct async_ctx *actx)
 {
 	CURLMcode	err;
-	int			running;
 
 	resetPQExpBuffer(&actx->work_data);
 	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
@@ -1228,7 +1280,7 @@ start_request(struct async_ctx *actx)
 		return false;
 	}
 
-	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
 	if (err)
 	{
 		actx_error(actx, "asynchronous HTTP request failed: %s",
@@ -1237,19 +1289,11 @@ start_request(struct async_ctx *actx)
 	}
 
 	/*
-	 * Sanity check.
-	 *
-	 * TODO: even though this is nominally an asynchronous process, there are
-	 * apparently operations that can synchronously fail by this point, such
-	 * as connections to closed local ports. Maybe we need to let this case
-	 * fall through to drive_request instead, or else perform a
-	 * curl_multi_info_read immediately.
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point like connections
+	 * to closed local ports. Fall through and leave the sanity check for the
+	 * next state consuming actx.
 	 */
-	if (running != 1)
-	{
-		actx_error(actx, "failed to queue HTTP request");
-		return false;
-	}
 
 	return true;
 }
@@ -1262,12 +1306,18 @@ static PostgresPollingStatusType
 drive_request(struct async_ctx *actx)
 {
 	CURLMcode	err;
-	int			running;
 	CURLMsg    *msg;
 	int			msgs_left;
 	bool		done;
 
-	err = curl_multi_socket_all(actx->curlm, &running);
+	/* Sanity check the previous operation */
+	if (actx->running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return PGRES_POLLING_FAILED;
+	}
+
+	err = curl_multi_socket_all(actx->curlm, &actx->running);
 	if (err)
 	{
 		actx_error(actx, "asynchronous HTTP request failed: %s",
@@ -1275,7 +1325,7 @@ drive_request(struct async_ctx *actx)
 		return PGRES_POLLING_FAILED;
 	}
 
-	if (running)
+	if (actx->running)
 	{
 		/* We'll come back again. */
 		return PGRES_POLLING_READING;
@@ -1287,7 +1337,7 @@ drive_request(struct async_ctx *actx)
 		if (msg->msg != CURLMSG_DONE)
 		{
 			/*
-			 * Future cURL versions may define new message types; we don't
+			 * Future libcurl versions may define new message types; we don't
 			 * know how to handle them, so we'll ignore them.
 			 */
 			continue;
@@ -1304,7 +1354,7 @@ drive_request(struct async_ctx *actx)
 		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
 		if (err)
 		{
-			actx_error(actx, "cURL easy handle removal failed: %s",
+			actx_error(actx, "libcurl easy handle removal failed: %s",
 					   curl_multi_strerror(err));
 			return PGRES_POLLING_FAILED;
 		}
@@ -1489,7 +1539,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1631,37 +1686,40 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * return which would violate the specification. For now we stick to the
+	 * specification but we might have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
+
 /*
- * The top-level, nonblocking entry point for the cURL implementation. This will
- * be called several times to pump the async engine.
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
  *
  * The architecture is based on PQconnectPoll(). The first half drives the
  * connection state forward as necessary, returning if we're not ready to
@@ -1682,7 +1740,7 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 	struct token tok = {0};
 
 	/*
-	 * XXX This is not safe. cURL has stringent requirements for the thread
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
 	 * context in which you call curl_global_init(), because it's going to try
 	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
 	 * probably need to consider both the TLS backend libcurl is compiled
@@ -1691,16 +1749,16 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 	 * Recent versions of libcurl have improved the thread-safety situation,
 	 * but you apparently can't check at compile time whether the
 	 * implementation is thread-safe, and there's a chicken-and-egg problem
-	 * where you can't check the thread safety until you've initialized cURL,
-	 * which you can't do before you've made sure it's thread-safe...
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
 	 *
 	 * We know we've already initialized Winsock by this point, so we should
-	 * be able to safely skip that bit. But we have to tell cURL to initialize
-	 * everything else, because other pieces of our client executable may
-	 * already be using cURL for their own purposes. If we initialize libcurl
-	 * first, with only a subset of its features, we could break those other
-	 * clients nondeterministically, and that would probably be a nightmare to
-	 * debug.
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
 	 */
 	curl_global_init(CURL_GLOBAL_ALL
 					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
@@ -1729,6 +1787,7 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 
 		initPQExpBuffer(&actx->work_data);
 		initPQExpBuffer(&actx->errbuf);
+		initPQExpBuffer(&actx->curl_err);
 
 		if (!setup_multiplexer(actx))
 			goto error_return;
@@ -1873,16 +1932,20 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				 * errors; anything else and we bail.
 				 */
 				err = &tok.err;
-				if (!err->error || (strcmp(err->error, "authorization_pending")
-									&& strcmp(err->error, "slow_down")))
+				if (!err->error)
+				{
+					actx_error(actx, "unknown error");
+					goto error_return;
+				}
+
+				if (strcmp(err->error, "authorization_pending") != 0 &&
+					strcmp(err->error, "slow_down") != 0)
 				{
-					/* TODO handle !err->error */
 					if (err->error_description)
 						appendPQExpBuffer(&actx->errbuf, "%s ",
 										  err->error_description);
 
 					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
-
 					goto error_return;
 				}
 
@@ -1892,7 +1955,14 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				 */
 				if (strcmp(err->error, "slow_down") == 0)
 				{
-					actx->authz.interval += 5;	/* TODO check for overflow? */
+					int			prev_interval = actx->authz.interval;
+
+					actx->authz.interval += 5;
+					if (actx->authz.interval < prev_interval)
+					{
+						actx_error(actx, "slow_down interval overflow");
+						goto error_return;
+					}
 				}
 
 				/*
@@ -1959,21 +2029,8 @@ error_return:
 	else
 		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
 
-	if (actx->curl_err[0])
-	{
-		size_t		len;
-
-		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
-
-		/* Sometimes libcurl adds a newline to the error buffer. :( */
-		len = conn->errorMessage.len;
-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
-		{
-			conn->errorMessage.data[len - 2] = ')';
-			conn->errorMessage.data[len - 1] = '\0';
-			conn->errorMessage.len--;
-		}
-	}
+	if (actx->curl_err.len > 0)
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err.data);
 
 	appendPQExpBufferStr(&conn->errorMessage, "\n");
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 66ee8ff076..61de9ac451 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -3,7 +3,7 @@
  * fe-auth-oauth.c
  *	   The front-end (client) implementation of OAuth/OIDC authentication.
  *
- * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
@@ -247,7 +247,12 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+		return false;
+	}
 
 	initPQExpBuffer(&ctx.errbuf);
 	sem.semstate = &ctx;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 8d4ea45aa8..6e5e946364 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -4,7 +4,7 @@
  *
  *	  Definitions for OAuth authentication implementations
  *
- * Portions Copyright (c) 2023, PostgreSQL Global Development Group
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
  *
  * src/interfaces/libpq/fe-auth-oauth.h
  *
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index 09a4bf61d2..7b4dc9c494 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -66,7 +66,7 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
 		elog(ERROR, "oauth_validator: private state cookie changed to %p",
-				state->private_data);
+			 state->private_data);
 
 	res = palloc(sizeof(ValidatorModuleResult));
 
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index d96733f531..9e18186f23 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -4,10 +4,10 @@ package PostgreSQL::Test::OAuthServer;
 
 use warnings;
 use strict;
-use threads;
 use Scalar::Util;
 use Socket;
 use IO::Select;
+use Test::More;
 
 local *server_socket;
 
@@ -34,9 +34,9 @@ sub run
 	my $port;
 
 	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
-		// die "failed to start OAuth server: $!";
+		or die "failed to start OAuth server: $!";
 
-	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	read($read_fh, $port, 7) or die "failed to read port number: $!";
 	chomp $port;
 	die "server did not advertise a valid port"
 		unless Scalar::Util::looks_like_number($port);
@@ -45,14 +45,14 @@ sub run
 	$self->{'port'} = $port;
 	$self->{'child'} = $read_fh;
 
-	print("# OAuth provider (PID $pid) is listening on port $port\n");
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
 }
 
 sub stop
 {
 	my $self = shift;
 
-	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
 
 	kill(15, $self->{'pid'});
 	$self->{'pid'} = undef;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b02ee48898..7e9b1f564a 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3057,6 +3057,7 @@ VacuumStmt
 ValidIOData
 ValidateIndexState
 ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
-- 
2.34.1

v23-0001-Revert-ECPG-s-use-of-pnstrdup.patch (application/octet-stream)
From aa553b7700376d5eba055b21fa15dd355e1a3939 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Jul 2024 12:26:04 -0700
Subject: [PATCH v23 1/6] Revert ECPG's use of pnstrdup()

Commit 0b9466fce added a dependency on fe_memutils' pnstrdup() inside
informix.c. This 1) makes it hard to remove fe_memutils from
libpgcommon_shlib, and 2) adds an exit() path where it perhaps should
not be. (See the !str check after the call to pnstrdup, which looks like
it should not be possible.)

Revert that part of the patch for now, pending further discussion on the
thread.
---
 src/interfaces/ecpg/compatlib/informix.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/src/interfaces/ecpg/compatlib/informix.c b/src/interfaces/ecpg/compatlib/informix.c
index 8ea89e640a..65a0b2e46c 100644
--- a/src/interfaces/ecpg/compatlib/informix.c
+++ b/src/interfaces/ecpg/compatlib/informix.c
@@ -175,6 +175,25 @@ deccopy(decimal *src, decimal *target)
 	memcpy(target, src, sizeof(decimal));
 }
 
+static char *
+ecpg_strndup(const char *str, size_t len)
+{
+	size_t		real_len = strlen(str);
+	int			use_len = (int) ((real_len > len) ? len : real_len);
+
+	char	   *new = malloc(use_len + 1);
+
+	if (new)
+	{
+		memcpy(new, str, use_len);
+		new[use_len] = '\0';
+	}
+	else
+		errno = ENOMEM;
+
+	return new;
+}
+
 int
 deccvasc(const char *cp, int len, decimal *np)
 {
@@ -186,8 +205,8 @@ deccvasc(const char *cp, int len, decimal *np)
 	if (risnull(CSTRINGTYPE, cp))
 		return 0;
 
-	str = pnstrdup(cp, len);	/* decimal_in always converts the complete
-								 * string */
+	str = ecpg_strndup(cp, len);	/* decimal_in always converts the complete
+									 * string */
 	if (!str)
 		ret = ECPG_INFORMIX_NUM_UNDERFLOW;
 	else
-- 
2.34.1

#104Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#102)
8 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Daniel,

On Mon, Apr 1, 2024 at 3:07 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Of the Cirrus machines, it looks like only FreeBSD has a new enough
libcurl for that. Ubuntu won't until 24.04, Debian Bookworm doesn't
have it unless you're running backports, RHEL 9 is still on 7.x... I
think requiring libcurl 8 is effectively saying no one will be able to
use this for a long time. Is there an alternative?

Since the exit() checks appear to be happy now that fe_memutils is
out, I've lowered the requirement to the version of libcurl that seems
to be shipped in RHEL 8 (7.61.0). This also happens to be when TLS 1.3
ciphersuite control was added to Curl, which seems like something we
may want in the very near future, so I'm taking that as a good sign
for what is otherwise a very arbitrary cutoff point. Counterproposals
welcome :D

Good catch. application/json no longer defines charsets officially
[1], so I think we should be able to just ignore them. The new
strncasecmp needs to handle a spurious prefix, too; I have that on my
TODO list.

I've expanded this handling in v24, attached.

This new way doesn't do the same thing. Here's a sample error:

connection to server at "127.0.0.1", port 56619 failed: failed to
fetch OpenID discovery document: Weird server reply ( Trying
127.0.0.1:36647...
Connected to localhost (127.0.0.1) port 36647 (#0)
Mark bundle as not supporting multiuse
HTTP 1.0, assume close after body
Invalid Content-Length: value
Closing connection 0
)

IMO that's too much noise. Prior to the change, the same error would have been

connection to server at "127.0.0.1", port 56619 failed: failed to
fetch OpenID discovery document: Weird server reply (Invalid
Content-Length: value)

I have reverted this change for now, but I'm still hoping there's an
alternative that can help us clean up?

`running` can be set to zero on success, too. I'm having trouble
forcing that code path in a test so far, but we're going to have to do
something special in that case.

For whatever reason, the magic timing for this is popping up more and
more often on Cirrus, leading to really annoying test failures. So I
may have to abandon the search for a perfect test case and just fix
it.

I did drop
the Python pytest patch since I feel that it's unlikely to go in from this
thread (adding pytest seems worthy of its own thread and discussion), and the
weight of it makes this seem scarier than it is.

Until its coverage gets ported over, can we keep it as a `DO NOT
MERGE` patch? Otherwise there's not much to run in Cirrus.

I have added this back (marked loudly as don't-merge) so that we keep
the test coverage for now. The Perl suite (plus Python server) has
been tricked out a lot more in v24, so it shouldn't be too bad to get
things ported.

Next I intend to work on writing documentation for this.

Awesome, thank you! I will start adding coverage to the new code paths.

Peter E asked for some documentation stubs to ease review, which I've
added. Hopefully that doesn't step on your toes any.

A large portion of your "Review comments" patch has been pulled
backwards into the previous commits; the remaining pieces are things
I'm still peering at and/or writing tests for. I also owe this thread
an updated roadmap and summary, to make it a little less daunting for
new reviewers. Soon (tm).

Thanks!
--Jacob

Attachments:

since-v23.diff.txt (text/plain)
1:  aa553b7700 = 1:  9fc1df7509 Revert ECPG's use of pnstrdup()
2:  d6cae9157e = 2:  fdd89bdee0 Remove fe_memutils from libpgcommon_shlib
3:  5543539169 = 3:  ae3ae1cfaa common/jsonapi: support libpq as a client
4:  1639f7eb9a ! 4:  92b257643e libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
         - fix intermittent failure in the cleanup callback tests (race
           condition?)
         - support require_auth
    +    - fill in documentation stubs
         - ...and more.
     
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
     
    + ## config/programs.m4 ##
    +@@ config/programs.m4: if test "$pgac_cv_ldap_safe" != yes; then
    + *** also uses LDAP will crash on exit.])
    + fi])
    + 
    ++# PGAC_CHECK_LIBCURL
    ++# ------------------
    ++# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
    ++# explicitly set TLS 1.3 ciphersuites).
    + 
    ++AC_DEFUN([PGAC_CHECK_LIBCURL],
    ++[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
    ++[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
    ++[#include <curl/curlver.h>
    ++#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
    ++choke me
    ++#endif], [])],
    ++[pgac_cv_check_libcurl=yes],
    ++[pgac_cv_check_libcurl=no])])
    ++
    ++if test "$pgac_cv_check_libcurl" != yes; then
    ++    AC_MSG_ERROR([
    ++*** The installed version of libcurl is too old to use with PostgreSQL.
    ++*** libcurl version 7.61.0 or later is required.])
    ++fi])
    + 
    + # PGAC_CHECK_READLINE
    + # -------------------
    +
      ## configure ##
     @@ configure: with_uuid
      with_readline
    @@ configure: fi
     +  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
     +fi
     +
    ++  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
    ++$as_echo_n "checking for compatible libcurl... " >&6; }
    ++if ${pgac_cv_check_libcurl+:} false; then :
    ++  $as_echo_n "(cached) " >&6
    ++else
    ++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
    ++/* end confdefs.h.  */
    ++#include <curl/curlver.h>
    ++#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
    ++choke me
    ++#endif
    ++int
    ++main ()
    ++{
    ++
    ++  ;
    ++  return 0;
    ++}
    ++_ACEOF
    ++if ac_fn_c_try_compile "$LINENO"; then :
    ++  pgac_cv_check_libcurl=yes
    ++else
    ++  pgac_cv_check_libcurl=no
    ++fi
    ++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
    ++fi
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
    ++$as_echo "$pgac_cv_check_libcurl" >&6; }
    ++
    ++if test "$pgac_cv_check_libcurl" != yes; then
    ++    as_fn_error $? "
    ++*** The installed version of libcurl is too old to use with PostgreSQL.
    ++*** libcurl version 7.61.0 or later is required." "$LINENO" 5
    ++fi
     +fi
     +
      # for contrib/sepgsql
    @@ configure.ac: fi
      
     +if test "$with_oauth" = curl ; then
     +  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
    ++  PGAC_CHECK_LIBCURL
     +fi
     +
      # for contrib/sepgsql
    @@ configure.ac: elif test "$with_uuid" = ossp ; then
         AC_CHECK_HEADERS(crtdefs.h)
      fi
     
    + ## doc/src/sgml/libpq.sgml ##
    +@@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
    +        </para>
    +       </listitem>
    +      </varlistentry>
    ++
    ++     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
    ++      <term><literal>oauth_client_id</literal></term>
    ++      <listitem>
    ++       <para>
    ++        TODO
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
    ++      <term><literal>oauth_client_secret</literal></term>
    ++      <listitem>
    ++       <para>
    ++        TODO
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
    ++      <term><literal>oauth_issuer</literal></term>
    ++      <listitem>
    ++       <para>
    ++        TODO
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
    ++      <term><literal>oauth_scope</literal></term>
    ++      <listitem>
    ++       <para>
    ++        TODO
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    +     </variablelist>
    +    </para>
    +   </sect2>
    +@@ doc/src/sgml/libpq.sgml: void PQinitSSL(int do_ssl);
    + 
    +  </sect1>
    + 
    ++ <sect1 id="libpq-oauth">
    ++  <title>OAuth Support</title>
    ++
    ++  <para>
    ++   TODO
    ++  </para>
    ++
    ++  <para>
    ++   <variablelist>
    ++    <varlistentry id="libpq-PQsetAuthDataHook">
    ++     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
    ++
    ++     <listitem>
    ++      <para>
    ++       TODO
    ++<synopsis>
    ++void PQsetAuthDataHook(PQauthDataHook_type hook);
    ++</synopsis>
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++
    ++    <varlistentry id="libpq-PQgetAuthDataHook">
    ++     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
    ++
    ++     <listitem>
    ++      <para>
    ++       TODO
    ++<synopsis>
    ++PQauthDataHook_type PQgetAuthDataHook(void);
    ++</synopsis>
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++   </variablelist>
    ++  </para>
    ++
    ++ </sect1>
    ++
    + 
    +  <sect1 id="libpq-threading">
    +   <title>Behavior in Threaded Programs</title>
    +
      ## meson.build ##
     @@ meson.build: endif
      
    @@ meson.build: endif
     +endif
     +
     +if oauthopt in ['auto', 'curl']
    -+  oauth = dependency('libcurl', required: (oauthopt == 'curl'))
    ++  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
    ++  # to explicitly set TLS 1.3 ciphersuites).
    ++  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
     +
     +  if oauth.found()
     +    oauth_library = 'curl'
    @@ src/include/common/oauth-common.h (new)
     + * oauth-common.h
     + *		Declarations for helper functions used for OAuth/OIDC authentication
     + *
    -+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
     + * Portions Copyright (c) 1994, Regents of the University of California
     + *
     + * src/include/common/oauth-common.h
    @@ src/interfaces/libpq/Makefile: endif
      else
      SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
      endif
    +@@ src/interfaces/libpq/Makefile: backend_src = $(top_srcdir)/src/backend
    + # which seems to insert references to that even in pure C code. Excluding
    + # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
    + # which use this function for instrumentation of function exit.
    ++# libcurl registers an exit handler in the memory debugging code when running
    ++# with LeakSanitizer.
    + # Skip the test when profiling, as gcc may insert exit() calls for that.
    + # Also skip the test on platforms where libpq infrastructure may be provided
    + # by statically-linked libraries, as we can't expect them to honor this
    +@@ src/interfaces/libpq/Makefile: backend_src = $(top_srcdir)/src/backend
    + libpq-refs-stamp: $(shlib)
    + ifneq ($(enable_coverage), yes)
    + ifeq (,$(filter solaris,$(PORTNAME)))
    +-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
    ++	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
    + 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
    + 	fi
    + endif
     
      ## src/interfaces/libpq/exports.txt ##
     @@ src/interfaces/libpq/exports.txt: PQcancelFinish            202
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * fe-auth-oauth-curl.c
     + *	   The libcurl implementation of OAuth/OIDC authentication.
     + *
    -+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
     + * Portions Copyright (c) 1994, Regents of the University of California
     + *
     + * IDENTIFICATION
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +} OAuthStep;
     +
     +/*
    -+ * The async_ctx holds onto state that needs to persist across multiple calls to
    -+ * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
    ++ * The async_ctx holds onto state that needs to persist across multiple calls
    ++ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
    ++ * way.
     + */
     +struct async_ctx
     +{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	int			timerfd;		/* a timerfd for signaling async timeouts */
     +#endif
     +	pgsocket	mux;			/* the multiplexer socket containing all
    -+								 * descriptors tracked by cURL, plus the
    ++								 * descriptors tracked by libcurl, plus the
     +								 * timerfd */
    -+	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
    ++	CURLM	   *curlm;			/* top-level multi handle for libcurl
    ++								 * operations */
     +	CURL	   *curl;			/* the (single) easy handle for serial
     +								 * requests */
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 *				actx_error[_str] to manipulate this. This must be filled
     +	 *				with something useful on an error.
     +	 *
    -+	 * - curl_err:	an optional static error buffer used by cURL to put
    ++	 * - curl_err:	an optional static error buffer used by libcurl to put
     +	 *				detailed information about failures. Unfortunately
     +	 *				untranslatable.
     +	 *
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +		if (err)
     +			libpq_append_conn_error(conn,
    -+									"cURL easy handle removal failed: %s",
    ++									"libcurl easy handle removal failed: %s",
     +									curl_multi_strerror(err));
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +		if (err)
     +			libpq_append_conn_error(conn,
    -+									"cURL multi handle cleanup failed: %s",
    ++									"libcurl multi handle cleanup failed: %s",
     +									curl_multi_strerror(err));
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	appendPQExpBufferStr(&(ACTX)->errbuf, S)
     +
     +/*
    -+ * Macros for getting and setting state for the connection's two cURL handles,
    -+ * so you don't have to write out the error handling every time.
    ++ * Macros for getting and setting state for the connection's two libcurl
    ++ * handles, so you don't have to write out the error handling every time.
     + */
     +
     +#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * Checks the Content-Type header against the expected type. Parameters are
    ++ * allowed but ignored.
    ++ */
    ++static bool
    ++check_content_type(struct async_ctx *actx, const char *type)
    ++{
    ++	const size_t type_len = strlen(type);
    ++	char	   *content_type;
    ++
    ++	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
    ++
    ++	if (!content_type)
    ++	{
    ++		actx_error(actx, "no content type was provided");
    ++		return false;
    ++	}
    ++
    ++	/*
    ++	 * We need a length-limited comparison rather than comparing the whole
    ++	 * string, since media type parameters may follow the expected type.
    ++	 */
    ++	if (pg_strncasecmp(content_type, type, type_len) != 0)
    ++		goto fail;
    ++
    ++	/* On an exact match, we're done. */
    ++	Assert(strlen(content_type) >= type_len);
    ++	if (content_type[type_len] == '\0')
    ++		return true;
    ++
    ++	/*
    ++	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
    ++	 * acceptable after the prefix we checked. This marks the start of media
    ++	 * type parameters, which we currently have no use for.
    ++	 */
    ++	for (size_t i = type_len; content_type[i]; ++i)
    ++	{
    ++		switch (content_type[i])
    ++		{
    ++			case ';':
    ++				return true;	/* success! */
    ++
    ++			/* HTTP optional whitespace allows only spaces and htabs. */
    ++			case ' ':
    ++			case '\t':
    ++				break;
    ++
    ++			default:
    ++				goto fail;
    ++		}
    ++	}
    ++
    ++fail:
    ++	actx_error(actx, "unexpected content type: \"%s\"", content_type);
    ++	return false;
    ++}
    ++
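For reference, the prefix-plus-parameters rule that check_content_type() implements can be sketched standalone in Python (an illustration only, not part of the patch; it mirrors the C logic, including the case-insensitive prefix match and the space/htab-then-semicolon rule):

```python
def content_type_matches(content_type, expected):
    """Accept `expected` exactly, or followed by HTTP optional whitespace
    (spaces and htabs only) and a ';' that starts media type parameters."""
    if content_type is None:
        return False
    if not content_type.lower().startswith(expected.lower()):
        return False
    rest = content_type[len(expected):]
    if rest == "":
        return True  # exact match
    for ch in rest:
        if ch == ";":
            return True  # parameters follow; we ignore them
        if ch not in " \t":
            return False  # e.g. "application/jsonx"
    return False  # trailing whitespace with no parameters
```

This accepts "application/json;charset=utf-8" and rejects "application/jsonx", matching the cases the test suite exercises.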
    ++/*
     + * A helper function for general JSON parsing. fields is the array of field
     + * definitions with their backing pointers. The response will be parsed from
     + * actx->curl and actx->work_data (as set up by start_request()), and any
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
     +{
     +	PQExpBuffer resp = &actx->work_data;
    -+	char	   *content_type;
     +	JsonLexContext lex = {0};
     +	JsonSemAction sem = {0};
     +	JsonParseErrorType err;
     +	struct oauth_parse ctx = {0};
     +	bool		success = false;
     +
    -+	/* Make sure the server thinks it's given us JSON. */
    -+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
    -+
    -+	if (!content_type)
    -+	{
    -+		actx_error(actx, "no content type was provided");
    -+		goto cleanup;
    -+	}
    -+	else if (strcasecmp(content_type, "application/json") != 0)
    -+	{
    -+		actx_error(actx, "unexpected content type \"%s\"", content_type);
    -+		goto cleanup;
    -+	}
    ++	if (!check_content_type(actx, "application/json"))
    ++		return false;
     +
     +	if (strlen(resp->data) != resp->len)
     +	{
     +		actx_error(actx, "response contains embedded NULLs");
    -+		goto cleanup;
    ++		return false;
     +	}
     +
     +	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		authz->interval = parse_interval(authz->interval_str);
     +	else
     +	{
    -+		/* TODO: handle default interval of 5 seconds */
    ++		/*
    ++		 * RFC 8628 specifies 5 seconds as the default polling interval if
    ++		 * the server doesn't provide one.
    ++		 */
    ++		authz->interval = 5;
     +	}
     +
     +	return true;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    -+ * cURL Multi Setup/Callbacks
    ++ * libcurl Multi Setup/Callbacks
     + */
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +/*
     + * Adds and removes sockets from the multiplexer set, as directed by the
    -+ * cURL multi handle.
    ++ * libcurl multi handle.
     + */
     +static int
     +register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			break;
     +
     +		default:
    -+			actx_error(actx, "unknown cURL socket operation (%d)", what);
    ++			actx_error(actx, "unknown libcurl socket operation: %d", what);
     +			return -1;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			break;
     +
     +		default:
    -+			actx_error(actx, "unknown cURL socket operation (%d)", what);
    ++			actx_error(actx, "unknown libcurl socket operation: %d", what);
     +			return -1;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		/*
     +		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
     +		 * whether successful or not. Failed entries contain a non-zero errno
    -+		 * in the `data` field.
    ++		 * in the data field.
     +		 */
     +		Assert(ev_out[i].flags & EV_ERROR);
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +/*
     + * Adds or removes timeouts from the multiplexer set, as directed by the
    -+ * cURL multi handle. Rather than continually adding and removing the timer,
    -+ * we keep it in the set at all times and just disarm it when it's not
    -+ * needed.
    ++ * libcurl multi handle. Rather than continually adding and removing the timer,
    ++ * we keep it in the set at all times and just disarm it when it's not needed.
     + */
     +static int
     +register_timer(CURLM *curlm, long timeout, void *ctx)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	else if (timeout == 0)
     +	{
     +		/*
    -+		 * A zero timeout means cURL wants us to call back immediately. That's
    -+		 * not technically an option for timerfd, but we can make the timeout
    -+		 * ridiculously short.
    ++		 * A zero timeout means libcurl wants us to call back immediately.
    ++		 * That's not technically an option for timerfd, but we can make the
    ++		 * timeout ridiculously short.
     +		 *
     +		 * TODO: maybe just signal drive_request() to immediately call back in
     +		 * this case?
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    -+ * Initializes the two cURL handles in the async_ctx. The multi handle,
    ++ * Initializes the two libcurl handles in the async_ctx. The multi handle,
     + * actx->curlm, is what drives the asynchronous engine and tells us what to do
     + * next. The easy handle, actx->curl, encapsulates the state for a single
     + * request/response. It's added to the multi handle as needed, during
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +static bool
     +setup_curl_handles(struct async_ctx *actx)
     +{
    -+	curl_version_info_data	*curl_info;
    ++	curl_version_info_data *curl_info;
     +
     +	/*
     +	 * Create our multi handle. This encapsulates the entire conversation with
    -+	 * cURL for this connection.
    ++	 * libcurl for this connection.
     +	 */
     +	actx->curlm = curl_multi_init();
     +	if (!actx->curlm)
     +	{
     +		/* We don't get a lot of feedback on the failure reason. */
    -+		actx_error(actx, "failed to create cURL multi handle");
    ++		actx_error(actx, "failed to create libcurl multi handle");
     +		return false;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	actx->curl = curl_easy_init();
     +	if (!actx->curl)
     +	{
    -+		actx_error(actx, "failed to create cURL handle");
    ++		actx_error(actx, "failed to create libcurl handle");
     +		return false;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + */
     +
     +/*
    -+ * Response callback from cURL; appends the response body into actx->work_data.
    -+ * See start_request().
    ++ * Response callback from libcurl which appends the response body into
    ++ * actx->work_data (see start_request()). The maximum size of the data is
    ++ * defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
    ++ * changed by recompiling libcurl).
     + */
     +static size_t
     +append_data(char *buf, size_t size, size_t nmemb, void *userdata)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		if (msg->msg != CURLMSG_DONE)
     +		{
     +			/*
    -+			 * Future cURL versions may define new message types; we don't
    ++			 * Future libcurl versions may define new message types; we don't
     +			 * know how to handle them, so we'll ignore them.
     +			 */
     +			continue;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
     +		if (err)
     +		{
    -+			actx_error(actx, "cURL easy handle removal failed: %s",
    ++			actx_error(actx, "libcurl easy handle removal failed: %s",
     +					   curl_multi_strerror(err));
     +			return PGRES_POLLING_FAILED;
     +		}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +}
     +
     +/*
    -+ * The top-level, nonblocking entry point for the cURL implementation. This will
    -+ * be called several times to pump the async engine.
    ++ * The top-level, nonblocking entry point for the libcurl implementation. This
    ++ * will be called several times to pump the async engine.
     + *
     + * The architecture is based on PQconnectPoll(). The first half drives the
     + * connection state forward as necessary, returning if we're not ready to
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	struct token tok = {0};
     +
     +	/*
    -+	 * XXX This is not safe. cURL has stringent requirements for the thread
    ++	 * XXX This is not safe. libcurl has stringent requirements for the thread
     +	 * context in which you call curl_global_init(), because it's going to try
     +	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
     +	 * probably need to consider both the TLS backend libcurl is compiled
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * Recent versions of libcurl have improved the thread-safety situation,
     +	 * but you apparently can't check at compile time whether the
     +	 * implementation is thread-safe, and there's a chicken-and-egg problem
    -+	 * where you can't check the thread safety until you've initialized cURL,
    -+	 * which you can't do before you've made sure it's thread-safe...
    ++	 * where you can't check the thread safety until you've initialized
    ++	 * libcurl, which you can't do before you've made sure it's thread-safe...
     +	 *
     +	 * We know we've already initialized Winsock by this point, so we should
    -+	 * be able to safely skip that bit. But we have to tell cURL to initialize
    -+	 * everything else, because other pieces of our client executable may
    -+	 * already be using cURL for their own purposes. If we initialize libcurl
    -+	 * first, with only a subset of its features, we could break those other
    -+	 * clients nondeterministically, and that would probably be a nightmare to
    -+	 * debug.
    ++	 * be able to safely skip that bit. But we have to tell libcurl to
    ++	 * initialize everything else, because other pieces of our client
    ++	 * executable may already be using libcurl for their own purposes. If we
    ++	 * initialize libcurl first, with only a subset of its features, we could
    ++	 * break those other clients nondeterministically, and that would probably
    ++	 * be a nightmare to debug.
     +	 */
     +	curl_global_init(CURL_GLOBAL_ALL
     +					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				 */
     +				if (strcmp(err->error, "slow_down") == 0)
     +				{
    -+					actx->authz.interval += 5;	/* TODO check for overflow? */
    ++					if (actx->authz.interval > INT_MAX - 5)
    ++					{
    ++						actx_error(actx, "slow_down interval overflow");
    ++						goto error_return;
    ++					}
    ++
    ++					actx->authz.interval += 5;
     +				}
     +
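The slow_down handling follows RFC 8628 section 3.5: each slow_down error adds a mandatory 5 seconds to the polling interval. Since signed overflow is undefined behavior in C, the portable guard is a pre-check against INT_MAX rather than testing for wraparound afterwards. A minimal sketch of the bump, with INT32_MAX standing in for a typical C int's limit:

```python
INT32_MAX = 2**31 - 1  # stands in for INT_MAX on a typical platform

def bump_interval(interval):
    """Apply the mandatory 5-second slow_down bump (RFC 8628, sec. 3.5),
    refusing to let a hostile server overflow the interval."""
    if interval > INT32_MAX - 5:
        raise OverflowError("slow_down interval overflow")
    return interval + 5
```

The test suite's "server overflows the device authz interval" case corresponds to the OverflowError path here.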
     +				/*
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     + * fe-auth-oauth.c
     + *	   The front-end (client) implementation of OAuth/OIDC authentication.
     + *
    -+ * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
     + * Portions Copyright (c) 1994, Regents of the University of California
     + *
     + * IDENTIFICATION
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     + *
     + *	  Definitions for OAuth authentication implementations
     + *
    -+ * Portions Copyright (c) 2023, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
     + *
     + * src/interfaces/libpq/fe-auth-oauth.h
     + *
5:  044d8b08e9 ! 5:  16994b449d backend: add OAUTHBEARER SASL mechanism
    @@ Commit message
         - use logdetail during auth failures
         - allow passing the configured issuer to the oauth_validator_command, to
           deal with multi-issuer setups
    +    - fill in documentation stubs
         - ...and more.
     
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    @@ .cirrus.tasks.yml: task:
        ###
        # Test that code can be built with gcc/clang without warnings
     
    + ## doc/src/sgml/client-auth.sgml ##
    +@@ doc/src/sgml/client-auth.sgml: include_dir         <replaceable>directory</replaceable>
    +          </para>
    +         </listitem>
    +        </varlistentry>
    ++
    ++       <varlistentry>
    ++        <term><literal>oauth</literal></term>
    ++        <listitem>
    ++         <para>
    ++          Authorize and optionally authenticate using a third-party OAuth 2.0
    ++          identity provider. See <xref linkend="auth-oauth"/> for details.
    ++         </para>
    ++        </listitem>
    ++       </varlistentry>
    +       </variablelist>
    + 
    +       </para>
    +@@ doc/src/sgml/client-auth.sgml: omicron         bryanh                  guest1
    +       only on OpenBSD).
    +      </para>
    +     </listitem>
    ++    <listitem>
    ++     <para>
    ++      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
    ++      which relies on an external OAuth 2.0 identity provider.
    ++     </para>
    ++    </listitem>
    +    </itemizedlist>
    +   </para>
    + 
    +@@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    +    </note>
    +   </sect1>
    + 
    ++  <sect1 id="auth-oauth">
    ++   <title>OAuth Authorization/Authentication</title>
    ++
    ++   <indexterm zone="auth-oauth">
    ++    <primary>OAuth Authorization/Authentication</primary>
    ++   </indexterm>
    ++
    ++   <para>
    ++    TODO
    ++   </para>
    ++  </sect1>
    ++
    +   <sect1 id="client-authentication-problems">
    +    <title>Authentication Problems</title>
    + 
    +
    + ## doc/src/sgml/filelist.sgml ##
    +@@
    + <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
    + <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
    + <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
    ++<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
    + 
    + <!-- contrib information -->
    + <!ENTITY contrib         SYSTEM "contrib.sgml">
    +
    + ## doc/src/sgml/oauth-validators.sgml (new) ##
    +@@
    ++<!-- doc/src/sgml/oauth-validators.sgml -->
    ++
    ++<chapter id="oauth-validators">
    ++ <title>Implementing OAuth Validator Modules</title>
    ++
    ++ <para>
    ++  TODO
    ++ </para>
    ++</chapter>
    +
    + ## doc/src/sgml/postgres.sgml ##
    +@@ doc/src/sgml/postgres.sgml: break is not needed in a wider output rendering.
    +   &bki;
    +   &planstats;
    +   &backup-manifest;
    ++  &oauth-validators;
    + 
    +  </part>
    + 
    +
      ## src/backend/libpq/Makefile ##
     @@ src/backend/libpq/Makefile: include $(top_builddir)/src/Makefile.global
      # be-fsstubs is here for historical reasons, probably belongs elsewhere
    @@ src/backend/libpq/auth-oauth.c (new)
     +	initStringInfo(&buf);
     +
     +	/*
    -+	 * TODO: note that escaping here should be belt-and-suspenders, since
    -+	 * escapable characters aren't valid in either the issuer URI or the scope
    -+	 * list, but the HBA doesn't enforce that yet.
    ++	 * Escaping the string here is belt-and-suspenders defensive programming
    ++	 * since escapable characters aren't valid in either the issuer URI or the
    ++	 * scope list, but the HBA doesn't enforce that yet.
     +	 */
     +	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
     +
    @@ src/include/libpq/sasl.h: typedef struct pg_be_sasl_mech
      
      /* Common implementation for auth.c */
     
    + ## src/interfaces/libpq/fe-auth-oauth-curl.c ##
    +@@ src/interfaces/libpq/fe-auth-oauth-curl.c: free_token(struct token *tok)
    + /* States for the overall async machine. */
    + typedef enum
    + {
    +-	OAUTH_STEP_INIT,
    ++	OAUTH_STEP_INIT = 0,
    + 	OAUTH_STEP_DISCOVERY,
    + 	OAUTH_STEP_DEVICE_AUTHORIZATION,
    + 	OAUTH_STEP_TOKEN_REQUEST,
    +
      ## src/test/modules/Makefile ##
     @@ src/test/modules/Makefile: SUBDIRS = \
      		  dummy_index_am \
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +use strict;
     +use warnings FATAL => 'all';
     +
    ++use JSON::PP qw(encode_json);
    ++use MIME::Base64 qw(encode_base64);
     +use PostgreSQL::Test::Cluster;
     +use PostgreSQL::Test::Utils;
     +use PostgreSQL::Test::OAuthServer;
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +
     +$node->safe_psql('postgres', 'CREATE USER test;');
     +$node->safe_psql('postgres', 'CREATE USER testalt;');
    ++$node->safe_psql('postgres', 'CREATE USER testparam;');
     +
     +my $webserver = PostgreSQL::Test::OAuthServer->new();
     +$webserver->run();
     +
    ++END
    ++{
    ++	my $exit_code = $?;
    ++
    ++	$webserver->stop() if defined $webserver; # might have been SKIP'd
    ++
    ++	$? = $exit_code;
    ++}
    ++
     +my $port = $webserver->port();
     +my $issuer = "127.0.0.1:$port";
     +
     +unlink($node->data_dir . '/pg_hba.conf');
     +$node->append_conf('pg_hba.conf', qq{
    -+local all test    oauth issuer="$issuer"           scope="openid postgres"
    -+local all testalt oauth issuer="$issuer/alternate" scope="openid postgres alt"
    ++local all test      oauth issuer="$issuer"           scope="openid postgres"
    ++local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
    ++local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
     +});
     +$node->reload;
     +
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$log_start = $node->wait_for_log(qr/reloading configuration files/);
     +
     +my $user = "test";
    -+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
    -+				  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@);
    -+
    -+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+$node->log_check("user $user: validator receives correct parameters", $log_start,
    -+				 log_like => [
    -+					 qr/oauth_validator: token="9243959234", role="$user"/,
    -+					 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    -+				 ]);
    -+$node->log_check("user $user: validator sets authenticated identity", $log_start,
    -+				 log_like => [
    -+					 qr/connection authenticated: identity="test" method=oauth/,
    -+				 ]);
    -+$log_start = $log_end;
    ++if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
    ++					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
    ++{
    ++	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    ++	$node->log_check("user $user: validator receives correct parameters", $log_start,
    ++					 log_like => [
    ++						 qr/oauth_validator: token="9243959234", role="$user"/,
    ++						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    ++					 ]);
    ++	$node->log_check("user $user: validator sets authenticated identity", $log_start,
    ++					 log_like => [
    ++						 qr/connection authenticated: identity="test" method=oauth/,
    ++					 ]);
    ++	$log_start = $log_end;
    ++}
     +
     +# The /alternate issuer uses slightly different parameters.
     +$user = "testalt";
    -+$node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
    -+				  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@);
    -+
    -+$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+$node->log_check("user $user: validator receives correct parameters", $log_start,
    -+				 log_like => [
    -+					 qr/oauth_validator: token="9243959234-alt", role="$user"/,
    -+					 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
    -+				 ]);
    -+$node->log_check("user $user: validator sets authenticated identity", $log_start,
    -+				 log_like => [
    -+					 qr/connection authenticated: identity="testalt" method=oauth/,
    -+				 ]);
    -+$log_start = $log_end;
    -+
    -+$webserver->stop();
    ++if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
    ++					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
    ++{
    ++	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    ++	$node->log_check("user $user: validator receives correct parameters", $log_start,
    ++					 log_like => [
    ++						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
    ++						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
    ++					 ]);
    ++	$node->log_check("user $user: validator sets authenticated identity", $log_start,
    ++					 log_like => [
    ++						 qr/connection authenticated: identity="testalt" method=oauth/,
    ++					 ]);
    ++	$log_start = $log_end;
    ++}
    ++
    ++#
    ++# Further tests rely on support for specific behaviors in oauth_server.py. To
    ++# trigger these behaviors, we ask for the special issuer .../param (which is set
    ++# up in HBA for the testparam user) and encode magic instructions into the
    ++# oauth_client_id.
    ++#
    ++
    ++my $common_connstr = "user=testparam dbname=postgres ";
    ++
    ++sub connstr
    ++{
    ++	my (%params) = @_;
    ++
    ++	my $json = encode_json(\%params);
    ++	my $encoded = encode_base64($json, "");
    ++
    ++	return "$common_connstr oauth_client_id=$encoded";
    ++}
    ++
    ++# Make sure the param system works end-to-end first.
    ++$node->connect_ok(
    ++	connstr(),
    ++	"connect to /param",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++
    ++$node->connect_ok(
    ++	connstr(stage => 'token', retries => 1),
    ++	"token retry",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++$node->connect_ok(
    ++	connstr(stage => 'token', retries => 2),
    ++	"token retry (twice)",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++$node->connect_ok(
    ++	connstr(stage => 'all', retries => 1, interval => 2),
    ++	"token retry (two second interval)",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++$node->connect_ok(
    ++	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
    ++	"token retry (default interval)",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++
    ++$node->connect_ok(
    ++	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
    ++	"content type with charset",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++$node->connect_ok(
    ++	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
    ++	"content type with charset (whitespace)",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
    ++
    ++$node->connect_fails(
    ++	connstr(stage => 'device', content_type => 'text/plain'),
    ++	"bad device authz response: wrong content type",
    ++	expected_stderr => qr/failed to parse device authorization: unexpected content type/
    ++);
    ++$node->connect_fails(
    ++	connstr(stage => 'token', content_type => 'text/plain'),
    ++	"bad token response: wrong content type",
    ++	expected_stderr => qr/failed to parse access token response: unexpected content type/
    ++);
    ++$node->connect_fails(
    ++	connstr(stage => 'token', content_type => 'application/jsonx'),
    ++	"bad token response: wrong content type (correct prefix)",
    ++	expected_stderr => qr/failed to parse access token response: unexpected content type/
    ++);
    ++
    ++$node->connect_fails(
    ++	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
    ++	"bad token response: server overflows the device authz interval",
    ++	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
    ++);
    ++
     +$node->stop;
     +
     +done_testing();
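The connstr() helper above and the test server agree on a small convention: test parameters travel as Base64-encoded JSON inside the oauth_client_id field. Both directions can be sketched in Python (a standalone illustration of the convention used by the tests, not part of libpq):

```python
import base64
import json

def encode_test_params(**params):
    """Perl side (connstr() in 001_server.pl): pack parameters into
    a client_id string as Base64(JSON)."""
    return base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")

def decode_test_params(client_id):
    """Server side (do_POST() in oauth_server.py): recover the dict
    from the peer's client_id."""
    return json.loads(base64.b64decode(client_id))
```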
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     @@
     +#! /usr/bin/env python3
     +
    ++import base64
     +import http.server
     +import json
     +import os
     +import sys
    ++import time
    ++import urllib.parse
    ++from collections import defaultdict
     +
     +
     +class OAuthHandler(http.server.BaseHTTPRequestHandler):
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        Switches the behavior of the provider depending on the issuer URI.
     +        """
     +        self._alt_issuer = self.path.startswith("/alternate/")
    ++        self._parameterized = self.path.startswith("/param/")
    ++
     +        if self._alt_issuer:
     +            self.path = self.path.removeprefix("/alternate")
    ++        elif self._parameterized:
    ++            self.path = self.path.removeprefix("/param")
     +
     +    def do_GET(self):
    ++        self._response_code = 200
     +        self._check_issuer()
     +
     +        if self.path == "/.well-known/openid-configuration":
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +
     +        self._send_json(resp)
     +
    ++    def _parse_params(self) -> dict[str, list[str]]:
    ++        """
    ++        Parses apart the form-urlencoded request body and returns the resulting
    ++        dict. For use by do_POST().
    ++        """
    ++        size = int(self.headers["Content-Length"])
    ++        form = self.rfile.read(size)
    ++
    ++        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
    ++        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
    ++
    ++    @property
    ++    def client_id(self) -> str:
    ++        """
    ++        Returns the client_id sent in the POST body. self._parse_params() must
    ++        have been called first.
    ++        """
    ++        return self._params["client_id"][0]
    ++
     +    def do_POST(self):
    ++        self._response_code = 200
     +        self._check_issuer()
     +
    ++        self._params = self._parse_params()
    ++        if self._parameterized:
    ++            # Pull encoded test parameters out of the peer's client_id field.
    ++            # This is expected to be Base64-encoded JSON.
    ++            js = base64.b64decode(self.client_id)
    ++            self._test_params = json.loads(js)
    ++
     +        if self.path == "/authorize":
     +            resp = self.authorization()
     +        elif self.path == "/token":
     +            resp = self.token()
     +        else:
    -+            self.send_error(404, "Not Found")
    ++            self.send_error(404)
     +            return
     +
     +        self._send_json(resp)
     +
    ++    def _should_modify(self) -> bool:
    ++        """
    ++        Returns True if the client has requested a modification to this stage of
    ++        the exchange.
    ++        """
    ++        if not hasattr(self, "_test_params"):
    ++            return False
    ++
    ++        stage = self._test_params.get("stage")
    ++
    ++        return (
    ++            stage == "all"
    ++            or (stage == "device" and self.path == "/authorize")
    ++            or (stage == "token" and self.path == "/token")
    ++        )
    ++
    ++    def _content_type(self) -> str:
    ++        """
    ++        Returns "application/json" unless the test has requested something
    ++        different.
    ++        """
    ++        if self._should_modify() and "content_type" in self._test_params:
    ++            return self._test_params["content_type"]
    ++
    ++        return "application/json"
    ++
    ++    def _interval(self) -> int:
    ++        """
    ++        Returns 0 unless the test has requested something different.
    ++        """
    ++        if self._should_modify() and "interval" in self._test_params:
    ++            return self._test_params["interval"]
    ++
    ++        return 0
    ++
    ++    def _retry_code(self) -> str:
    ++        """
    ++        Returns "authorization_pending" unless the test has requested something
    ++        different.
    ++        """
    ++        if self._should_modify() and "retry_code" in self._test_params:
    ++            return self._test_params["retry_code"]
    ++
    ++        return "authorization_pending"
    ++
     +    def _send_json(self, js: JsonObject) -> None:
     +        """
     +        Sends the provided JSON dict as an application/json response.
    ++        self._response_code can be modified to send JSON error responses.
     +        """
    -+
     +        resp = json.dumps(js).encode("ascii")
    ++        self.log_message("sending JSON response: %s", resp)
     +
    -+        self.send_response(200, "OK")
    -+        self.send_header("Content-Type", "application/json")
    ++        self.send_response(self._response_code)
    ++        self.send_header("Content-Type", self._content_type())
     +        self.send_header("Content-Length", str(len(resp)))
     +        self.end_headers()
     +
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +
     +    def config(self) -> JsonObject:
     +        port = self.server.socket.getsockname()[1]
    ++
     +        issuer = f"http://localhost:{port}"
     +        if self._alt_issuer:
     +            issuer += "/alternate"
    ++        elif self._parameterized:
    ++            issuer += "/param"
     +
     +        return {
     +            "issuer": issuer,
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
     +        }
     +
    ++    @property
    ++    def _token_state(self):
    ++        """
    ++        A cached _TokenState object for the connected client (as determined by
    ++        the request's client_id), or a new one if it doesn't already exist.
    ++
    ++        This relies on the existence of a defaultdict attached to the server;
    ++        see main() below.
    ++        """
    ++        return self.server.token_state[self.client_id]
    ++
    ++    def _remove_token_state(self):
    ++        """
    ++        Removes any cached _TokenState for the current client_id. Call this
    ++        after the token exchange ends to get rid of unnecessary state.
    ++        """
    ++        if self.client_id in self.server.token_state:
    ++            del self.server.token_state[self.client_id]
    ++
     +    def authorization(self) -> JsonObject:
     +        uri = "https://example.com/"
     +        if self._alt_issuer:
     +            uri = "https://example.org/"
     +
    -+        return {
    ++        resp = {
     +            "device_code": "postgres",
     +            "user_code": "postgresuser",
    -+            "interval": 0,
     +            "verification_uri": uri,
     +            "expires-in": 5,
     +        }
     +
    ++        interval = self._interval()
    ++        if interval is not None:
    ++            resp["interval"] = interval
    ++            self._token_state.min_delay = interval
    ++        else:
    ++            self._token_state.min_delay = 5  # default
    ++
    ++        return resp
    ++
     +    def token(self) -> JsonObject:
    ++        if self._should_modify() and "retries" in self._test_params:
    ++            retries = self._test_params["retries"]
    ++
    ++            # Check to make sure the token interval is being respected.
    ++            now = time.monotonic()
    ++            if self._token_state.last_try is not None:
    ++                delay = now - self._token_state.last_try
    ++                assert (
    ++                    delay > self._token_state.min_delay
    ++                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
    ++
    ++            self._token_state.last_try = now
    ++
    ++            # If we haven't reached the required number of retries yet, return a
    ++            # "pending" response.
    ++            if self._token_state.retries < retries:
    ++                self._token_state.retries += 1
    ++
    ++                self._response_code = 400
    ++                return {"error": self._retry_code()}
    ++
    ++        # Clean up any retry tracking state now that the exchange is ending.
    ++        self._remove_token_state()
    ++
     +        token = "9243959234"
     +        if self._alt_issuer:
     +            token += "-alt"
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +def main():
     +    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
     +
    ++    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
    ++    # track state across token requests. The use of defaultdict ensures that new
    ++    # entries will be created automatically.
    ++    class _TokenState:
    ++        retries = 0
    ++        min_delay = None
    ++        last_try = None
    ++
    ++    s.token_state = defaultdict(_TokenState)
    ++
     +    # Give the parent the port number to contact (this is also the signal that
     +    # we're ready to receive requests).
     +    port = s.socket.getsockname()[1]
    @@ src/test/modules/oauth_validator/validator.c (new)
     +	/* Check to make sure our private state still exists. */
     +	if (state->private_data != PRIVATE_COOKIE)
     +		elog(ERROR, "oauth_validator: private state cookie changed to %p",
    -+				state->private_data);
    ++			 state->private_data);
     +
     +	res = palloc(sizeof(ValidatorModuleResult));
     +
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +
     +use warnings;
     +use strict;
    -+use threads;
     +use Scalar::Util;
     +use Socket;
     +use IO::Select;
    ++use Test::More;
     +
     +local *server_socket;
     +
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +	my $port;
     +
     +	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
    -+		// die "failed to start OAuth server: $!";
    ++		or die "failed to start OAuth server: $!";
     +
     +	read($read_fh, $port, 7) // die "failed to read port number: $!";
     +	chomp $port;
    @@ src/test/perl/PostgreSQL/Test/OAuthServer.pm (new)
     +	$self->{'port'} = $port;
     +	$self->{'child'} = $read_fh;
     +
    -+	print("# OAuth provider (PID $pid) is listening on port $port\n");
    ++	diag("OAuth provider (PID $pid) is listening on port $port\n");
     +}
     +
     +sub stop
     +{
     +	my $self = shift;
     +
    -+	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
    ++	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
     +
     +	kill(15, $self->{'pid'});
     +	$self->{'pid'} = undef;
    @@ src/tools/pgindent/typedefs.list: VacuumRelation
      ValidIOData
      ValidateIndexState
     +ValidatorModuleState
    ++ValidatorModuleResult
      ValuesScan
      ValuesScanState
      Var
6:  84c6893325 ! 6:  d163d2ca0a Review comments
    @@ Commit message
         Fixes and tidy-ups following a review of v21, a few items
         are (listed in no specific order):
     
    -    * Implement a version check for libcurl in autoconf, the
    -      equivalent check for Meson is still a TODO.
    +    * Implement a version check for libcurl in autoconf, the equivalent
    +      check for Meson is still a TODO. [ed: moved to an earlier commit]
         * Address a few TODOs in the code
    -    * libpq JSON support memory management fixups [ed: these have been moved
    -      to an earlier commit]
    -
    - ## config/programs.m4 ##
    -@@ config/programs.m4: if test "$pgac_cv_ldap_safe" != yes; then
    - *** also uses LDAP will crash on exit.])
    - fi])
    - 
    -+# PGAC_CHECK_LIBCURL
    -+# ------------------
    -+# Check for libcurl 8.4.0 or higher since earlier versions can be compiled
    -+# with a codepatch containing exit(), and PostgreSQL does not allow any lib
    -+# linked to libpq which can call exit.
    - 
    -+# PGAC_CHECK_LIBCURL
    -+AC_DEFUN([PGAC_CHECK_LIBCURL],
    -+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
    -+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
    -+[#include <curl/curlver.h>
    -+#if LIBCURL_VERSION_MAJOR <= 8 && LIBCURL_VERSION_MINOR < 4
    -+choke me
    -+#endif], [])],
    -+[pgac_cv_check_libcurl=yes],
    -+[pgac_cv_check_libcurl=no])])
    -+
    -+if test "$pgac_cv_check_libcurl" != yes; then
    -+    AC_MSG_ERROR([
    -+*** The installed version of libcurl is too old to use with PostgreSQL.
    -+*** libcurl version 8.4.0 or later is required.])
    -+fi])
    - 
    - # PGAC_CHECK_READLINE
    - # -------------------
    -
    - ## configure ##
    -@@ configure: else
    -   as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
    - fi
    - 
    -+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
    -+$as_echo_n "checking for compatible libcurl... " >&6; }
    -+if ${pgac_cv_check_libcurl+:} false; then :
    -+  $as_echo_n "(cached) " >&6
    -+else
    -+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
    -+/* end confdefs.h.  */
    -+#include <curl/curlver.h>
    -+#if LIBCURL_VERSION_MAJOR <= 8 && LIBCURL_VERSION_MINOR < 4
    -+choke me
    -+#endif
    -+int
    -+main ()
    -+{
    -+
    -+  ;
    -+  return 0;
    -+}
    -+_ACEOF
    -+if ac_fn_c_try_compile "$LINENO"; then :
    -+  pgac_cv_check_libcurl=yes
    -+else
    -+  pgac_cv_check_libcurl=no
    -+fi
    -+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
    -+fi
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
    -+$as_echo "$pgac_cv_check_libcurl" >&6; }
    -+
    -+if test "$pgac_cv_check_libcurl" != yes; then
    -+    as_fn_error $? "
    -+*** The installed version of libcurl is too old to use with PostgreSQL.
    -+*** libcurl version 8.4.0 or later is required." "$LINENO" 5
    -+fi
    - fi
    - 
    - # for contrib/sepgsql
    -
    - ## configure.ac ##
    -@@ configure.ac: AC_SUBST(LDAP_LIBS_BE)
    - 
    - if test "$with_oauth" = curl ; then
    -   AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
    -+  PGAC_CHECK_LIBCURL
    - fi
    - 
    - # for contrib/sepgsql
    +    * libpq JSON support memory management fixups [ed: moved to an earlier
    +      commit]
     
      ## src/backend/libpq/auth-oauth.c ##
    -@@ src/backend/libpq/auth-oauth.c: generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
    - 	initStringInfo(&buf);
    - 
    - 	/*
    --	 * TODO: note that escaping here should be belt-and-suspenders, since
    --	 * escapable characters aren't valid in either the issuer URI or the scope
    --	 * list, but the HBA doesn't enforce that yet.
    -+	 * Escaping the string here is belt-and-suspenders defensive programming
    -+	 * since escapable characters aren't valid in either the issuer URI or the
    -+	 * scope list, but the HBA doesn't enforce that yet.
    - 	 */
    - 	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
    - 
     @@ src/backend/libpq/auth-oauth.c: validate_token_format(const char *header)
      	if (!header || strlen(header) <= 7)
      	{
    @@ src/backend/libpq/auth-oauth.c: validate(Port *port, const char *auth)
      	}
      
     
    - ## src/include/common/oauth-common.h ##
    -@@
    -  * oauth-common.h
    -  *		Declarations for helper functions used for OAuth/OIDC authentication
    -  *
    -- * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
    -+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
    -  * Portions Copyright (c) 1994, Regents of the University of California
    -  *
    -  * src/include/common/oauth-common.h
    -
    - ## src/interfaces/libpq/Makefile ##
    -@@ src/interfaces/libpq/Makefile: backend_src = $(top_srcdir)/src/backend
    - # which seems to insert references to that even in pure C code. Excluding
    - # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
    - # which use this function for instrumentation of function exit.
    -+# libcurl registers an exit handler in the memory debugging code when running
    -+# with LeakSanitizer.
    - # Skip the test when profiling, as gcc may insert exit() calls for that.
    - # Also skip the test on platforms where libpq infrastructure may be provided
    - # by statically-linked libraries, as we can't expect them to honor this
    -@@ src/interfaces/libpq/Makefile: backend_src = $(top_srcdir)/src/backend
    - libpq-refs-stamp: $(shlib)
    - ifneq ($(enable_coverage), yes)
    - ifeq (,$(filter solaris,$(PORTNAME)))
    --	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
    -+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
    - 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
    - 	fi
    - endif
    -
      ## src/interfaces/libpq/fe-auth-oauth-curl.c ##
    -@@
    -  * fe-auth-oauth-curl.c
    -  *	   The libcurl implementation of OAuth/OIDC authentication.
    -  *
    -- * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
    -+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
    -  * Portions Copyright (c) 1994, Regents of the University of California
    -  *
    -  * IDENTIFICATION
     @@
      #include "libpq-int.h"
      #include "mb/pg_wchar.h"
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c
      /*
       * Parsed JSON Representations
       *
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: free_token(struct token *tok)
    - /* States for the overall async machine. */
    - typedef enum
    - {
    --	OAUTH_STEP_INIT,
    -+	OAUTH_STEP_INIT = 0,
    - 	OAUTH_STEP_DISCOVERY,
    - 	OAUTH_STEP_DEVICE_AUTHORIZATION,
    - 	OAUTH_STEP_TOKEN_REQUEST,
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: typedef enum
    - } OAuthStep;
    - 
    - /*
    -- * The async_ctx holds onto state that needs to persist across multiple calls to
    -- * pg_fe_run_oauth_flow(). Almost everything interacts with this in some way.
    -+ * The async_ctx holds onto state that needs to persist across multiple calls
    -+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
    -+ * way.
    -  */
    - struct async_ctx
    - {
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: struct async_ctx
    - 	int			timerfd;		/* a timerfd for signaling async timeouts */
    - #endif
    - 	pgsocket	mux;			/* the multiplexer socket containing all
    --								 * descriptors tracked by cURL, plus the
    -+								 * descriptors tracked by libcurl, plus the
    - 								 * timerfd */
    --	CURLM	   *curlm;			/* top-level multi handle for cURL operations */
    -+	CURLM	   *curlm;			/* top-level multi handle for libcurl
    -+								 * operations */
    - 	CURL	   *curl;			/* the (single) easy handle for serial
    - 								 * requests */
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: struct async_ctx
    - 	 *				actx_error[_str] to manipulate this. This must be filled
    - 	 *				with something useful on an error.
    - 	 *
    --	 * - curl_err:	an optional static error buffer used by cURL to put
    -+	 * - curl_err:	an optional static error buffer used by libcurl to put
    - 	 *				detailed information about failures. Unfortunately
    - 	 *				untranslatable.
    - 	 *
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: struct async_ctx
    - 	 */
    - 	const char *errctx;			/* not freed; must point to static allocation */
    - 	PQExpBufferData errbuf;
    --	char		curl_err[CURL_ERROR_SIZE];
    -+	PQExpBufferData curl_err;
    - 
    - 	/*
    - 	 * These documents need to survive over multiple calls, and are therefore
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: struct async_ctx
      	struct device_authz authz;
      
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: struct async_ctx
      };
      
      /*
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: free_curl_async_ctx(PGconn *conn, void *ctx)
    - 
    - 		if (err)
    - 			libpq_append_conn_error(conn,
    --									"cURL easy handle removal failed: %s",
    -+									"libcurl easy handle removal failed: %s",
    - 									curl_multi_strerror(err));
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: free_curl_async_ctx(PGconn *conn, void *ctx)
    - 
    - 		if (err)
    - 			libpq_append_conn_error(conn,
    --									"cURL multi handle cleanup failed: %s",
    -+									"libcurl multi handle cleanup failed: %s",
    - 									curl_multi_strerror(err));
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: free_curl_async_ctx(PGconn *conn, void *ctx)
    - 	appendPQExpBufferStr(&(ACTX)->errbuf, S)
    - 
    - /*
    -- * Macros for getting and setting state for the connection's two cURL handles,
    -- * so you don't have to write out the error handling every time.
    -+ * Macros for getting and setting state for the connection's two libcurl
    -+ * handles, so you don't have to write out the error handling every time.
    -  */
    - 
    - #define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
    - 		actx_error(actx, "no content type was provided");
    - 		goto cleanup;
    - 	}
    --	else if (strcasecmp(content_type, "application/json") != 0)
    -+
    -+	/*
    -+	 * We only check the media-type and not the parameters, so we need to
    -+	 * perform a length limited comparison and not compare the whole string.
    -+	 */
    -+	if (pg_strncasecmp(content_type, "application/json", strlen("application/json")) != 0)
    - 	{
    --		actx_error(actx, "unexpected content type \"%s\"", content_type);
    --		goto cleanup;
    -+		actx_error(actx, "unexpected content type: \"%s\"", content_type);
    -+		return false;
    - 	}
    - 
    - 	if (strlen(resp->data) != resp->len)
    - 	{
    - 		actx_error(actx, "response contains embedded NULLs");
    --		goto cleanup;
    -+		return false;
    + 		return false;
      	}
      
     -	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: parse_oauth_json(struct async_ctx *ac
      
      	ctx.errbuf = &actx->errbuf;
      	ctx.fields = fields;
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
    - 		authz->interval = parse_interval(authz->interval_str);
    - 	else
    - 	{
    --		/* TODO: handle default interval of 5 seconds */
    -+		/*
    -+		 * RFC 8628 specify 5 seconds as the default value if the server
    -+		 * doesn't provide an interval.
    -+		 */
    -+		authz->interval = 5;
    - 	}
    - 
    - 	return true;
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: parse_access_token(struct async_ctx *actx, struct token *tok)
    - }
    - 
    - /*
    -- * cURL Multi Setup/Callbacks
    -+ * libcurl Multi Setup/Callbacks
    -  */
    - 
    - /*
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_multiplexer(struct async_ctx *actx)
    - 
    - /*
    -  * Adds and removes sockets from the multiplexer set, as directed by the
    -- * cURL multi handle.
    -+ * libcurl multi handle.
    -  */
    - static int
    - register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
    - 			break;
    - 
    - 		default:
    --			actx_error(actx, "unknown cURL socket operation (%d)", what);
    -+			actx_error(actx, "unknown libcurl socket operation: %d", what);
    - 			return -1;
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
    - 			break;
    - 
    - 		default:
    --			actx_error(actx, "unknown cURL socket operation (%d)", what);
    -+			actx_error(actx, "unknown libcurl socket operation: %d", what);
    - 			return -1;
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
    - 		/*
    - 		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
    - 		 * whether successful or not. Failed entries contain a non-zero errno
    --		 * in the `data` field.
    -+		 * in the data field.
    - 		 */
    - 		Assert(ev_out[i].flags & EV_ERROR);
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
    - 
    - /*
    -  * Adds or removes timeouts from the multiplexer set, as directed by the
    -- * cURL multi handle. Rather than continually adding and removing the timer,
    -- * we keep it in the set at all times and just disarm it when it's not
    -- * needed.
    -+ * libcurl multi handle. Rather than continually adding and removing the timer,
    -+ * we keep it in the set at all times and just disarm it when it's not needed.
    -  */
    - static int
    - register_timer(CURLM *curlm, long timeout, void *ctx)
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_timer(CURLM *curlm, long timeout, void *ctx)
    - 	else if (timeout == 0)
    - 	{
    - 		/*
    --		 * A zero timeout means cURL wants us to call back immediately. That's
    --		 * not technically an option for timerfd, but we can make the timeout
    --		 * ridiculously short.
    -+		 * A zero timeout means libcurl wants us to call back immediately.
    -+		 * That's not technically an option for timerfd, but we can make the
    -+		 * timeout ridiculously short.
    - 		 *
    - 		 * TODO: maybe just signal drive_request() to immediately call back in
    - 		 * this case?
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_timer(CURLM *curlm, long timeout, void *ctx)
    - 	return 0;
    - }
    - 
    -+static int
    -+debug_callback(CURL *handle, curl_infotype *type, char *data, size_t size,
    -+			   void *clientp)
    -+{
    -+	struct async_ctx *actx = (struct async_ctx *) clientp;
    -+
    -+	/* For now we only store TEXT debug information, extending is a TODO */
    -+	if (type == CURLINFO_TEXT)
    -+		appendBinaryPQExpBuffer(&actx->curl_err, data, size);
    -+
    -+	return 0;
    -+}
    -+
    - /*
    -- * Initializes the two cURL handles in the async_ctx. The multi handle,
    -+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
    -  * actx->curlm, is what drives the asynchronous engine and tells us what to do
    -  * next. The easy handle, actx->curl, encapsulates the state for a single
    -  * request/response. It's added to the multi handle as needed, during
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: register_timer(CURLM *curlm, long timeout, void *ctx)
    - static bool
    - setup_curl_handles(struct async_ctx *actx)
    - {
    --	curl_version_info_data	*curl_info;
    -+	curl_version_info_data *curl_info;
    - 
    - 	/*
    - 	 * Create our multi handle. This encapsulates the entire conversation with
    --	 * cURL for this connection.
    -+	 * libcurl for this connection.
    - 	 */
    - 	actx->curlm = curl_multi_init();
    - 	if (!actx->curlm)
    - 	{
    - 		/* We don't get a lot of feedback on the failure reason. */
    --		actx_error(actx, "failed to create cURL multi handle");
    -+		actx_error(actx, "failed to create libcurl multi handle");
    - 		return false;
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_curl_handles(struct async_ctx *actx)
    - 	actx->curl = curl_easy_init();
    - 	if (!actx->curl)
    - 	{
    --		actx_error(actx, "failed to create cURL handle");
    -+		actx_error(actx, "failed to create libcurl handle");
    - 		return false;
    - 	}
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_curl_handles(struct async_ctx *actx)
    - 		/* No alternative resolver, TODO: warn about timeouts */
    - 	}
    - 
    --	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
    -+	/*
    -+	 * Set a callback for retrieving error information from libcurl, the
    -+	 * function only takes effect when CURLOPT_VERBOSE has been set so make
    -+	 * sure the order is kept.
    -+	 */
    -+	CHECK_SETOPT(actx, CURLOPT_DEBUGDATA, actx, return false);
    -+	CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
    - 	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
    --	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
    - 
    - 	/*
    - 	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_curl_handles(struct async_ctx *actx)
      	 * pretty strict when it comes to provider behavior, so we have to check
      	 * what comes back anyway.)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_curl_handles(struct async_ctx *
      	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
      
      	return true;
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_curl_handles(struct async_ctx *actx)
    -  */
    - 
    - /*
    -- * Response callback from cURL; appends the response body into actx->work_data.
    -- * See start_request().
    -+ * Response callback from libcurl which appends the response body into
    -+ * actx->work_data (see start_request()). The maximum size of the data is
    -+ * defined by CURL_MAX_WRITE_SIZE which by default is 16kb (and can only be
    -+ * changed by recompiling libcurl).
    -  */
    - static size_t
    - append_data(char *buf, size_t size, size_t nmemb, void *userdata)
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: append_data(char *buf, size_t size, size_t nmemb, void *userdata)
      	PQExpBuffer resp = userdata;
      	size_t		len = size * nmemb;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: drive_request(struct async_ctx *actx)
      	{
      		/* We'll come back again. */
      		return PGRES_POLLING_READING;
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: drive_request(struct async_ctx *actx)
    - 		if (msg->msg != CURLMSG_DONE)
    - 		{
    - 			/*
    --			 * Future cURL versions may define new message types; we don't
    -+			 * Future libcurl versions may define new message types; we don't
    - 			 * know how to handle them, so we'll ignore them.
    - 			 */
    - 			continue;
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: drive_request(struct async_ctx *actx)
    - 		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
    - 		if (err)
    - 		{
    --			actx_error(actx, "cURL easy handle removal failed: %s",
    -+			actx_error(actx, "libcurl easy handle removal failed: %s",
    - 					   curl_multi_strerror(err));
    - 			return PGRES_POLLING_FAILED;
    - 		}
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: start_device_authz(struct async_ctx *actx, PGconn *conn)
      	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
      	if (conn->oauth_scope)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: finish_token_request(struct async_ctx
     +	return false;
      }
      
    -+
    - /*
    -- * The top-level, nonblocking entry point for the cURL implementation. This will
    -- * be called several times to pump the async engine.
    -+ * The top-level, nonblocking entry point for the libcurl implementation. This
    -+ * will be called several times to pump the async engine.
    -  *
    -  * The architecture is based on PQconnectPoll(). The first half drives the
    -  * connection state forward as necessary, returning if we're not ready to
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    - 	struct token tok = {0};
    - 
    - 	/*
    --	 * XXX This is not safe. cURL has stringent requirements for the thread
    -+	 * XXX This is not safe. libcurl has stringent requirements for the thread
    - 	 * context in which you call curl_global_init(), because it's going to try
    - 	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
    - 	 * probably need to consider both the TLS backend libcurl is compiled
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    - 	 * Recent versions of libcurl have improved the thread-safety situation,
    - 	 * but you apparently can't check at compile time whether the
    - 	 * implementation is thread-safe, and there's a chicken-and-egg problem
    --	 * where you can't check the thread safety until you've initialized cURL,
    --	 * which you can't do before you've made sure it's thread-safe...
    -+	 * where you can't check the thread safety until you've initialized
    -+	 * libcurl, which you can't do before you've made sure it's thread-safe...
    - 	 *
    - 	 * We know we've already initialized Winsock by this point, so we should
    --	 * be able to safely skip that bit. But we have to tell cURL to initialize
    --	 * everything else, because other pieces of our client executable may
    --	 * already be using cURL for their own purposes. If we initialize libcurl
    --	 * first, with only a subset of its features, we could break those other
    --	 * clients nondeterministically, and that would probably be a nightmare to
    --	 * debug.
    -+	 * be able to safely skip that bit. But we have to tell libcurl to
    -+	 * initialize everything else, because other pieces of our client
    -+	 * executable may already be using libcurl for their own purposes. If we
    -+	 * initialize libcurl first, with only a subset of its features, we could
    -+	 * break those other clients nondeterministically, and that would probably
    -+	 * be a nightmare to debug.
    - 	 */
    - 	curl_global_init(CURL_GLOBAL_ALL
    - 					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    - 
    - 		initPQExpBuffer(&actx->work_data);
    - 		initPQExpBuffer(&actx->errbuf);
    -+		initPQExpBuffer(&actx->curl_err);
      
    - 		if (!setup_multiplexer(actx))
    - 			goto error_return;
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
      				 * errors; anything else and we bail.
      				 */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pg
      					goto error_return;
      				}
      
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    - 				 */
    - 				if (strcmp(err->error, "slow_down") == 0)
    - 				{
    --					actx->authz.interval += 5;	/* TODO check for overflow? */
    -+					int			prev_interval = actx->authz.interval;
    -+
    -+					actx->authz.interval += 5;
    -+					if (actx->authz.interval < prev_interval)
    -+					{
    -+						actx_error(actx, "slow_down interval overflow");
    -+						goto error_return;
    -+					}
    - 				}
    - 
    - 				/*
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: error_return:
    - 	else
    - 		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
    - 
    --	if (actx->curl_err[0])
    --	{
    --		size_t		len;
    --
    --		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
    --
    --		/* Sometimes libcurl adds a newline to the error buffer. :( */
    --		len = conn->errorMessage.len;
    --		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
    --		{
    --			conn->errorMessage.data[len - 2] = ')';
    --			conn->errorMessage.data[len - 1] = '\0';
    --			conn->errorMessage.len--;
    --		}
    --	}
    -+	if (actx->curl_err.len > 0)
    -+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err.data);
    - 
    - 	appendPQExpBufferStr(&conn->errorMessage, "\n");
    - 
     
      ## src/interfaces/libpq/fe-auth-oauth.c ##
    -@@
    -  * fe-auth-oauth.c
    -  *	   The front-end (client) implementation of OAuth/OIDC authentication.
    -  *
    -- * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
    -+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
    -  * Portions Copyright (c) 1994, Regents of the University of California
    -  *
    -  * IDENTIFICATION
     @@ src/interfaces/libpq/fe-auth-oauth.c: handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
      		return false;
      	}
    @@ src/interfaces/libpq/fe-auth-oauth.c: handle_oauth_sasl_error(PGconn *conn, char
      
      	initPQExpBuffer(&ctx.errbuf);
      	sem.semstate = &ctx;
    -
    - ## src/interfaces/libpq/fe-auth-oauth.h ##
    -@@
    -  *
    -  *	  Definitions for OAuth authentication implementations
    -  *
    -- * Portions Copyright (c) 2023, PostgreSQL Global Development Group
    -+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
    -  *
    -  * src/interfaces/libpq/fe-auth-oauth.h
    -  *
    -
    - ## src/test/modules/oauth_validator/validator.c ##
    -@@ src/test/modules/oauth_validator/validator.c: validate_token(ValidatorModuleState *state, const char *token, const char *role)
    - 	/* Check to make sure our private state still exists. */
    - 	if (state->private_data != PRIVATE_COOKIE)
    - 		elog(ERROR, "oauth_validator: private state cookie changed to %p",
    --				state->private_data);
    -+			 state->private_data);
    - 
    - 	res = palloc(sizeof(ValidatorModuleResult));
    - 
    -
    - ## src/test/perl/PostgreSQL/Test/OAuthServer.pm ##
    -@@ src/test/perl/PostgreSQL/Test/OAuthServer.pm: package PostgreSQL::Test::OAuthServer;
    - 
    - use warnings;
    - use strict;
    --use threads;
    - use Scalar::Util;
    - use Socket;
    - use IO::Select;
    -+use Test::More;
    - 
    - local *server_socket;
    - 
    -@@ src/test/perl/PostgreSQL/Test/OAuthServer.pm: sub run
    - 	my $port;
    - 
    - 	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
    --		// die "failed to start OAuth server: $!";
    -+		or die "failed to start OAuth server: $!";
    - 
    --	read($read_fh, $port, 7) // die "failed to read port number: $!";
    -+	read($read_fh, $port, 7) or die "failed to read port number: $!";
    - 	chomp $port;
    - 	die "server did not advertise a valid port"
    - 		unless Scalar::Util::looks_like_number($port);
    -@@ src/test/perl/PostgreSQL/Test/OAuthServer.pm: sub run
    - 	$self->{'port'} = $port;
    - 	$self->{'child'} = $read_fh;
    - 
    --	print("# OAuth provider (PID $pid) is listening on port $port\n");
    -+	diag("OAuth provider (PID $pid) is listening on port $port\n");
    - }
    - 
    - sub stop
    - {
    - 	my $self = shift;
    - 
    --	print("# Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
    -+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
    - 
    - 	kill(15, $self->{'pid'});
    - 	$self->{'pid'} = undef;
    -
    - ## src/tools/pgindent/typedefs.list ##
    -@@ src/tools/pgindent/typedefs.list: VacuumStmt
    - ValidIOData
    - ValidateIndexState
    - ValidatorModuleState
    -+ValidatorModuleResult
    - ValuesScan
    - ValuesScanState
    - Var
-:  ---------- > 7:  9117bf8be2 DO NOT MERGE: Add pytest suite for OAuth
Attachment: v24-0003-common-jsonapi-support-libpq-as-a-client.patch (application/octet-stream)
From ae3ae1cfaa012bf760e75e77e0a86d4ab2553958 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v24 3/8] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed rather than exit()ing.

Co-authored-by: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_combinebackup/Makefile             |   4 +-
 src/bin/pg_combinebackup/meson.build          |   2 +-
 src/bin/pg_verifybackup/Makefile              |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 448 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   5 +-
 src/include/common/jsonapi.h                  |  20 +-
 src/test/modules/test_json_parser/Makefile    |   3 +
 src/test/modules/test_json_parser/meson.build |   4 +-
 10 files changed, 361 insertions(+), 137 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,7 +32,7 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index 1d4b9c218f..cab677b574 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,7 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/common/Makefile b/src/common/Makefile
index 5712078dae..f1da2ed13d 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 2527dbe1da..bb2e8ca2e1 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,66 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define REALLOC realloc
+#define FREE(s) free(s)
+
+#define appendStrVal			appendPQExpBuffer
+#define appendBinaryStrVal		appendBinaryPQExpBuffer
+#define appendStrValChar		appendPQExpBufferChar
+/* XXX should we add a macro version to PQExpBuffer? */
+#define appendStrValCharMacro	appendPQExpBufferChar
+#define createStrVal			createPQExpBuffer
+#define initStrVal				initPQExpBuffer
+#define resetStrVal				resetPQExpBuffer
+#define termStrVal				termPQExpBuffer
+#define destroyStrVal			destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define REALLOC repalloc
+
+/*
+ * Backend pfree() doesn't handle NULL pointers like the frontend's does; smooth
+ * that over to reduce mental gymnastics. Avoid multiple evaluation of the macro
+ * argument to avoid future hair-pulling.
+ */
+#define FREE(s) do {	\
+	void *__v = (s);	\
+	if (__v)			\
+		pfree(__v);		\
+} while (0)
+
+#define appendStrVal			appendStringInfo
+#define appendBinaryStrVal		appendBinaryStringInfo
+#define appendStrValChar		appendStringInfoChar
+#define appendStrValCharMacro	appendStringInfoCharMacro
+#define createStrVal			makeStringInfo
+#define initStrVal				initStringInfo
+#define resetStrVal				resetStringInfo
+#define termStrVal(s)			pfree((s)->data)
+#define destroyStrVal			destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -103,7 +159,7 @@ struct JsonIncrementalState
 {
 	bool		is_last_chunk;
 	bool		partial_completed;
-	StringInfoData partial_token;
+	StrValType	partial_token;
 };
 
 /*
@@ -219,6 +275,7 @@ static JsonParseErrorType parse_object(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType parse_array_element(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType parse_array(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
+static bool allocate_incremental_state(JsonLexContext *lex);
 
 /* the null action object used for pure validation */
 JsonSemAction nullSemAction =
@@ -273,15 +330,11 @@ IsValidJsonNumber(const char *str, size_t len)
 {
 	bool		numeric_error;
 	size_t		total_len;
-	JsonLexContext dummy_lex;
+	JsonLexContext dummy_lex = {0};
 
 	if (len <= 0)
 		return false;
 
-	dummy_lex.incremental = false;
-	dummy_lex.inc_state = NULL;
-	dummy_lex.pstack = NULL;
-
 	/*
 	 * json_lex_number expects a leading  '-' to have been eaten already.
 	 *
@@ -321,6 +374,9 @@ IsValidJsonNumber(const char *str, size_t len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
@@ -328,7 +384,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -341,13 +399,70 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
 
 	return lex;
 }
 
+/*
+ * Allocates the internal bookkeeping structures for incremental parsing. This
+ * can only fail in-band with FRONTEND code.
+ */
+#define JS_STACK_CHUNK_SIZE 64
+#define JS_MAX_PROD_LEN 10		/* more than we need */
+#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
+								 * number */
+static bool
+allocate_incremental_state(JsonLexContext *lex)
+{
+	void	   *pstack,
+			   *prediction,
+			   *fnames,
+			   *fnull;
+
+	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
+	pstack = ALLOC(sizeof(JsonParserStack));
+	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
+	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
+	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+#ifdef FRONTEND
+	if (!lex->inc_state
+		|| !pstack
+		|| !prediction
+		|| !fnames
+		|| !fnull)
+	{
+		FREE(lex->inc_state);
+		FREE(pstack);
+		FREE(prediction);
+		FREE(fnames);
+		FREE(fnull);
+
+		return false;
+	}
+#endif
+
+	initStrVal(&(lex->inc_state->partial_token));
+	lex->pstack = pstack;
+	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
+	lex->pstack->prediction = prediction;
+	lex->pstack->pred_index = 0;
+	lex->pstack->fnames = fnames;
+	lex->pstack->fnull = fnull;
+
+	lex->incremental = true;
+	return true;
+}
+
 
 /*
  * makeJsonLexContextIncremental
@@ -357,19 +472,20 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
  * we don't need the input, that will be handed in bit by bit to the
  * parse routine. We also need an accumulator for partial tokens in case
  * the boundary between chunks happens to fall in the middle of a token.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
-#define JS_STACK_CHUNK_SIZE 64
-#define JS_MAX_PROD_LEN 10		/* more than we need */
-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
-								 * number */
-
 JsonLexContext *
 makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 							  bool need_escapes)
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
+
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -377,42 +493,60 @@ makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 
 	lex->line_number = 1;
 	lex->input_encoding = encoding;
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+	if (!allocate_incremental_state(lex))
+	{
+		if (lex->flags & JSONLEX_FREE_STRUCT)
+			FREE(lex);
+		return NULL;
+	}
+
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+
 	return lex;
 }
 
-static inline void
+static inline bool
 inc_lex_level(JsonLexContext *lex)
 {
-	lex->lex_level += 1;
-
-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
+	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
 	{
-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
-		lex->pstack->prediction =
-			repalloc(lex->pstack->prediction,
-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
-		if (lex->pstack->fnames)
-			lex->pstack->fnames =
-				repalloc(lex->pstack->fnames,
-						 lex->pstack->stack_size * sizeof(char *));
-		if (lex->pstack->fnull)
-			lex->pstack->fnull =
-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
+		size_t		new_stack_size;
+		char	   *new_prediction;
+		char	  **new_fnames;
+		bool	   *new_fnull;
+
+		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
+
+		new_prediction = REALLOC(lex->pstack->prediction,
+								 new_stack_size * JS_MAX_PROD_LEN);
+		new_fnames = REALLOC(lex->pstack->fnames,
+							 new_stack_size * sizeof(char *));
+		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
+
+#ifdef FRONTEND
+		if (!new_prediction || !new_fnames || !new_fnull)
+			return false;
+#endif
+
+		lex->pstack->stack_size = new_stack_size;
+		lex->pstack->prediction = new_prediction;
+		lex->pstack->fnames = new_fnames;
+		lex->pstack->fnull = new_fnull;
 	}
+
+	lex->lex_level += 1;
+	return true;
 }
 
 static inline void
@@ -482,24 +616,31 @@ get_fnull(JsonLexContext *lex)
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
+	if (!lex)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		destroyStrVal(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		destroyStrVal(lex->errormsg);
 
 	if (lex->incremental)
 	{
-		pfree(lex->inc_state->partial_token.data);
-		pfree(lex->inc_state);
-		pfree(lex->pstack->prediction);
-		pfree(lex->pstack->fnames);
-		pfree(lex->pstack->fnull);
-		pfree(lex->pstack);
+		termStrVal(&lex->inc_state->partial_token);
+		FREE(lex->inc_state);
+		FREE(lex->pstack->prediction);
+		FREE(lex->pstack->fnames);
+		FREE(lex->pstack->fnull);
+		FREE(lex->pstack);
 	}
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -522,22 +663,13 @@ JsonParseErrorType
 pg_parse_json(JsonLexContext *lex, JsonSemAction *sem)
 {
 #ifdef FORCE_JSON_PSTACK
-
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-
 	/*
 	 * We don't need partial token processing, there is only one chunk. But we
 	 * still need to init the partial token string so that freeJsonLexContext
-	 * works.
+	 * works, so perform the full incremental initialization.
 	 */
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+	if (!allocate_incremental_state(lex))
+		return JSON_OUT_OF_MEMORY;
 
 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
 
@@ -597,7 +729,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -737,7 +869,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_OEND:
@@ -766,7 +900,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_AEND:
@@ -793,9 +929,11 @@ pg_parse_json_incremental(JsonLexContext *lex,
 						json_ofield_action ostart = sem->object_field_start;
 						json_ofield_action oend = sem->object_field_end;
 
-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
+						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
 						{
-							fname = pstrdup(lex->strval->data);
+							fname = STRDUP(lex->strval->data);
+							if (fname == NULL)
+								return JSON_OUT_OF_MEMORY;
 						}
 						set_fname(lex, fname);
 					}
@@ -883,14 +1021,21 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							 */
 							if (tok == JSON_TOKEN_STRING)
 							{
-								if (lex->strval != NULL)
-									pstack->scalar_val = pstrdup(lex->strval->data);
+								if (lex->parse_strval)
+								{
+									pstack->scalar_val = STRDUP(lex->strval->data);
+									if (pstack->scalar_val == NULL)
+										return JSON_OUT_OF_MEMORY;
+								}
 							}
 							else
 							{
 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
 
-								pstack->scalar_val = palloc(tlen + 1);
+								pstack->scalar_val = ALLOC(tlen + 1);
+								if (pstack->scalar_val == NULL)
+									return JSON_OUT_OF_MEMORY;
+
 								memcpy(pstack->scalar_val, lex->token_start, tlen);
 								pstack->scalar_val[tlen] = '\0';
 							}
@@ -1025,14 +1170,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -1066,8 +1218,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -1123,6 +1279,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -1312,15 +1473,24 @@ json_lex(JsonLexContext *lex)
 	const char *const end = lex->input + lex->input_length;
 	JsonParseErrorType result;
 
-	if (lex->incremental && lex->inc_state->partial_completed)
+	if (lex->incremental)
 	{
-		/*
-		 * We just lexed a completed partial token on the last call, so reset
-		 * everything
-		 */
-		resetStringInfo(&(lex->inc_state->partial_token));
-		lex->token_terminator = lex->input;
-		lex->inc_state->partial_completed = false;
+		if (lex->inc_state->partial_completed)
+		{
+			/*
+			 * We just lexed a completed partial token on the last call, so
+			 * reset everything
+			 */
+			resetStrVal(&(lex->inc_state->partial_token));
+			lex->token_terminator = lex->input;
+			lex->inc_state->partial_completed = false;
+		}
+
+#ifdef FRONTEND
+		/* Make sure our partial token buffer is valid before using it below. */
+		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
+			return JSON_OUT_OF_MEMORY;
+#endif
 	}
 
 	s = lex->token_terminator;
@@ -1331,7 +1501,7 @@ json_lex(JsonLexContext *lex)
 		 * We have a partial token. Extend it and if completed lex it by a
 		 * recursive call
 		 */
-		StringInfo	ptok = &(lex->inc_state->partial_token);
+		StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
 		JsonLexContext dummy_lex;
@@ -1358,7 +1528,7 @@ json_lex(JsonLexContext *lex)
 			{
 				char		c = lex->input[i];
 
-				appendStringInfoCharMacro(ptok, c);
+				appendStrValCharMacro(ptok, c);
 				added++;
 				if (c == '"' && escapes % 2 == 0)
 				{
@@ -1403,7 +1573,7 @@ json_lex(JsonLexContext *lex)
 						case '8':
 						case '9':
 							{
-								appendStringInfoCharMacro(ptok, cc);
+								appendStrValCharMacro(ptok, cc);
 								added++;
 							}
 							break;
@@ -1424,7 +1594,7 @@ json_lex(JsonLexContext *lex)
 
 				if (JSON_ALPHANUMERIC_CHAR(cc))
 				{
-					appendStringInfoCharMacro(ptok, cc);
+					appendStrValCharMacro(ptok, cc);
 					added++;
 				}
 				else
@@ -1467,6 +1637,7 @@ json_lex(JsonLexContext *lex)
 		dummy_lex.input_length = ptok->len;
 		dummy_lex.input_encoding = lex->input_encoding;
 		dummy_lex.incremental = false;
+		dummy_lex.parse_strval = lex->parse_strval;
 		dummy_lex.strval = lex->strval;
 
 		partial_result = json_lex(&dummy_lex);
@@ -1622,8 +1793,8 @@ json_lex(JsonLexContext *lex)
 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
 						p == lex->input + lex->input_length)
 					{
-						appendBinaryStringInfo(
-											   &(lex->inc_state->partial_token), s, end - s);
+						appendBinaryStrVal(
+										   &(lex->inc_state->partial_token), s, end - s);
 						return JSON_INCOMPLETE;
 					}
 
@@ -1680,8 +1851,8 @@ json_lex_string(JsonLexContext *lex)
 	do { \
 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
 		{ \
-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
-								   lex->token_start, end - lex->token_start); \
+			appendBinaryStrVal(&lex->inc_state->partial_token, \
+							   lex->token_start, end - lex->token_start); \
 			return JSON_INCOMPLETE; \
 		} \
 		lex->token_terminator = s; \
@@ -1694,8 +1865,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -1732,7 +1910,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -1789,19 +1967,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -1811,22 +1989,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -1861,7 +2039,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -1885,8 +2063,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -1902,6 +2080,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -2019,8 +2202,8 @@ json_lex_number(JsonLexContext *lex, const char *s,
 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
 		len >= lex->input_length)
 	{
-		appendBinaryStringInfo(&lex->inc_state->partial_token,
-							   lex->token_start, s - lex->token_start);
+		appendBinaryStrVal(&lex->inc_state->partial_token,
+						   lex->token_start, s - lex->token_start);
 		if (num_err != NULL)
 			*num_err = error;
 
@@ -2096,19 +2279,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		resetStrVal(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = createStrVal();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define json_token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	appendStrVal((lex)->errormsg, _(format), \
+				 (int) ((lex)->token_terminator - (lex)->token_start), \
+				 (lex)->token_start);
 
 	switch (error)
 	{
@@ -2127,9 +2316,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -2160,6 +2349,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			json_token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -2191,15 +2383,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef json_token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 "unexpected json parse error type: %d",
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d",
+					 (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index 7eb604c608..ea5f19e89e 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -125,13 +125,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -140,6 +145,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -157,7 +163,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -170,7 +175,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 612e120b17..0da6272336 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -139,7 +139,8 @@ json_parse_manifest_incremental_init(JsonManifestParseContext *context)
 	parse->state = JM_EXPECT_TOPLEVEL_START;
 	parse->saw_version_field = false;
 
-	makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true);
+	if (!makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true))
+		context->error_cb(context, "out of memory");
 
 	incstate->sem.semstate = parse;
 	incstate->sem.object_start = json_manifest_object_start;
@@ -240,6 +241,8 @@ json_parse_manifest(JsonManifestParseContext *context, const char *buffer,
 
 	/* Create a JSON lexing context. */
 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
+	if (!lex)
+		json_manifest_parse_failure(context, "out of memory");
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 71a491d72d..d03a61fcd6 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -51,6 +49,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -64,6 +63,18 @@ typedef enum JsonParseErrorType
 typedef struct JsonParserStack JsonParserStack;
 typedef struct JsonIncrementalState JsonIncrementalState;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
+
 /*
  * All the fields in this structure should be treated as read-only.
  *
@@ -102,8 +113,9 @@ typedef struct JsonLexContext
 	const char *line_start;		/* where that line starts within input */
 	JsonParserStack *pstack;
 	JsonIncrementalState *inc_state;
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/test/modules/test_json_parser/Makefile b/src/test/modules/test_json_parser/Makefile
index 2dc7175b7c..f410e04cf1 100644
--- a/src/test/modules/test_json_parser/Makefile
+++ b/src/test/modules/test_json_parser/Makefile
@@ -19,6 +19,9 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
 
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
+
 all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
 
 %.o: $(top_srcdir)/$(subdir)/%.c
diff --git a/src/test/modules/test_json_parser/meson.build b/src/test/modules/test_json_parser/meson.build
index b224f3e07e..8136070233 100644
--- a/src/test/modules/test_json_parser/meson.build
+++ b/src/test/modules/test_json_parser/meson.build
@@ -13,7 +13,7 @@ endif
 
 test_json_parser_incremental = executable('test_json_parser_incremental',
   test_json_parser_incremental_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
@@ -32,7 +32,7 @@ endif
 
 test_json_parser_perf = executable('test_json_parser_perf',
   test_json_parser_perf_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
-- 
2.34.1

Attachment: v24-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 92b257643e1e3a5124ce405af641afc4468a6d42 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v24 4/8] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt
  (printed to standard error) when using the builtin device
  authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2042 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 +++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 25 files changed, 3394 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 76f06bd8fd..ad090ec121 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -861,6 +862,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1571,6 +1573,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8488,6 +8491,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13040,6 +13089,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14065,6 +14198,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index ab2d51c21c..cdc6bea660 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,6 +927,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1423,6 +1443,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1614,6 +1639,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 068ee60771..6a20247ef9 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2335,6 +2335,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9938,6 +9975,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index 5387bb6d5f..2ec2ebb6bf 100644
--- a/meson.build
+++ b/meson.build
@@ -840,6 +840,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -2974,6 +3003,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3645,6 +3675,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index 246cecf382..3ffe1f52c2 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -124,6 +124,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 83b91fe916..6f6174811d 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index f8d3e3b6b8..1ee7f3731d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -246,6 +246,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -726,6 +729,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index b36a765764..9a290782f2 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -61,6 +61,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -79,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -108,6 +116,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -115,7 +125,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..27f5be8f63
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2042 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
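+
+/*
+ * A typical call site looks like this (the option and variable shown here
+ * are illustrative; see start_discovery() for a real use):
+ *
+ *     CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+ *
+ * On failure, the macro appends a formatted message to actx->errbuf and then
+ * executes the supplied failure action, in this case an early return.
+ */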
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* arrays of strings */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
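+ * For example, "application/json" and "application/json; charset=utf-8" both
+ * match the expected type "application/json", while "application/jsonx" does
+ * not.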
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length limited comparison and not compare the whole
+	 * string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			/* HTTP optional whitespace allows only spaces and htabs. */
+			case ' ':
+			case '\t':
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.)
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return 1;				/* TODO this slows down the tests
+								 * considerably... */
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle. Rather than continually adding and removing the timer,
+ * we keep it in the set at all times and just disarm it when it's not needed.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 *
+		 * TODO: maybe just signal drive_request() to immediately call back in
+		 * this case?
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld: %m", timeout);
+		return -1;
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
+	CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
+	 */
+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * Sanity check.
+	 *
+	 * TODO: even though this is nominally an asynchronous process, there are
+	 * apparently operations that can synchronously fail by this point, such
+	 * as connections to closed local ports. Maybe we need to let this case
+	 * fall through to drive_request() instead, or else perform a
+	 * curl_multi_info_read immediately.
+	 */
+	if (running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	int			running;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	err = curl_multi_socket_all(actx->curlm, &running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return PGRES_POLLING_FAILED;
+	}
+
+	if (running)
+	{
+		/* We'll come back again. */
+		return PGRES_POLLING_READING;
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it responds with the nonces
+ * we need to poll the request status later, which we grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet; finish_token_request() will pull either the
+ * token or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
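The polling loop in pg_fe_run_oauth_flow() below treats exactly two token-endpoint errors as retryable, per RFC 8628, Sec. 3.5. That test can be sketched on its own like so (hypothetical helper, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Per RFC 8628, Sec. 3.5, authorization_pending and slow_down are the only
 * token-endpoint errors after which the client should keep polling; anything
 * else (e.g. access_denied, expired_token) ends the flow.
 */
static bool
is_retryable_oauth_error(const char *error)
{
	if (error == NULL)
		return false;			/* malformed response: treat as fatal */
	return strcmp(error, "authorization_pending") == 0
		|| strcmp(error, "slow_down") == 0;
}
```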
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	struct token tok = {0};
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	/* By default, the multiplexer is the altsock. Reassign as desired. */
+	*altsock = actx->mux;
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				PostgresPollingStatusType status;
+
+				status = drive_request(actx);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+				else if (status != PGRES_POLLING_OK)
+				{
+					/* not done yet */
+					free_token(&tok);
+					return status;
+				}
+			}
+			/* FALLTHROUGH */
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			/* TODO check that the timer has expired */
+			break;
+	}
+
+	switch (actx->step)
+	{
+		case OAUTH_STEP_INIT:
+			actx->errctx = "failed to fetch OpenID discovery document";
+			if (!start_discovery(actx, conn->oauth_discovery_uri))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DISCOVERY;
+			break;
+
+		case OAUTH_STEP_DISCOVERY:
+			if (!finish_discovery(actx))
+				goto error_return;
+
+			/* TODO: check issuer */
+
+			actx->errctx = "cannot run OAuth device authorization";
+			if (!check_for_device_flow(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain device authorization";
+			if (!start_device_authz(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+			break;
+
+		case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			if (!finish_device_authz(actx))
+				goto error_return;
+
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+
+		case OAUTH_STEP_TOKEN_REQUEST:
+			{
+				const struct token_error *err;
+#ifdef HAVE_SYS_EPOLL_H
+				struct itimerspec spec = {0};
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				struct kevent ev = {0};
+#endif
+
+				if (!finish_token_request(actx, &tok))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					int			res;
+					PQpromptOAuthDevice prompt = {
+						.verification_uri = actx->authz.verification_uri,
+						.user_code = actx->authz.user_code,
+						/* TODO: optional fields */
+					};
+
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
+										 &prompt);
+
+					if (!res)
+					{
+						fprintf(stderr, "Visit %s and enter the code: %s\n",
+								prompt.verification_uri, prompt.user_code);
+					}
+					else if (res < 0)
+					{
+						actx_error(actx, "device prompt failed");
+						goto error_return;
+					}
+
+					actx->user_prompted = true;
+				}
+
+				if (tok.access_token)
+				{
+					/* Construct our Bearer token. */
+					resetPQExpBuffer(&actx->work_data);
+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
+									  tok.access_token);
+
+					if (PQExpBufferDataBroken(actx->work_data))
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+
+					state->token = strdup(actx->work_data.data);
+					if (!state->token)
+					{
+						actx_error(actx, "out of memory");
+						goto error_return;
+					}
+					break;
+				}
+
+				/*
+				 * authorization_pending and slow_down are the only acceptable
+				 * errors; anything else and we bail.
+				 */
+				err = &tok.err;
+				if (!err->error || (strcmp(err->error, "authorization_pending")
+									&& strcmp(err->error, "slow_down")))
+				{
+					/* TODO handle !err->error */
+					if (err->error_description)
+						appendPQExpBuffer(&actx->errbuf, "%s ",
+										  err->error_description);
+
+					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+
+					goto error_return;
+				}
+
+				/*
+				 * A slow_down error requires us to permanently increase our
+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
+				 */
+				if (strcmp(err->error, "slow_down") == 0)
+				{
+					int			prev_interval = actx->authz.interval;
+
+					actx->authz.interval += 5;
+					if (actx->authz.interval < prev_interval)
+					{
+						actx_error(actx, "slow_down interval overflow");
+						goto error_return;
+					}
+				}
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				Assert(actx->authz.interval > 0);
+#ifdef HAVE_SYS_EPOLL_H
+				spec.it_value.tv_sec = actx->authz.interval;
+
+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+				{
+					actx_error(actx, "failed to set timerfd: %m");
+					goto error_return;
+				}
+
+				*altsock = actx->timerfd;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+				/* XXX: I guess this wants to be hidden in a routine */
+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
+					   actx->authz.interval * 1000, 0);
+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+				{
+					actx_error(actx, "failed to set kqueue timer: %m");
+					goto error_return;
+				}
+				/* XXX: why did we change the altsock in the epoll version? */
+#endif
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				break;
+			}
+
+		case OAUTH_STEP_WAIT_INTERVAL:
+			actx->errctx = "failed to obtain access token";
+			if (!start_token_request(actx, conn))
+				goto error_return;
+
+			actx->step = OAUTH_STEP_TOKEN_REQUEST;
+			break;
+	}
+
+	free_token(&tok);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	free_token(&tok);
+	return PGRES_POLLING_FAILED;
+}
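The slow_down bookkeeping above (permanently add five seconds to the polling interval, guarding against overflow, per RFC 8628, Sec. 3.5) can be expressed as a standalone helper; this is a sketch for illustration only, not part of the patch:

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/*
 * Compute the next polling interval given the token endpoint's error code.
 * A slow_down error adds five seconds for this and all subsequent requests;
 * any other (retryable) error leaves the interval unchanged. Returns -1 on
 * overflow, in which case the caller should abort the flow.
 */
static int
next_poll_interval(const char *error, int interval)
{
	if (error && strcmp(error, "slow_down") == 0)
	{
		if (interval > INT_MAX - 5)
			return -1;
		interval += 5;
	}
	return interval;
}
```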
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f943a31cc0
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
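For reference, the initial-response format from RFC 7628, Sec. 3.1 that client_initial_response() produces can be reproduced in a standalone sketch (make_initial_response() is a made-up name; libpq's version uses PQExpBuffer rather than snprintf):

```c
#include <stdio.h>
#include <stdlib.h>

#define KVSEP "\x01"

/*
 * Format an OAUTHBEARER initial client response (RFC 7628, Sec. 3.1): the
 * "n,," gs2 header (no channel binding), an auth key/value pair carrying the
 * full "Bearer <token>" credential, and the trailing separators. Returns a
 * malloc'd string, or NULL on allocation failure.
 */
static char *
make_initial_response(const char *bearer_cred)
{
	static const char fmt[] = "n,," KVSEP "auth=%s" KVSEP KVSEP;
	int			len = snprintf(NULL, 0, fmt, bearer_cred);
	char	   *resp;

	if (len < 0 || (resp = malloc(len + 1)) == NULL)
		return NULL;
	snprintf(resp, len + 1, fmt, bearer_cred);
	return resp;
}
```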
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NUL byte, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
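The derivation above follows OpenID Connect Discovery, which appends the well-known suffix to the issuer identifier (RFC 8414 instead inserts it between host and path; the patch uses the appending form). A standalone sketch of the append behavior, with a hypothetical helper name:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Append the OIDC Discovery suffix to an issuer identifier. Returns a
 * malloc'd URI, or NULL on allocation failure.
 */
static char *
well_known_uri(const char *issuer)
{
	static const char suffix[] = "/.well-known/openid-configuration";
	size_t		len = strlen(issuer) + sizeof(suffix);	/* sizeof includes NUL */
	char	   *uri = malloc(len);

	if (uri)
		snprintf(uri, len, "%s%s", issuer, suffix);
	return uri;
}
```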
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 3b25d8afda..d02424e11b 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -956,12 +998,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1119,7 +1167,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1136,7 +1184,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1452,3 +1501,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index e003279fb6..82ef38ea0a 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -364,6 +364,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -627,6 +644,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2644,6 +2662,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3680,6 +3699,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3835,6 +3855,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3868,7 +3898,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3901,6 +3941,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4582,6 +4657,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4699,6 +4775,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7181,6 +7262,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f235bfbb41..aa1fee38c8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1041,10 +1041,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1061,7 +1064,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 87a6f3df07..25f216afcf 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -82,6 +84,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -163,6 +167,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -695,10 +706,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f36d76bf3f..c9d9213cf3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -357,6 +357,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -420,6 +422,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -491,6 +502,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 5618050b30..830da57994 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -233,6 +233,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9320e4d808..0fa4750906 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -367,6 +368,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1712,6 +1715,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1776,6 +1780,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1936,11 +1941,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3439,6 +3447,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v24-0002-Remove-fe_memutils-from-libpgcommon_shlib.patch (application/octet-stream)
From fdd89bdee0323444d28fbb9873ec3cc9f8b959e7 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 1 Jul 2024 14:18:33 -0700
Subject: [PATCH v24 2/8] Remove fe_memutils from libpgcommon_shlib

libpq appears to have no need for this, and the exit() references cause
our libpq-refs-stamp test to fail if the linker doesn't strip out the
unused code.
---
 src/common/Makefile    | 2 +-
 src/common/meson.build | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/common/Makefile b/src/common/Makefile
index 3d83299432..5712078dae 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -105,11 +105,11 @@ endif
 # libraries such as libpq to report errors directly.
 OBJS_FRONTEND_SHLIB = \
 	$(OBJS_COMMON) \
-	fe_memutils.o \
 	restricted_token.o \
 	sprompt.o
 OBJS_FRONTEND = \
 	$(OBJS_FRONTEND_SHLIB) \
+	fe_memutils.o \
 	logging.o
 
 # foo.o, foo_shlib.o, and foo_srv.o are all built from foo.c
diff --git a/src/common/meson.build b/src/common/meson.build
index de68e408fa..7eb604c608 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -105,13 +105,13 @@ common_sources_cflags = {
 
 common_sources_frontend_shlib = common_sources
 common_sources_frontend_shlib += files(
-  'fe_memutils.c',
   'restricted_token.c',
   'sprompt.c',
 )
 
 common_sources_frontend_static = common_sources_frontend_shlib
 common_sources_frontend_static += files(
+  'fe_memutils.c',
   'logging.c',
 )
 
-- 
2.34.1

Attachment: v24-0001-Revert-ECPG-s-use-of-pnstrdup.patch (application/octet-stream)
From 9fc1df75094dbdd10b41bfd995fe0e9149bfad55 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Jul 2024 12:26:04 -0700
Subject: [PATCH v24 1/8] Revert ECPG's use of pnstrdup()

Commit 0b9466fce added a dependency on fe_memutils' pnstrdup() inside
informix.c. This 1) makes it hard to remove fe_memutils from
libpgcommon_shlib, and 2) adds an exit() path where it perhaps should
not be. (See the !str check after the call to pnstrdup, which looks like
it should not be possible.)

Revert that part of the patch for now, pending further discussion on the
thread.
---
 src/interfaces/ecpg/compatlib/informix.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/src/interfaces/ecpg/compatlib/informix.c b/src/interfaces/ecpg/compatlib/informix.c
index 8ea89e640a..65a0b2e46c 100644
--- a/src/interfaces/ecpg/compatlib/informix.c
+++ b/src/interfaces/ecpg/compatlib/informix.c
@@ -175,6 +175,25 @@ deccopy(decimal *src, decimal *target)
 	memcpy(target, src, sizeof(decimal));
 }
 
+static char *
+ecpg_strndup(const char *str, size_t len)
+{
+	size_t		real_len = strlen(str);
+	int			use_len = (int) ((real_len > len) ? len : real_len);
+
+	char	   *new = malloc(use_len + 1);
+
+	if (new)
+	{
+		memcpy(new, str, use_len);
+		new[use_len] = '\0';
+	}
+	else
+		errno = ENOMEM;
+
+	return new;
+}
+
 int
 deccvasc(const char *cp, int len, decimal *np)
 {
@@ -186,8 +205,8 @@ deccvasc(const char *cp, int len, decimal *np)
 	if (risnull(CSTRINGTYPE, cp))
 		return 0;
 
-	str = pnstrdup(cp, len);	/* decimal_in always converts the complete
-								 * string */
+	str = ecpg_strndup(cp, len);	/* decimal_in always converts the complete
+									 * string */
 	if (!str)
 		ret = ECPG_INFORMIX_NUM_UNDERFLOW;
 	else
-- 
2.34.1

Attachment: v24-0006-Review-comments.patch (application/octet-stream)
From d163d2ca0a7581fcde16c93e1b87143d956e3e4f Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v24 6/8] Review comments

Fixes and tidy-ups following a review of v21. A few of the
items, in no particular order:

* Implement a version check for libcurl in autoconf, the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/backend/libpq/auth-oauth.c            |  22 ++--
 src/interfaces/libpq/fe-auth-oauth-curl.c | 117 ++++++++++++++--------
 src/interfaces/libpq/fe-auth-oauth.c      |   7 +-
 3 files changed, 92 insertions(+), 54 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 2a0d74a079..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index c9866c222a..944f450fec 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -207,6 +209,8 @@ struct async_ctx
 	struct device_authz authz;
 
 	bool		user_prompted;	/* have we already sent the authz prompt? */
+
+	int			running;
 };
 
 /*
@@ -681,7 +685,11 @@ parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	ctx.errbuf = &actx->errbuf;
 	ctx.fields = fields;
@@ -1225,7 +1233,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1247,9 +1260,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1266,7 +1289,6 @@ static bool
 start_request(struct async_ctx *actx)
 {
 	CURLMcode	err;
-	int			running;
 
 	resetPQExpBuffer(&actx->work_data);
 	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
@@ -1280,7 +1302,7 @@ start_request(struct async_ctx *actx)
 		return false;
 	}
 
-	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
 	if (err)
 	{
 		actx_error(actx, "asynchronous HTTP request failed: %s",
@@ -1289,19 +1311,11 @@ start_request(struct async_ctx *actx)
 	}
 
 	/*
-	 * Sanity check.
-	 *
-	 * TODO: even though this is nominally an asynchronous process, there are
-	 * apparently operations that can synchronously fail by this point, such
-	 * as connections to closed local ports. Maybe we need to let this case
-	 * fall through to drive_request instead, or else perform a
-	 * curl_multi_info_read immediately.
+	 * Even though this is nominally an asynchronous process, some
+	 * operations can fail synchronously by this point, such as connections
+	 * to closed local ports. Fall through and let the next state that
+	 * consumes actx perform the sanity check.
 	 */
-	if (running != 1)
-	{
-		actx_error(actx, "failed to queue HTTP request");
-		return false;
-	}
 
 	return true;
 }
@@ -1314,12 +1328,18 @@ static PostgresPollingStatusType
 drive_request(struct async_ctx *actx)
 {
 	CURLMcode	err;
-	int			running;
 	CURLMsg    *msg;
 	int			msgs_left;
 	bool		done;
 
-	err = curl_multi_socket_all(actx->curlm, &running);
+	/* Sanity check the previous operation */
+	if (actx->running != 1)
+	{
+		actx_error(actx, "failed to queue HTTP request");
+		return PGRES_POLLING_FAILED;
+	}
+
+	err = curl_multi_socket_all(actx->curlm, &actx->running);
 	if (err)
 	{
 		actx_error(actx, "asynchronous HTTP request failed: %s",
@@ -1327,7 +1347,7 @@ drive_request(struct async_ctx *actx)
 		return PGRES_POLLING_FAILED;
 	}
 
-	if (running)
+	if (actx->running)
 	{
 		/* We'll come back again. */
 		return PGRES_POLLING_READING;
@@ -1541,7 +1561,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1683,32 +1708,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * responses, which would violate the specification. For now we stick
+	 * to the specification, but we might have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
 
@@ -1926,16 +1953,20 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				 * errors; anything else and we bail.
 				 */
 				err = &tok.err;
-				if (!err->error || (strcmp(err->error, "authorization_pending")
-									&& strcmp(err->error, "slow_down")))
+				if (!err->error)
+				{
+					actx_error(actx, "unknown error");
+					goto error_return;
+				}
+
+				if (strcmp(err->error, "authorization_pending") != 0 &&
+					strcmp(err->error, "slow_down") != 0)
 				{
-					/* TODO handle !err->error */
 					if (err->error_description)
 						appendPQExpBuffer(&actx->errbuf, "%s ",
 										  err->error_description);
 
 					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
-
 					goto error_return;
 				}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f943a31cc0..61de9ac451 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -247,7 +247,12 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory\n"));
+		return false;
+	}
 
 	initPQExpBuffer(&ctx.errbuf);
 	sem.semstate = &ctx;
-- 
2.34.1

v24-0005-backend-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v24-0005-backend-add-OAUTHBEARER-SASL-mechanism.patchDownload
From 16994b449da1a1d1e8f93ea9f10dda31a87c8b37 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v24 5/8] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, in combination with the HBA option
      trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
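Putting the options together, a pg_hba.conf entry using this method might
look like the following sketch (the issuer URL and scope are placeholders,
not recommendations):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samehost  oauth   issuer="https://example.org" scope="openid email" trust_validator_authz=1
```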

Several TODOs:
- port to platforms other than "modern Linux/BSD"
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |   9 +
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/common/Makefile                           |   2 +-
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     |   2 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   | 173 +++++
 .../modules/oauth_validator/t/oauth_server.py | 260 +++++++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  14 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 32 files changed, 1522 insertions(+), 40 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 33646faead..95f131baa9 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -163,7 +163,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -223,6 +224,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -235,6 +237,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -310,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -676,8 +681,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index f1eb3b279e..9aab8eb6cd 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2295,6 +2311,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..3c7884baf9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,9 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+
+ <para>
+  TODO
+ </para>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ec9f90e283..bfb73991e7 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -263,6 +263,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..2a0d74a079
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is missing or too short.")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 4161959914..486a34e719 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 18271def2e..aabe0b0e68 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 630ed0f162..9b362897f0 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -48,6 +48,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4707,6 +4708,17 @@ struct config_string ConfigureNamesString[] =
 		check_synchronized_standby_slots, assign_synchronized_standby_slots, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/common/Makefile b/src/common/Makefile
index f1da2ed13d..beb9830030 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 27f5be8f63..c9866c222a 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -143,7 +143,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..63194afb18
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,173 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..b734b83a64
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,260 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "content_type" in self._test_params:
+            return self._test_params["content_type"]
+
+        return "application/json"
+
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        if self._should_modify() and "interval" in self._test_params:
+            return self._test_params["interval"]
+
+        return 0
+
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "retry_code" in self._test_params:
+            return self._test_params["retry_code"]
+
+        return "authorization_pending"
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type())
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            "verification_uri": uri,
+            "expires_in": 5,
+        }
+
+        interval = self._interval()
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code()}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
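The token() handler above enforces RFC 8628's polling contract: the client must wait at least the advertised interval between requests, keep retrying on authorization_pending, and back off on slow_down. On the client side that contract reduces to a loop like the following sketch (the names poll_for_token and request_token are hypothetical; this is not libpq's actual implementation):

```python
import time


def poll_for_token(request_token, interval, max_attempts=10):
    """Poll a device-flow token endpoint per RFC 8628, Section 3.5.

    request_token() is assumed to return the decoded JSON body; on an
    HTTP 400 the body still carries an "error" member, which drives the
    retry decision (mirroring the mock server's token() handler).
    """
    for _ in range(max_attempts):
        resp = request_token()
        if "access_token" in resp:
            return resp

        error = resp.get("error")
        if error == "authorization_pending":
            time.sleep(interval)  # honor the advertised interval
        elif error == "slow_down":
            interval += 5  # RFC 8628 requires adding 5 seconds
            time.sleep(interval)
        else:
            raise RuntimeError(f"token endpoint failed: {error}")

    raise TimeoutError("device authorization timed out")
```

The "retries" test parameter above exercises exactly this loop, by returning a configurable number of pending responses before the token.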
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..7b4dc9c494
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
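The token string handed to validate_token() arrives inside the OAUTHBEARER SASL exchange, which RFC 7628 frames as a GS2 header plus ^A-delimited key/value pairs. For orientation, a rough Python sketch of that framing (illustrative only; the function names are invented and this is not the patch's actual parser):

```python
def build_oauthbearer_initial(token, authzid=""):
    """Assemble an OAUTHBEARER initial client response (RFC 7628, 3.1)."""
    gs2 = "n," + ("a=" + authzid if authzid else "") + ","
    kvpairs = "auth=Bearer " + token + "\x01"
    return (gs2 + "\x01" + kvpairs + "\x01").encode("ascii")


def extract_bearer_token(msg):
    """Pull the bearer token back out of the message -- roughly what the
    server must do before handing the token to a validator module."""
    gs2, _, rest = msg.decode("ascii").partition("\x01")
    for kv in rest.split("\x01"):
        if kv.startswith("auth="):
            scheme, _, tok = kv[len("auth="):].partition(" ")
            if scheme.lower() != "bearer":
                raise ValueError("unexpected auth scheme: " + scheme)
            return tok
    raise ValueError("no auth kvpair in OAUTHBEARER message")
```

The validator itself never sees this framing, only the extracted token, which is why the test module above can log it verbatim.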
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 0135c5a795..f14839f4c5 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2388,6 +2388,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2431,7 +2436,14 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..abdff5a3c3
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 0fa4750906..950b0eb191 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1716,6 +1716,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3056,6 +3057,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3648,6 +3651,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

Attachment: v24-0007-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 9117bf8be2b694e373da1a5d6deffd3986ea2e2f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v24 7/8] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  137 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1797 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 +++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5509 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 95f131baa9..cb3c804834 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -317,6 +317,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bullseye - Autoconf
@@ -371,6 +372,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -381,7 +384,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.32-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 2ec2ebb6bf..bab4509a2e 100644
--- a/meson.build
+++ b/meson.build
@@ -3316,6 +3316,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3477,6 +3480,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..94f3620af3
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,137 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            self._pump_async(conn)
+            conn.close()
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..667b643938
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1797 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the value may itself contain "="
+    assert key == b"auth"
+
+    return value
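(For reviewers unfamiliar with RFC 7628: the initial client response that get_auth_value() picks apart is a GS2 header followed by \x01-delimited key/value pairs, terminated by an empty pair. A minimal sketch with a made-up token:)

```python
# Shape of an OAUTHBEARER initial client response (RFC 7628, Section 3.1).
# "t0ken" is a placeholder, not a real bearer token.
token = "t0ken"
initial = b"n,,\x01auth=Bearer " + token.encode() + b"\x01\x01"

kvpairs = initial.split(b"\x01")
assert kvpairs == [b"n,,", b"auth=Bearer " + token.encode(), b"", b""]
```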
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OpenID provider thread did not stop within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
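(The two-step declaration above — an empty Structure subclass followed by a separate _fields_ assignment — is the standard ctypes idiom for a struct whose fields reference the struct itself, since POINTER(PQOAuthBearerRequest) can't be evaluated inside the class body. A toy illustration with a hypothetical Node type:)

```python
import ctypes


# Declare the incomplete type first, then fill in _fields_ so the
# self-referential POINTER(Node) has something to point at.
class Node(ctypes.Structure):
    pass


Node._fields_ = [
    ("value", ctypes.c_int),
    ("next", ctypes.POINTER(Node)),
]

head = Node(value=1)
head.next = ctypes.pointer(Node(value=2))
assert head.next.contents.value == 2
```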
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response until the retry count is reached.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response until the retry count is reached.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
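(To make the intent concrete: alt_patterns() just wraps each alternative in its own group and joins them with "|", so re alternation tries them left to right. A self-contained restatement with example matches:)

```python
import re


def alt_patterns(*patterns):
    """Equivalent one-liner form of the helper above, for illustration."""
    return "|".join(f"({p})" for p in patterns)


pat = alt_patterns("foo", "ba.")
assert pat == "(foo)|(ba.)"
assert re.fullmatch(pat, "foo")
assert re.fullmatch(pat, "bar")
assert not re.fullmatch(pat, "quux")
```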
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the bad-JSON-schema tests below
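
A quick sketch of why a dedicated sentinel is used here rather than None: the tests need to tell a field that is absent apart from one explicitly set to JSON null, which decodes to None. (The `describe` helper below is hypothetical, purely for illustration.)

```python
Missing = object()  # unique sentinel; compares equal to no JSON value

def describe(resp, field):
    # Distinguish "absent" from "explicitly null" from "present".
    value = resp.get(field, Missing)
    if value is Missing:
        return "absent"
    if value is None:
        return "explicitly null"
    return "present"

assert describe({}, "interval") == "absent"
assert describe({"interval": None}, "interval") == "explicitly null"
assert describe({"interval": 0}, "interval") == "present"
```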
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to request creation
+    of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
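
The gating logic in the fixture above can be sketched on its own: PG_TEST_EXTRA is split on whitespace and the suite runs only if 'python' appears as one of the words. (The `python_tests_enabled` helper here is hypothetical, mirroring the fixture for illustration.)

```python
# Hypothetical re-creation of the PG_TEST_EXTRA gate in the fixture above.
def python_tests_enabled(environ):
    extra_tests = environ.get("PG_TEST_EXTRA", "").split()
    return "python" in extra_tests

assert python_tests_enabled({"PG_TEST_EXTRA": "ssl python kerberos"})
assert not python_tests_enabled({"PG_TEST_EXTRA": "ssl"})
assert not python_tests_enabled({})
```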
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
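As a sanity check on the packing above, the well-known request codes fall out directly. This is a standalone sketch; the (1234, 5679) pair is the reserved SSLRequest version used later in tls_handshake().

```python
def protocol(major, minor):
    # Same packing as pq3.protocol(): major in the high 16 bits, minor low.
    return (major << 16) | minor

v3 = protocol(3, 0)                # normal v3 startup: 0x00030000
sslrequest = protocol(1234, 5679)  # SSLRequest magic: 0x04D2162F

print(hex(v3), hex(sslrequest))
```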
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
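For reference, the wire form this adapter feeds into StringList is just alternating NUL-terminated keys and values, closed by an empty string. A hand-rolled equivalent, for illustration only (no construct dependency):

```python
def encode_startup_params(params):
    # Alternating NUL-terminated keys/values; a trailing empty string
    # (a lone NUL byte) terminates the list on the wire.
    out = b""
    for k, v in params.items():
        out += k.encode("utf-8") + b"\x00"
        out += v.encode("utf-8") + b"\x00"
    return out + b"\x00"

wire = encode_startup_params({"user": "alice", "database": "postgres"})
```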
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
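The layout the struct above describes can also be built by hand: the mechanism name, NUL-terminated, followed by a signed big-endian length and the initial response data. A minimal sketch (the OAUTHBEARER payload here is illustrative, not a real token exchange):

```python
import struct

def sasl_initial_response(mechanism, data):
    # NUL-terminated mechanism name, int32 length, then the response bytes.
    return mechanism + b"\x00" + struct.pack(">i", len(data)) + data

msg = sasl_initial_response(b"OAUTHBEARER", b"n,,\x01auth=Bearer tok\x01\x01")
```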
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
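Concretely, every v3 message serialized by the struct above is a one-byte type, an int32 length that counts itself plus the payload, and then the payload. A hand-built Query packet for comparison (stdlib only):

```python
import struct

def query_packet(sql):
    # 'Q', then int32 length (payload plus the 4 length bytes), then the
    # NUL-terminated query text.
    payload = sql.encode("utf-8") + b"\x00"
    return b"Q" + struct.pack(">I", len(payload) + 4) + payload

pkt = query_packet("SELECT 1")
```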
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
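A quick illustration of what the resulting table does, with the same construction inlined (standalone sketch):

```python
def hexdump_translation_map():
    # Unprintable ASCII plus every byte >= 128 becomes '.' in hexdump output.
    unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(unprintable, b"." * len(unprintable))

table = hexdump_translation_map()
sample = b"\x00\x01Hello\xff\n".translate(table)
```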
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw become the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request TLS via an SSLRequest packet (the reserved 1234,5679 version).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1" + "\n"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated
+    token's length in characters may be specified; if unset, a small
+    16-character token will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
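As a sanity check on the arithmetic above (a sketch, using the same sizes the tests parametrize over): base64url encodes 3 bytes into 4 characters, so requesting `size // 4 * 3` bytes of entropy yields exactly `size` characters with no padding whenever `size` is a multiple of 4.

```python
import secrets

for size in (16, 1024, 4096):
    # 3 bytes of entropy -> 4 base64url characters, no '=' padding needed.
    assert len(secrets.token_urlsafe(size // 4 * 3)) == size
```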
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. To exercise corner cases, the initial response's auth field
+    may instead be specified explicitly.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    Any settings it changes will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+
+            # Reload once, after all of the GUCs have been set.
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values, then reload once.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+        c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#105Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#104)
Re: [PoC] Federated Authn/z with OAUTHBEARER

I have some comments about the first three patches, that deal with
memory management.

v24-0001-Revert-ECPG-s-use-of-pnstrdup.patch

This looks right.

I suppose another approach would be to put a full replacement for
strndup() into src/port/. But since there is currently only one user,
and most other users should be using pnstrdup(), the presented approach
seems ok.

We should take the check for exit() calls from libpq and expand it to
cover the other libraries as well. Maybe there are other problems like
this?

v24-0002-Remove-fe_memutils-from-libpgcommon_shlib.patch

I don't quite understand how this problem can arise. The description says

"""
libpq appears to have no need for this, and the exit() references cause
our libpq-refs-stamp test to fail if the linker doesn't strip out the
unused code.
"""

But under what circumstances does "the linker doesn't strip out" happen?
If this happens accidentally, then we should have seen some buildfarm
failures or something?

Also, one could look further and notice that restricted_token.c and
sprompt.c both a) are not needed by libpq and b) can trigger exit()
calls. Then it's not clear why those are not affected.

v24-0003-common-jsonapi-support-libpq-as-a-client.patch

I'm reminded of thread [0]. I think there is quite a bit of confusion
about the pqexpbuffer vs. stringinfo APIs, and they are probably used
incorrectly quite a bit. There are now also programs that use both of
them! This patch now introduces another layer on top of them. I fear,
at the end, nobody is going to understand any of this anymore. Also,
changing all the programs to link in libpq for pqexpbuffer seems like
the opposite direction from what was suggested in [0].

I think we need to do some deeper thinking here about how we want the
memory management on the client side to work. Maybe we could just use
one API but have some flags or callbacks to control the out-of-memory
behavior.

[0]: /messages/by-id/16d0beac-a141-e5d3-60e9-323da75f49bf@eisentraut.org

#106Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#104)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thanks for working on this patchset, I'm looking over 0004 and 0005 but came
across a thing I wanted to bring up one thing sooner than waiting for the
review. In parse_device_authz we have this:

{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},

/*
* The following fields are technically REQUIRED, but we don't use
* them anywhere yet:
*
* - expires_in
*/

{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},

Together with a colleague we found that the Azure provider uses
"verification_url" rather than xxx_uri. Another discrepancy is that it uses a
string for the interval (i.e. "interval":"5"). One can of course argue that Azure is wrong and
should feel bad, but I fear that virtually all (major) providers will have
differences like this, so we will have to deal with it in an extensible fashion
(compile time, not runtime configurable).

I was toying with making the json_field name member an array, to allow
variations. That won't help with the field type differences though, so another
train of thought was to have some form of REQUIRED_XOR where fields can be tied
together. What do you think about something along these lines?

Another thing, shouldn't we really parse and interpret *all* REQUIRED fields
even if we don't use them, to ensure that the JSON is well-formed? If the JSON
we get is malformed in any way, it seems like the safe/conservative option to
error out.

--
Daniel Gustafsson

#107Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#105)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jul 29, 2024 at 5:02 AM Peter Eisentraut <peter@eisentraut.org> wrote:

We should take the check for exit() calls from libpq and expand it to
cover the other libraries as well. Maybe there are other problems like
this?

Seems reasonable, yeah.

But under what circumstances does "the linker doesn't strip out" happen?
If this happens accidentally, then we should have seen some buildfarm
failures or something?

On my machine, for example, I see differences with optimization
levels. Say you inadvertently call pfree() in a _shlib build, as I did
multiple times upthread. By itself, that shouldn't actually be a
problem (it eventually redirects to free()), so it should be legal to
call pfree(), and with -O2 the build succeeds. But with -Og, the
exit() check trips, and when I disassemble I see that pg_malloc() et
al. have infected the shared object. After all, we did tell the linker
to put that object file in, and we don't ask it to garbage-collect
sections.

Also, one could look further and notice that restricted_token.c and
sprompt.c both a) are not needed by libpq and b) can trigger exit()
calls. Then it's not clear why those are not affected.

I think it's easier for the linker to omit whole object files rather
than partial ones. If libpq doesn't use any of those APIs there's not
really a reason to trip over it.

(Maybe the _shlib variants should just contain the minimum objects
required to compile.)

I'm reminded of thread [0]. I think there is quite a bit of confusion
about the pqexpbuffer vs. stringinfo APIs, and they are probably used
incorrectly quite a bit. There are now also programs that use both of
them! This patch now introduces another layer on top of them. I fear,
at the end, nobody is going to understand any of this anymore.

"anymore"? :)

In all seriousness -- I agree that this isn't sustainable. At the
moment the worst pain (the new layer) is isolated to jsonapi.c, which
seems like an okay place to try something new, since there aren't that
many clients. But to be honest I'm not excited about deciding the Best
Way Forward based on a sample size of JSON.

Also,
changing all the programs to link in libpq for pqexpbuffer seems like
the opposite direction from what was suggested in [0].

(I don't really want to keep that new libpq dependency. We'd just have
to decide where PQExpBuffer is going to go if we're not okay with it.)

I think we need to do some deeper thinking here about how we want the
memory management on the client side to work. Maybe we could just use
one API but have some flags or callbacks to control the out-of-memory
behavior.

Any src/common code that needs to handle both in-band and out-of-band
failure modes will still have to decide whether it's going to 1)
duplicate code paths or 2) just act as if in-band failures can always
happen. I think that's probably essential complexity; an ideal API
might make it nicer to deal with but it can't abstract it away.

Thanks!
--Jacob

#108Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#106)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jul 29, 2024 at 1:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Together with a colleague we found that the Azure provider uses
"verification_url" rather than xxx_uri.

Yeah, I think that's originally a Google-ism. (As far as I can tell
they helped author the spec for this and then didn't follow it. :/ ) I
didn't recall Azure having used it back when I was testing against it,
though, so that's good to know.

Another discrepancy is that it uses a string for the
interval (ie: "interval":"5").

Oh, that's a new one. I don't remember needing to hack around that
either; maybe iddawc handled it silently?

One can of course argue that Azure is wrong and
should feel bad, but I fear that virtually all (major) providers will have
differences like this, so we will have to deal with it in an extensible fashion
(compile time, not runtime configurable).

Such is life... verification_url we will just have to deal with by
default, I think, since Google does/did it too. Not sure about
interval -- but do we want to make our distribution maintainers deal
with a compile-time setting for libpq, just to support various OAuth
flavors? To me it seems like we should just hold our noses and support
known (large) departures in the core.

I was toying with making the json_field name member an array, to allow
variations. That won't help with the field type differences though, so another
train of thought was to have some form of REQUIRED_XOR where fields can be tied
together. What do you think about something along these lines?

If I designed it right, just adding alternative spellings directly to
the fields list should work. (The "required" check is by struct
member, not name, so both spellings can point to the same
destination.) The alternative typing on the other hand might require
something like a new sentinel "type" that will accept both... I hadn't
expected that.

Another thing, shouldn't we really parse and interpret *all* REQUIRED fields
even if we don't use them to ensure that the JSON is wellformed? If the JSON
we get is malformed in any way it seems like the safe/conservative option to
error out.

Good, I was hoping to have a conversation about that. I am fine with
either option in principle. In practice I expect to add code to use
`expires_in` (so that we can pass it to custom OAuth hook
implementations) and `scope` (to check if the server has changed it on
us).

That leaves the provider... Forcing the provider itself to implement
unused stuff in order to interoperate seems like it could backfire on
us, especially since IETF standardized an alternate .well-known URI
[1]. (It's not entirely clear how we're supposed to interpret this:
those fields may be required for OpenID, but your OAuth provider might
not be an OpenID provider, and our code doesn't require OpenID.) I
think we should probably tread lightly in that particular case.
Thoughts on that?

Thanks!
--Jacob

[1]: https://www.rfc-editor.org/rfc/rfc8414.html

#109Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#107)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 30.07.24 00:30, Jacob Champion wrote:

But under what circumstances does "the linker doesn't strip out" happen?
If this happens accidentally, then we should have seen some buildfarm
failures or something?

On my machine, for example, I see differences with optimization
levels. Say you inadvertently call pfree() in a _shlib build, as I did
multiple times upthread. By itself, that shouldn't actually be a
problem (it eventually redirects to free()), so it should be legal to
call pfree(), and with -O2 the build succeeds. But with -Og, the
exit() check trips, and when I disassemble I see that pg_malloc() et
al. have infected the shared object. After all, we did tell the linker
to put that object file in, and we don't ask it to garbage-collect
sections.

I'm tempted to say, this is working as intended.

libpgcommon is built as a static library. So we can put all the object
files in the library, and its users only use the object files they
really need. So this garbage collection you allude to actually does
happen, on an object-file level.

You shouldn't use pfree() interchangeably with free(), even if that is
not enforced because it's the same thing underneath. First, it just
makes sense to keep the alloc and free pairs matched up. And second, on
Windows there is some additional restriction (vague knowledge) that the
allocate and free functions must be in the same library, so mixing them
freely might not even work.

#110Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#109)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Aug 2, 2024 at 10:13 AM Peter Eisentraut <peter@eisentraut.org> wrote:

You shouldn't use pfree() interchangeably with free(), even if that is
not enforced because it's the same thing underneath. First, it just
makes sense to keep the alloc and free pairs matched up. And second, on
Windows there is some additional restriction (vague knowledge) that the
allocate and free functions must be in the same library, so mixing them
freely might not even work.

Ah, I forgot about the CRT problems on Windows. So my statement of
"the linker might not garbage collect" is pretty much irrelevant.

But it sounds like we agree that we shouldn't be using fe_memutils at
all in shlib builds. (If you can't use palloc -- it calls exit -- then
you can't use pfree either.) Is 0002 still worth pursuing, once I've
correctly wordsmithed the commit? Or did I misunderstand your point?

Thanks!
--Jacob

#111Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#110)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 02.08.24 19:51, Jacob Champion wrote:

But it sounds like we agree that we shouldn't be using fe_memutils at
all in shlib builds. (If you can't use palloc -- it calls exit -- then
you can't use pfree either.) Is 0002 still worth pursuing, once I've
correctly wordsmithed the commit? Or did I misunderstand your point?

Yes, I think with an adjusted comment and commit message, the actual
change makes sense.

#112Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#111)
8 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Aug 2, 2024 at 11:48 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Yes, I think with an adjusted comment and commit message, the actual
change makes sense.

Done in v25.

...along with a bunch of other stuff:

1. All the debug-mode things that we want for testing but not in
production have now been hidden behind a PGOAUTHDEBUG environment
variable, instead of being enabled by default. At the moment, that
means 1) sensitive HTTP traffic gets printed on stderr, 2) plaintext
HTTP is allowed, and 3) servers may DoS the client by sending a
zero-second retry interval (which speeds up testing a lot). I've
resurrected some of Daniel's CURLOPT_DEBUGFUNCTION implementation for
this.

I think this feature needs more thought, but I'm not sure how much. In
particular I don't think a connection string option would be
appropriate (imagine the "fun" a proxy solution would have with a
spray-my-password-to-stderr switch). But maybe it makes sense to
further divide the dangerous behavior up, so that for example you can
debug the HTTP stream without also allowing plaintext connections, or
something. And maybe stricter maintainers would like to compile the
feature out entirely?

2. The verification_url variant from Azure and Google is now directly supported.

@Daniel: I figured out why I wasn't seeing the string-based-interval
issue in my testing. I've been using Azure's v2.0 OpenID endpoint,
which seems to be much more compliant than the original. Since this is
a new feature, would it be okay to just push new users to that
endpoint rather than supporting the previous weirdness in our code?
(Either way, I think we should support verification_url.)

Along those lines, with Azure I'm now seeing that device_code is not
advertised in grant_types_supported... is that new behavior? Or did
iddawc just not care?

3. I've restructured the libcurl calls to allow
curl_multi_socket_action() to synchronously succeed on its first call,
which we've been seeing a lot in the CI as mentioned upthread. This
led to a bunch of refactoring of the top-level state machine, which
had gotten too complex. I'm much happier with the code organization
now, but it's a big diff.

4. I've changed things around to get rid of two modern libcurl
deprecation warnings. I need to ask curl-library about my use of
curl_multi_socket_all(), which seems like it's exactly what our use
case needs.

Thanks,
--Jacob

Attachments:

since-v24.diff.txttext/plain; charset=US-ASCII; name=since-v24.diff.txtDownload
1:  9fc1df7509 = 1:  76da087e0c Revert ECPG's use of pnstrdup()
2:  fdd89bdee0 ! 2:  6b6d48e001 Remove fe_memutils from libpgcommon_shlib
    @@ Metadata
      ## Commit message ##
         Remove fe_memutils from libpgcommon_shlib
     
    -    libpq appears to have no need for this, and the exit() references cause
    -    our libpq-refs-stamp test to fail if the linker doesn't strip out the
    -    unused code.
    +    libpq must not use palloc/pfree. It's not allowed to exit on allocation
    +    failure, and mixing the frontend pfree with malloc is architecturally
    +    unsound.
    +
    +    Remove fe_memutils from the shlib build entirely, to keep devs from
    +    accidentally depending on it in the future.
     
      ## src/common/Makefile ##
     @@ src/common/Makefile: endif
    - # libraries such as libpq to report errors directly.
    + # A few files are currently only built for frontend, not server.
    + # logging.c is excluded from OBJS_FRONTEND_SHLIB (shared library) as
    + # a matter of policy, because it is not appropriate for general purpose
    +-# libraries such as libpq to report errors directly.
    ++# libraries such as libpq to report errors directly. fe_memutils.c is
    ++# excluded because libpq must not exit() on allocation failure.
      OBJS_FRONTEND_SHLIB = \
      	$(OBJS_COMMON) \
     -	fe_memutils.o \
    @@ src/common/Makefile: endif
     
      ## src/common/meson.build ##
     @@ src/common/meson.build: common_sources_cflags = {
    + # A few files are currently only built for frontend, not server.
    + # logging.c is excluded from OBJS_FRONTEND_SHLIB (shared library) as
    + # a matter of policy, because it is not appropriate for general purpose
    +-# libraries such as libpq to report errors directly.
    ++# libraries such as libpq to report errors directly. fe_memutils.c is
    ++# excluded because libpq must not exit() on allocation failure.
      
      common_sources_frontend_shlib = common_sources
      common_sources_frontend_shlib += files(
3:  ae3ae1cfaa = 3:  faf3707623 common/jsonapi: support libpq as a client
4:  92b257643e ! 4:  4017611c19 libpq: add OAUTHBEARER SASL mechanism
    @@ Commit message
     
         Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!
     
    +    = Debug Mode =
    +
    +    A "dangerous debugging mode" may be enabled in libpq, by setting the
    +    environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
    +    that you will not want in a production system:
    +
    +    - permits the use of plaintext HTTP in the OAuth provider exchange
    +    - sprays HTTP traffic, containing several critical secrets, to stderr
    +    - permits the use of zero-second retry intervals, which can DoS the
    +      client
    +
         = PQauthDataHook =
     
         Clients may override two pieces of OAuth handling using the new
    @@ Commit message
           condition?)
         - support require_auth
         - fill in documentation stubs
    +    - support protocol "variants" implemented by major providers
         - ...and more.
     
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	struct provider provider;
     +	struct device_authz authz;
     +
    ++	int			running;		/* is asynchronous work in progress? */
     +	bool		user_prompted;	/* have we already sent the authz prompt? */
    ++	bool		debugging;		/* can we give unsafe developer assistance? */
     +};
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
     + * practicality, round any fractional intervals up to the next second, and clamp
     + * the result at a minimum of one. (Zero-second intervals would result in an
    -+ * expensive network polling loop.)
    ++ * expensive network polling loop.) Tests may remove the lower bound with
    ++ * PGOAUTHDEBUG, for improved performance.
     + *
     + * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
     + * code expiration time?
     + */
     +static int
    -+parse_interval(const char *interval_str)
    ++parse_interval(struct async_ctx *actx, const char *interval_str)
     +{
     +	double		parsed;
     +	int			cnt;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	parsed = ceil(parsed);
     +
     +	if (parsed < 1)
    -+		return 1;				/* TODO this slows down the tests
    -+								 * considerably... */
    ++		return actx->debugging ? 0 : 1;
    ++
     +	else if (INT_MAX <= parsed)
     +		return INT_MAX;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
     +
     +		/*
    ++		 * Some services (Google, Azure) spell verification_uri differently. We
    ++		 * accept either.
    ++		 */
    ++		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
    ++
    ++		/*
     +		 * The following fields are technically REQUIRED, but we don't use
     +		 * them anywhere yet:
     +		 *
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * we at least know they're valid JSON numbers.
     +	 */
     +	if (authz->interval_str)
    -+		authz->interval = parse_interval(authz->interval_str);
    ++		authz->interval = parse_interval(actx, authz->interval_str);
     +	else
     +	{
     +		/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    -+ * Adds or removes timeouts from the multiplexer set, as directed by the
    -+ * libcurl multi handle. Rather than continually adding and removing the timer,
    -+ * we keep it in the set at all times and just disarm it when it's not needed.
    ++ * Enables or disables the timer in the multiplexer set. The timeout value is
    ++ * in milliseconds (negative values disable the timer). Rather than continually
    ++ * adding and removing the timer, we keep it in the set at all times and just
    ++ * disarm it when it's not needed.
     + */
    -+static int
    -+register_timer(CURLM *curlm, long timeout, void *ctx)
    ++static bool
    ++set_timer(struct async_ctx *actx, long timeout)
     +{
     +#if HAVE_SYS_EPOLL_H
    -+	struct async_ctx *actx = ctx;
     +	struct itimerspec spec = {0};
     +
     +	if (timeout < 0)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 * A zero timeout means libcurl wants us to call back immediately.
     +		 * That's not technically an option for timerfd, but we can make the
     +		 * timeout ridiculously short.
    -+		 *
    -+		 * TODO: maybe just signal drive_request() to immediately call back in
    -+		 * this case?
     +		 */
     +		spec.it_value.tv_nsec = 1;
     +	}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
     +	{
     +		actx_error(actx, "setting timerfd to %ld: %m", timeout);
    -+		return -1;
    ++		return false;
     +	}
     +#endif
     +#ifdef HAVE_SYS_EVENT_H
    -+	struct async_ctx *actx = ctx;
     +	struct kevent ev;
     +
     +	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
     +	{
     +		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
    -+		return -1;
    ++		return false;
     +	}
     +#endif
     +
    ++	return true;
    ++}
    ++
    ++/*
    ++ * Adds or removes timeouts from the multiplexer set, as directed by the
    ++ * libcurl multi handle.
    ++ */
    ++static int
    ++register_timer(CURLM *curlm, long timeout, void *ctx)
    ++{
    ++	struct async_ctx *actx = ctx;
    ++
    ++	/*
    ++	 * TODO: maybe just signal drive_request() to immediately call back in
    ++	 * the (timeout == 0) case?
    ++	 */
    ++	if (!set_timer(actx, timeout))
    ++		return -1; /* actx_error already called */
    ++
    ++	return 0;
    ++}
    ++
    ++/*
    ++ * Prints Curl request debugging information to stderr.
    ++ *
    ++ * Note that this will expose a number of critical secrets, so users have to opt
    ++ * into this (see PGOAUTHDEBUG).
    ++ */
    ++static int
    ++debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
    ++			   void *clientp)
    ++{
    ++	const char * const end = data + size;
    ++	const char *prefix;
    ++
    ++	/* Prefixes are modeled off of the default libcurl debug output. */
    ++	switch (type)
    ++	{
    ++		case CURLINFO_TEXT:
    ++			prefix = "*";
    ++			break;
    ++
    ++		case CURLINFO_HEADER_IN: /* fall through */
    ++		case CURLINFO_DATA_IN:
    ++			prefix = "<";
    ++			break;
    ++
    ++		case CURLINFO_HEADER_OUT: /* fall through */
    ++		case CURLINFO_DATA_OUT:
    ++			prefix = ">";
    ++			break;
    ++
    ++		default:
    ++			return 0;
    ++	}
    ++
    ++	/*
    ++	 * Split the output into lines for readability; sometimes multiple headers
    ++	 * are included in a single call.
    ++	 */
    ++	while (data < end)
    ++	{
    ++		size_t		len = end - data;
    ++		char	   *eol = memchr(data, '\n', len);
    ++
    ++		if (eol)
    ++			len = eol - data + 1;
    ++
    ++		/* TODO: handle unprintables */
    ++		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
    ++				eol ? "" : "\n");
    ++
    ++		data += len;
    ++	}
    ++
     +	return 0;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		/* No alternative resolver, TODO: warn about timeouts */
     +	}
     +
    -+	/* TODO investigate using conn->Pfdebug and CURLOPT_DEBUGFUNCTION here */
    ++	if (actx->debugging)
    ++	{
    ++		/*
    ++		 * Set a callback for retrieving error information from libcurl, the
    ++		 * function only takes effect when CURLOPT_VERBOSE has been set so make
    ++		 * sure the order is kept.
    ++		 */
    ++		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
     +		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
    ++	}
    ++
     +	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
     +
     +	/*
    -+	 * Only HTTP[S] is allowed. TODO: disallow HTTP without user opt-in
    ++	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
    ++	 * intended for testing only.)
    ++	 *
    ++	 * There's a bit of unfortunate complexity around the choice of CURLoption.
    ++	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement
    ++	 * didn't show up until relatively recently.
     +	 */
    -+	CHECK_SETOPT(actx, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS, return false);
    ++	{
    ++#if CURL_AT_LEAST_VERSION(7, 85, 0)
    ++		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
    ++		const char *protos = "https";
    ++		const char * const unsafe = "https,http";
    ++#else
    ++		const CURLoption popt = CURLOPT_PROTOCOLS;
    ++		long		protos = CURLPROTO_HTTPS;
    ++		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
    ++#endif
    ++
    ++		if (actx->debugging)
    ++			protos = unsafe;
    ++
    ++		CHECK_SETOPT(actx, popt, protos, return false);
    ++	}
     +
     +	/*
     +	 * Suppress the Accept header to make our request as minimal as possible.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * anything important there across this call).
     + *
     + * Once a request is queued, it can be driven to completion via drive_request().
    ++ * If actx->running is zero upon return, the request has already finished and
    ++ * drive_request() can be called without returning control to the client.
     + */
     +static bool
     +start_request(struct async_ctx *actx)
     +{
     +	CURLMcode	err;
    -+	int			running;
     +
     +	resetPQExpBuffer(&actx->work_data);
     +	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		return false;
     +	}
     +
    -+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
    -+	if (err)
    -+	{
    -+		actx_error(actx, "asynchronous HTTP request failed: %s",
    -+				   curl_multi_strerror(err));
    -+		return false;
    -+	}
    -+
     +	/*
    -+	 * Sanity check.
    ++	 * actx->running tracks the number of running handles, so we can immediately
    ++	 * call back if no waiting is needed.
     +	 *
    -+	 * TODO: even though this is nominally an asynchronous process, there are
    -+	 * apparently operations that can synchronously fail by this point, such
    -+	 * as connections to closed local ports. Maybe we need to let this case
    -+	 * fall through to drive_request instead, or else perform a
    -+	 * curl_multi_info_read immediately.
    ++	 * Even though this is nominally an asynchronous process, there are some
    ++	 * operations that can synchronously fail by this point (e.g. connections
    ++	 * to closed local ports) or even synchronously succeed if the stars align
    ++	 * (all the libcurl connection caches hit and the server is fast).
     +	 */
    -+	if (running != 1)
    ++	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
    ++	if (err)
     +	{
    -+		actx_error(actx, "failed to queue HTTP request");
    ++		actx_error(actx, "asynchronous HTTP request failed: %s",
    ++				   curl_multi_strerror(err));
     +		return false;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
    ++ * it a no-op.
    ++ */
    ++#ifndef CURL_IGNORE_DEPRECATION
    ++#define CURL_IGNORE_DEPRECATION(x) x
    ++#endif
    ++
    ++/*
     + * Drives the multi handle towards completion. The caller should have already
     + * set up an asynchronous request via start_request().
     + */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +drive_request(struct async_ctx *actx)
     +{
     +	CURLMcode	err;
    -+	int			running;
     +	CURLMsg    *msg;
     +	int			msgs_left;
     +	bool		done;
     +
    -+	err = curl_multi_socket_all(actx->curlm, &running);
    ++	if (actx->running)
    ++	{
    ++		/*
    ++		 * There's an async request in progress. Pump the multi handle.
    ++		 *
    ++		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
    ++		 * inefficient and pointless if your event loop has already handed you
    ++		 * the exact sockets that are ready. But that's not our use case --
    ++		 * our client has no way to tell us which sockets are ready. (They don't
    ++		 * even know there are sockets to begin with.)
    ++		 *
    ++		 * We can grab the list of triggered events from the multiplexer
    ++		 * ourselves, but that's effectively what curl_multi_socket_all() is
    ++		 * going to do... so it appears to be exactly the API we need.
    ++		 *
    ++		 * Ignore the deprecation for now. This needs a followup on
    ++		 * curl-library@, to make sure we're not shooting ourselves in the foot
    ++		 * in some other way.
    ++		 */
    ++		CURL_IGNORE_DEPRECATION(
    ++			err = curl_multi_socket_all(actx->curlm, &actx->running);
    ++		)
    ++
     +		if (err)
     +		{
     +			actx_error(actx, "asynchronous HTTP request failed: %s",
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			return PGRES_POLLING_FAILED;
     +		}
     +
    -+	if (running)
    ++		if (actx->running)
     +		{
     +			/* We'll come back again. */
     +			return PGRES_POLLING_READING;
     +		}
    ++	}
     +
     +	done = false;
     +	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +}
     +
    ++/*
    ++ * Finishes the token request and examines the response. If the flow has
    ++ * completed, a valid token will be returned via the parameter list. Otherwise,
    ++ * the token parameter remains unchanged, and the caller needs to wait for
    ++ * another interval (which will have been increased in response to a slow_down
    ++ * message from the server) before starting a new token request.
    ++ *
    ++ * False is returned only for permanent error conditions.
    ++ */
    ++static bool
    ++handle_token_response(struct async_ctx *actx, char **token)
    ++{
    ++	bool		success = false;
    ++	struct token tok = {0};
    ++	const struct token_error *err;
    ++
    ++	if (!finish_token_request(actx, &tok))
    ++		goto token_cleanup;
    ++
    ++	if (tok.access_token)
    ++	{
    ++		/* Construct our Bearer token. */
    ++		resetPQExpBuffer(&actx->work_data);
    ++		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
    ++
    ++		if (PQExpBufferDataBroken(actx->work_data))
    ++		{
    ++			actx_error(actx, "out of memory");
    ++			goto token_cleanup;
    ++		}
    ++
    ++		*token = strdup(actx->work_data.data);
    ++		if (!*token)
    ++		{
    ++			actx_error(actx, "out of memory");
    ++			goto token_cleanup;
    ++		}
    ++
    ++		success = true;
    ++		goto token_cleanup;
    ++	}
    ++
    ++	/*
    ++	 * authorization_pending and slow_down are the only
    ++	 * acceptable errors; anything else and we bail.
    ++	 */
    ++	err = &tok.err;
    ++	if (!err->error)
    ++	{
    ++		/* TODO test */
    ++		actx_error(actx, "unknown error");
    ++		goto token_cleanup;
    ++	}
    ++
    ++	if (strcmp(err->error, "authorization_pending") != 0 &&
    ++		strcmp(err->error, "slow_down") != 0)
    ++	{
    ++		if (err->error_description)
    ++			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
    ++
    ++		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
    ++		goto token_cleanup;
    ++	}
    ++
    ++	/*
    ++	 * A slow_down error requires us to permanently increase
    ++	 * our retry interval by five seconds. RFC 8628, Sec. 3.5.
    ++	 */
    ++	if (strcmp(err->error, "slow_down") == 0)
    ++	{
    ++		int			prev_interval = actx->authz.interval;
    ++
    ++		actx->authz.interval += 5;
    ++		if (actx->authz.interval < prev_interval)
    ++		{
    ++			actx_error(actx, "slow_down interval overflow");
    ++			goto token_cleanup;
    ++		}
    ++	}
    ++
    ++	success = true;
    ++
    ++token_cleanup:
    ++	free_token(&tok);
    ++	return success;
    ++}
    ++
    ++/*
    ++ * Displays a device authorization prompt for action by the end user, either via
    ++ * the PQauthDataHook, or by a message on standard error if no hook is set.
    ++ */
    ++static bool
    ++prompt_user(struct async_ctx *actx, PGconn *conn)
    ++{
    ++	int			res;
    ++	PQpromptOAuthDevice prompt = {
    ++		.verification_uri = actx->authz.verification_uri,
    ++		.user_code = actx->authz.user_code,
    ++		/* TODO: optional fields */
    ++	};
    ++
    ++	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
    ++
    ++	if (!res)
    ++	{
     ++		fprintf(stderr, "Visit %s and enter the code: %s\n",
     ++				prompt.verification_uri, prompt.user_code);
    ++	}
    ++	else if (res < 0)
    ++	{
    ++		actx_error(actx, "device prompt failed");
    ++		return false;
    ++	}
    ++
    ++	return true;
    ++}
    ++
     +
     +/*
     + * The top-level, nonblocking entry point for the libcurl implementation. This
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	fe_oauth_state *state = conn->sasl_state;
     +	struct async_ctx *actx;
     +
    -+	struct token tok = {0};
    -+
     +	/*
     +	 * XXX This is not safe. libcurl has stringent requirements for the thread
     +	 * context in which you call curl_global_init(), because it's going to try
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (!state->async_ctx)
     +	{
    ++		const char *env;
    ++
     +		/*
     +		 * Create our asynchronous state, and hook it into the upper-level
     +		 * OAuth state immediately, so any failures below won't leak the
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		actx->timerfd = -1;
     +#endif
     +
    ++		/* Should we enable unsafe features? */
    ++		env = getenv("PGOAUTHDEBUG");
    ++		if (env && strcmp(env, "UNSAFE") == 0)
    ++			actx->debugging = true;
    ++
     +		state->async_ctx = actx;
     +		state->free_async_ctx = free_curl_async_ctx;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	actx = state->async_ctx;
     +
    ++	do
    ++	{
     +		/* By default, the multiplexer is the altsock. Reassign as desired. */
     +		*altsock = actx->mux;
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +					else if (status != PGRES_POLLING_OK)
     +					{
     +						/* not done yet */
    -+					free_token(&tok);
     +						return status;
     +					}
     +				}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				break;
     +		}
     +
    ++		/*
    ++		 * Each case here must ensure that actx->running is set while we're
    ++		 * waiting on some asynchronous work. Most cases rely on start_request()
    ++		 * to do that for them.
    ++		 */
     +		switch (actx->step)
     +		{
     +			case OAUTH_STEP_INIT:
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				break;
     +
     +			case OAUTH_STEP_TOKEN_REQUEST:
    -+			{
    -+				const struct token_error *err;
    -+#ifdef HAVE_SYS_EPOLL_H
    -+				struct itimerspec spec = {0};
    -+#endif
    -+#ifdef HAVE_SYS_EVENT_H
    -+				struct kevent ev = {0};
    -+#endif
    -+
    -+				if (!finish_token_request(actx, &tok))
    ++				if (!handle_token_response(actx, &state->token))
     +					goto error_return;
     +
     +				if (!actx->user_prompted)
     +				{
    -+					int			res;
    -+					PQpromptOAuthDevice prompt = {
    -+						.verification_uri = actx->authz.verification_uri,
    -+						.user_code = actx->authz.user_code,
    -+						/* TODO: optional fields */
    -+					};
    -+
     +					/*
     +					 * Now that we know the token endpoint isn't broken, give
     +					 * the user the login instructions.
     +					 */
    -+					res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn,
    -+										 &prompt);
    -+
    -+					if (!res)
    -+					{
    -+						fprintf(stderr, "Visit %s and enter the code: %s",
    -+								prompt.verification_uri, prompt.user_code);
    -+					}
    -+					else if (res < 0)
    -+					{
    -+						actx_error(actx, "device prompt failed");
    ++					if (!prompt_user(actx, conn))
     +						goto error_return;
    -+					}
     +
     +					actx->user_prompted = true;
     +				}
     +
    -+				if (tok.access_token)
    -+				{
    -+					/* Construct our Bearer token. */
    -+					resetPQExpBuffer(&actx->work_data);
    -+					appendPQExpBuffer(&actx->work_data, "Bearer %s",
    -+									  tok.access_token);
    -+
    -+					if (PQExpBufferDataBroken(actx->work_data))
    -+					{
    -+						actx_error(actx, "out of memory");
    -+						goto error_return;
    -+					}
    -+
    -+					state->token = strdup(actx->work_data.data);
    -+					break;
    -+				}
    -+
    -+				/*
    -+				 * authorization_pending and slow_down are the only acceptable
    -+				 * errors; anything else and we bail.
    -+				 */
    -+				err = &tok.err;
    -+				if (!err->error || (strcmp(err->error, "authorization_pending")
    -+									&& strcmp(err->error, "slow_down")))
    -+				{
    -+					/* TODO handle !err->error */
    -+					if (err->error_description)
    -+						appendPQExpBuffer(&actx->errbuf, "%s ",
    -+										  err->error_description);
    -+
    -+					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
    -+
    -+					goto error_return;
    -+				}
    -+
    -+				/*
    -+				 * A slow_down error requires us to permanently increase our
    -+				 * retry interval by five seconds. RFC 8628, Sec. 3.5.
    -+				 */
    -+				if (strcmp(err->error, "slow_down") == 0)
    -+				{
    -+					int			prev_interval = actx->authz.interval;
    -+
    -+					actx->authz.interval += 5;
    -+					if (actx->authz.interval < prev_interval)
    -+					{
    -+						actx_error(actx, "slow_down interval overflow");
    -+						goto error_return;
    -+					}
    -+				}
    ++				if (state->token)
    ++					break; /* done! */
     +
     +				/*
     +				 * Wait for the required interval before issuing the next
     +				 * request.
     +				 */
    -+				Assert(actx->authz.interval > 0);
    -+#ifdef HAVE_SYS_EPOLL_H
    -+				spec.it_value.tv_sec = actx->authz.interval;
    -+
    -+				if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
    -+				{
    -+					actx_error(actx, "failed to set timerfd: %m");
    ++				if (!set_timer(actx, actx->authz.interval * 1000))
     +					goto error_return;
    -+				}
     +
    ++#ifdef HAVE_SYS_EPOLL_H
    ++				/*
    ++				 * No Curl requests are running, so we can simplify by
    ++				 * having the client wait directly on the timerfd rather
    ++				 * than the multiplexer. (This isn't possible for kqueue.)
    ++				 */
     +				*altsock = actx->timerfd;
     +#endif
    -+#ifdef HAVE_SYS_EVENT_H
    -+				/* XXX: I guess this wants to be hidden in a routine */
    -+				EV_SET(&ev, 1, EVFILT_TIMER, EV_ADD, 0,
    -+					   actx->authz.interval * 1000, 0);
    -+				if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
    -+				{
    -+					actx_error(actx, "failed to set kqueue timer: %m");
    -+					goto error_return;
    -+				}
    -+				/* XXX: why did we change the altsock in the epoll version? */
    -+#endif
    ++
     +				actx->step = OAUTH_STEP_WAIT_INTERVAL;
    ++				actx->running = 1;
     +				break;
    -+			}
     +
     +			case OAUTH_STEP_WAIT_INTERVAL:
     +				actx->errctx = "failed to obtain access token";
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				break;
     +		}
     +
    -+	free_token(&tok);
    ++		/*
    ++		 * The vast majority of the time, if we don't have a token at this
    ++		 * point, actx->running will be set. But there are some corner cases
    ++		 * where we can immediately loop back around; see start_request().
    ++		 */
    ++	} while (!state->token && !actx->running);
     +
     +	/* If we've stored a token, we're done. Otherwise come back later. */
     +	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	appendPQExpBufferStr(&conn->errorMessage, "\n");
     +
    -+	free_token(&tok);
     +	return PGRES_POLLING_FAILED;
     +}
     
5:  16994b449d ! 5:  eaff43dc27 backend: add OAUTHBEARER SASL mechanism
    @@ Commit message
                   further checks are done.
     
         Several TODOs:
    -    - port to platforms other than "modern Linux/BSD"
         - implement more helpful handling of HBA misconfigurations
         - use logdetail during auth failures
         - allow passing the configured issuer to the oauth_validator_command, to
    @@ .cirrus.tasks.yml: task:
     +      libcurl4-openssl-dev:i386 \
      
        matrix:
    -     - name: Linux - Debian Bullseye - Autoconf
    +     - name: Linux - Debian Bookworm - Autoconf
     @@ .cirrus.tasks.yml: task:
          folder: $CCACHE_DIR
      
    @@ src/backend/utils/misc/guc_tables.c
      #include "nodes/queryjumble.h"
      #include "optimizer/cost.h"
     @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[] =
    - 		check_synchronized_standby_slots, assign_synchronized_standby_slots, NULL
    + 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
      	},
      
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: free_token(struct token *tok)
      	OAUTH_STEP_DISCOVERY,
      	OAUTH_STEP_DEVICE_AUTHORIZATION,
      	OAUTH_STEP_TOKEN_REQUEST,
    +@@ src/interfaces/libpq/fe-auth-oauth-curl.c: handle_token_response(struct async_ctx *actx, char **token)
    + 	if (!finish_token_request(actx, &tok))
    + 		goto token_cleanup;
    + 
    ++	/* A successful token request gives either a token or an in-band error. */
    ++	Assert(tok.access_token || tok.err.error);
    ++
    + 	if (tok.access_token)
    + 	{
    + 		/* Construct our Bearer token. */
    +@@ src/interfaces/libpq/fe-auth-oauth-curl.c: handle_token_response(struct async_ctx *actx, char **token)
    + 	 * acceptable errors; anything else and we bail.
    + 	 */
    + 	err = &tok.err;
    +-	if (!err->error)
    +-	{
    +-		/* TODO test */
    +-		actx_error(actx, "unknown error");
    +-		goto token_cleanup;
    +-	}
    +-
    + 	if (strcmp(err->error, "authorization_pending") != 0 &&
    + 		strcmp(err->error, "slow_down") != 0)
    + 	{
     
      ## src/test/modules/Makefile ##
     @@ src/test/modules/Makefile: SUBDIRS = \
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +my ($log_start, $log_end);
     +$log_start = $node->wait_for_log(qr/reloading configuration files/);
     +
    ++
    ++# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
    ++# first, check to make sure the client refuses such connections by default.
    ++$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
    ++					 "HTTPS is required without debug mode",
    ++					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
    ++
    ++$ENV{PGOAUTHDEBUG} = "UNSAFE";
    ++
     +my $user = "test";
     +if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
     +					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	"content type with charset (whitespace)",
     +	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
     +);
    ++$node->connect_ok(
    ++	connstr(stage => 'device', uri_spelling => "verification_url"),
    ++	"alternative spelling of verification_uri",
    ++	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++);
     +
     +$node->connect_fails(
     +	connstr(stage => 'device', content_type => 'text/plain'),
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +
     +        return "authorization_pending"
     +
    ++    def _uri_spelling(self) -> str:
    ++        """
    ++        Returns "verification_uri" unless the test has requested something
    ++        different.
    ++        """
    ++        if self._should_modify() and "uri_spelling" in self._test_params:
    ++            return self._test_params["uri_spelling"]
    ++
    ++        return "verification_uri"
    ++
     +    def _send_json(self, js: JsonObject) -> None:
     +        """
     +        Sends the provided JSON dict as an application/json response.
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        resp = {
     +            "device_code": "postgres",
     +            "user_code": "postgresuser",
    -+            "verification_uri": uri,
    ++            self._uri_spelling(): uri,
     +            "expires-in": 5,
     +        }
     +
    @@ src/test/perl/PostgreSQL/Test/Cluster.pm: sub connect_ok
     -	is($stderr, "", "$test_name: no stderr");
     +	if (defined($params{expected_stderr}))
     +	{
    -+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
    ++		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
    ++			&& ($ret != 0))
    ++		{
    ++			# In this case (failing test but matching stderr) we'll have
    ++			# swallowed the output needed to debug. Put it back into the logs.
    ++			diag("$test_name: full stderr:\n" . $stderr);
    ++		}
     +	}
     +	else
     +	{
6:  d163d2ca0a ! 6:  5253f7190a Review comments
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c
      /*
       * Parsed JSON Representations
       *
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: struct async_ctx
    - 	struct device_authz authz;
    - 
    - 	bool		user_prompted;	/* have we already sent the authz prompt? */
    -+
    -+	int			running;
    - };
    - 
    - /*
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
      		return false;
      	}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: append_data(char *buf, size_t size, s
      
      	return len;
      }
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: static bool
    - start_request(struct async_ctx *actx)
    - {
    - 	CURLMcode	err;
    --	int			running;
    - 
    - 	resetPQExpBuffer(&actx->work_data);
    - 	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: start_request(struct async_ctx *actx)
    - 		return false;
    - 	}
    - 
    --	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &running);
    -+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
    - 	if (err)
    - 	{
    - 		actx_error(actx, "asynchronous HTTP request failed: %s",
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: start_request(struct async_ctx *actx)
    - 	}
    - 
    - 	/*
    --	 * Sanity check.
    --	 *
    --	 * TODO: even though this is nominally an asynchronous process, there are
    --	 * apparently operations that can synchronously fail by this point, such
    --	 * as connections to closed local ports. Maybe we need to let this case
    --	 * fall through to drive_request instead, or else perform a
    --	 * curl_multi_info_read immediately.
    -+	 * Even though this is nominally an asynchronous process, there are some
    -+	 * operations that can synchronously fail by this point like connections
    -+	 * to closed local ports. Fall through and leave the sanity check for the
    -+	 * next state consuming actx.
    - 	 */
    --	if (running != 1)
    --	{
    --		actx_error(actx, "failed to queue HTTP request");
    --		return false;
    --	}
    - 
    - 	return true;
    - }
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: static PostgresPollingStatusType
    - drive_request(struct async_ctx *actx)
    - {
    - 	CURLMcode	err;
    --	int			running;
    - 	CURLMsg    *msg;
    - 	int			msgs_left;
    - 	bool		done;
    - 
    --	err = curl_multi_socket_all(actx->curlm, &running);
    -+	/* Sanity check the previous operation */
    -+	if (actx->running != 1)
    -+	{
    -+		actx_error(actx, "failed to queue HTTP request");
    -+		return false;
    -+	}
    -+
    -+	err = curl_multi_socket_all(actx->curlm, &actx->running);
    - 	if (err)
    - 	{
    - 		actx_error(actx, "asynchronous HTTP request failed: %s",
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: drive_request(struct async_ctx *actx)
    - 		return PGRES_POLLING_FAILED;
    - 	}
    - 
    --	if (running)
    -+	if (actx->running)
    - 	{
    - 		/* We'll come back again. */
    - 		return PGRES_POLLING_READING;
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: start_device_authz(struct async_ctx *actx, PGconn *conn)
      	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
      	if (conn->oauth_scope)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: finish_token_request(struct async_ctx
     +	return false;
      }
      
    - 
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    - 				 * errors; anything else and we bail.
    - 				 */
    - 				err = &tok.err;
    --				if (!err->error || (strcmp(err->error, "authorization_pending")
    --									&& strcmp(err->error, "slow_down")))
    -+				if (!err->error)
    -+				{
    -+					actx_error(actx, "unknown error");
    -+					goto error_return;
    -+				}
    -+
    -+				if (strcmp(err->error, "authorization_pending") != 0 &&
    -+					strcmp(err->error, "slow_down") != 0)
    - 				{
    --					/* TODO handle !err->error */
    - 					if (err->error_description)
    - 						appendPQExpBuffer(&actx->errbuf, "%s ",
    - 										  err->error_description);
    - 
    - 					appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
    --
    - 					goto error_return;
    - 				}
    - 
    + /*
     
      ## src/interfaces/libpq/fe-auth-oauth.c ##
     @@ src/interfaces/libpq/fe-auth-oauth.c: handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
7:  9117bf8be2 ! 7:  8b9641b122 DO NOT MERGE: Add pytest suite for OAuth
    @@ .cirrus.tasks.yml: task:
     +      python3-venv \
      
        matrix:
    -     - name: Linux - Debian Bullseye - Autoconf
    +     - name: Linux - Debian Bookworm - Autoconf
     @@ .cirrus.tasks.yml: task:
      
            # Also build & test in a 32bit build - it's gotten rare to test that
    @@ .cirrus.tasks.yml: task:
     @@ .cirrus.tasks.yml: task:
                  -Dllvm=disabled \
                  --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
    -             -DPERL=perl5.32-i386-linux-gnu \
    +             -DPERL=perl5.36-i386-linux-gnu \
     -            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
     +            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
                  build-32
    @@ src/test/python/client/conftest.py (new)
     +# SPDX-License-Identifier: PostgreSQL
     +#
     +
    ++import contextlib
     +import socket
     +import sys
     +import threading
    @@ src/test/python/client/conftest.py (new)
     +    def run(self):
     +        try:
     +            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
    ++            with contextlib.closing(conn):
     +                self._pump_async(conn)
    -+            conn.close()
     +        except Exception as e:
     +            self.exception = e
     +
    @@ src/test/python/client/test_oauth.py (new)
     +            self._handle(params=params)
     +
     +
    ++@pytest.fixture(autouse=True)
    ++def enable_client_oauth_debugging(monkeypatch):
    ++    """
    ++    HTTP providers aren't allowed by default; enable them via envvar.
    ++    """
    ++    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
    ++
    ++
     +@pytest.fixture
     +def openid_provider(unused_tcp_port_factory):
     +    """
    @@ src/test/python/client/test_oauth.py (new)
     +        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
     +    ],
     +)
    ++@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
     +@pytest.mark.parametrize(
     +    "asynchronous",
     +    [
    @@ src/test/python/client/test_oauth.py (new)
     +    accept,
     +    openid_provider,
     +    asynchronous,
    ++    uri_spelling,
     +    content_type,
     +    retries,
     +    scope,
    @@ src/test/python/client/test_oauth.py (new)
     +            "device_code": device_code,
     +            "user_code": user_code,
     +            "interval": 0,
    -+            "verification_uri": verification_url,
    ++            uri_spelling: verification_url,
     +            "expires_in": 5,
     +        }
     +
    @@ src/test/python/client/test_oauth.py (new)
     +
     +    expected_error = "slow_down interval overflow"
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    ++        client.check_completed()
    ++
    ++
    ++def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
    ++    """
    ++    HTTP must be refused without PGOAUTHDEBUG.
    ++    """
    ++    monkeypatch.delenv("PGOAUTHDEBUG")
    ++    sock, client = accept(
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
    ++
    ++    # No provider callbacks necessary; we should fail immediately.
    ++
    ++    with sock:
    ++        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++            initial = start_oauth_handshake(conn)
    ++
    ++            # Fail the SASL exchange and link to the HTTP provider.
    ++            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
    ++            resp = json.dumps(
    ++                {
    ++                    "status": "invalid_token",
    ++                    "openid-configuration": discovery_uri,
    ++                }
    ++            )
    ++
    ++            pq3.send(
    ++                conn,
    ++                pq3.types.AuthnRequest,
    ++                type=pq3.authn.SASLContinue,
    ++                body=resp.encode("ascii"),
    ++            )
    ++
    ++            # Per RFC, the client is required to send a dummy ^A response.
    ++            pkt = pq3.recv1(conn)
    ++            assert pkt.type == pq3.types.PasswordMessage
    ++            assert pkt.payload == b"\x01"
    ++
    ++            # Now fail the SASL exchange.
    ++            pq3.send(
    ++                conn,
    ++                pq3.types.ErrorResponse,
    ++                fields=[
    ++                    b"SFATAL",
    ++                    b"C28000",
    ++                    b"Mdoesn't matter",
    ++                    b"",
    ++                ],
    ++            )
    ++
    ++    # FIXME: We'll get a second connection, but it won't do anything.
    ++    sock, _ = accept()
    ++    expect_disconnected_handshake(sock)
    ++
    ++    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
    ++    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
     
      ## src/test/python/conftest.py (new) ##
v25-0005-backend-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v25-0005-backend-add-OAUTHBEARER-SASL-mechanism.patchDownload
From eaff43dc272899a842bc1d1f4a8d8219677a64c9 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v25 5/7] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure. Further
   authentication/authorization is pointless if the bearer token wasn't
   issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated identity while setting the authorized
      member to false. (This can make it easier to see what's going on in
      the Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
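
Putting the options together, an HBA entry using this method might look
like the following (the issuer URL, scope, and address are placeholder
examples, not values taken from the patch):

```
# TYPE  DATABASE  USER  ADDRESS  METHOD
host    all       all   samenet  oauth issuer="https://accounts.google.com" scope="openid email"
```

Note that issuer and scope must both be present; map and
trust_validator_authz may be added as described above.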

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |   9 +
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/common/Makefile                           |   2 +-
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   | 187 +++++
 .../modules/oauth_validator/t/oauth_server.py | 270 +++++++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 32 files changed, 1555 insertions(+), 47 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 1ce6c443a8..94187cea06 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..3c7884baf9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,9 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+
+ <para>
+  TODO
+ </para>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ec9f90e283..bfb73991e7 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -263,6 +263,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..2a0d74a079
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 75d588e36a..2245ae24a8 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index c0a52cdcc3..0739fe4b43 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4780,6 +4781,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/common/Makefile b/src/common/Makefile
index f9968e7d2b..45ad248c94 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than
+ * this, but the limit leaves some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 435abee56a..d9c9fc6cf9 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -143,7 +143,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1864,6 +1864,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1892,13 +1895,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * acceptable errors; anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..16ee8acd8f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,187 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..b17198302b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,270 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "content_type" in self._test_params:
+            return self._test_params["content_type"]
+
+        return "application/json"
+
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        if self._should_modify() and "interval" in self._test_params:
+            return self._test_params["interval"]
+
+        return 0
+
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "retry_code" in self._test_params:
+            return self._test_params["retry_code"]
+
+        return "authorization_pending"
+
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "uri_spelling" in self._test_params:
+            return self._test_params["uri_spelling"]
+
+        return "verification_uri"
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type())
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling(): uri,
+            "expires-in": 5,
+        }
+
+        interval = self._interval()
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code()}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..7b4dc9c494
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 32ee98aebc..cdebba1ad8 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2397,6 +2397,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, it is matched against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2440,7 +2445,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..abdff5a3c3
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d518fe91e2..ff537441dd 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1718,6 +1718,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3065,6 +3066,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3660,6 +3663,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

Attachment: v25-0004-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 4017611c1953c6c2091622861a39eee87d11d0f5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v25 4/7] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.
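
A hook that follows this chaining contract could be sketched as below. Since
this is a proof of concept whose libpq declarations may still shift, the enum
values, hook signature, and getter/setter here are simplified stand-ins (not
the real libpq-fe.h definitions) so the delegation pattern compiles on its own:

```c
/*
 * Sketch of the hook-chaining contract described above. All types and the
 * get/set functions are hypothetical stand-ins for the libpq API, used only
 * to keep the example self-contained.
 */
#include <assert.h>
#include <stddef.h>

typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;

typedef int (*PQauthDataHook_type) (PGauthData type, void *conn, void *data);

/* Stand-ins for PQgetAuthDataHook()/PQsetAuthDataHook(). */
static PQauthDataHook_type current_hook;

static int
default_hook(PGauthData type, void *conn, void *data)
{
	return 0;					/* no opinion; fall back to builtin handling */
}

static PQauthDataHook_type
get_hook(void)
{
	return current_hook ? current_hook : default_hook;
}

static void
set_hook(PQauthDataHook_type hook)
{
	current_hook = hook;
}

/* Application hook: handle only the device prompt, delegate everything else. */
static PQauthDataHook_type prev_hook;

static int
my_hook(PGauthData type, void *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		/* display the verification URL and user code however we like... */
		return 1;				/* > 0: we handled this piece of authdata */
	}

	/* Unrecognized type: pass it down the chain. */
	return prev_hook(type, conn, data);
}
```

The key point is that installation saves the previous hook before replacing
it (in terms of the stand-ins above: `prev_hook = get_hook();
set_hook(my_hook);`), so several libraries can stack their hooks safely.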

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
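
The non-blocking branch of that contract might look like the sketch below. The
request structure here is a made-up stand-in (the real layout is whatever
PQoauthBearerRequest ends up defining), but it shows the shape of a handler
that returns a cached token immediately when one is available and otherwise
declines, letting the builtin device authorization flow run:

```c
/*
 * Sketch of a synchronous bearer-token handler. FakeBearerRequest and its
 * fields are hypothetical; they only mirror the idea of an issuer/scope input
 * and a token output.
 */
#include <assert.h>
#include <string.h>

typedef struct
{
	const char *issuer;			/* input: issuer the server wants us to use */
	const char *scope;			/* input: requested scope */
	char		token[256];		/* output: the Bearer token, if we have one */
} FakeBearerRequest;

/*
 * Returns 1 if a token was available without blocking (and fills it in), or
 * 0 to decline, in which case the caller would fall back to an asynchronous
 * flow or the builtin one.
 */
static int
handle_bearer_request(FakeBearerRequest *req)
{
	/* A real hook might consult a token cache or the OS keychain here. */
	if (strcmp(req->issuer, "https://oauth.example.org") == 0)
	{
		strcpy(req->token, "my-cached-token");
		return 1;
	}

	return 0;
}
```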

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2222 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   14 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 25 files changed, 3574 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 4f3aa44756..d502b2ec8e 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8479,6 +8482,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13010,6 +13059,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14035,6 +14168,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 049bc01491..d0f695a21d 100644
--- a/configure.ac
+++ b/configure.ac
@@ -924,6 +924,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1406,6 +1426,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1597,6 +1622,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 068ee60771..6a20247ef9 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2335,6 +2335,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9938,6 +9975,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index cc176f11b5..4e01a48cd8 100644
--- a/meson.build
+++ b/meson.build
@@ -912,6 +912,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3077,6 +3106,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3749,6 +3779,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 0e9b108e66..4e91c19ada 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -247,6 +247,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -730,6 +733,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 27f8499d8a..7d593778ec 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..435abee56a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2222 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not contain arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, so that any trailing media type parameters don't affect the
+	 * match.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			/* HTTP optional whitespace allows only spaces and htabs. */
+			case ' ':
+			case '\t':
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently. We
+		 * accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in
+	 * the (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1; /* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char * const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN: /* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT: /* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect when CURLOPT_VERBOSE has been set, so
+		 * keep these two calls in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of CURLoption.
+	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement
+	 * didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char * const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE which by default is 16kb (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can immediately
+	 * call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our clients have no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the foot
+		 * in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; in return it gives us the
+ * nonces we'll need to poll the request status later, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
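+
+/*
+ * For reference, a successful device authorization response looks something
+ * like this (example adapted from RFC 8628, Sec. 3.2); parse_device_authz()
+ * extracts the fields we use:
+ *
+ *     {
+ *       "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
+ *       "user_code": "WDJB-MJHT",
+ *       "verification_uri": "https://example.com/device",
+ *       "expires_in": 1800,
+ *       "interval": 5
+ *     }
+ */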
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by finish_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * error from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		if (err->error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase
+	 * our retry interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or with a message on standard error if the hook doesn't
+ * handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		fprintf(stderr, "Visit %s and enter the code: %s\n",
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on start_request()
+		 * to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break; /* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+				/*
+				 * No Curl requests are running, so we can simplify by
+				 * having the client wait directly on the timerfd rather
+				 * than the multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f943a31cc0
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
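+/*
+ * Builds the SASL initial client response (RFC 7628, Sec. 4.1). With ^A
+ * standing in for the 0x01 kvsep byte, and a token of "Bearer abcd", the
+ * result looks like:
+ *
+ *     n,,^Aauth=Bearer abcd^A^A
+ */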
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+
+	if (!conn->oauth_discovery_uri)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return false;
+	}
+	return true;
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 3b25d8afda..d02424e11b 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -419,7 +420,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -437,7 +438,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -524,6 +525,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -563,26 +573,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -625,7 +657,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -650,11 +682,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -956,12 +998,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1119,7 +1167,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1136,7 +1184,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1452,3 +1501,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 360d9a4547..97118ce94b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -364,6 +364,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -627,6 +644,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2644,6 +2662,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3697,6 +3716,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3852,6 +3872,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3885,7 +3915,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3918,6 +3958,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4599,6 +4674,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4716,6 +4792,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7198,6 +7279,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f235bfbb41..aa1fee38c8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1041,10 +1041,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1061,7 +1064,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 87a6f3df07..25f216afcf 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -82,6 +84,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -163,6 +167,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -695,10 +706,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index f36d76bf3f..c9d9213cf3 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -357,6 +357,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -420,6 +422,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -491,6 +502,9 @@ struct pg_conn
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 7623aeadab..cf1da9c1a7 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..d518fe91e2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -368,6 +369,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1714,6 +1717,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1778,6 +1782,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1938,11 +1943,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3451,6 +3459,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

v25-0001-Revert-ECPG-s-use-of-pnstrdup.patch
From 76da087e0cd2d3f8cc179a08f675ad02f9fe9871 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 2 Jul 2024 12:26:04 -0700
Subject: [PATCH v25 1/7] Revert ECPG's use of pnstrdup()

Commit 0b9466fce added a dependency on fe_memutils' pnstrdup() inside
informix.c. This 1) makes it hard to remove fe_memutils from
libpgcommon_shlib, and 2) adds an exit() path where it perhaps should
not be. (See the !str check after the call to pnstrdup; that failure case
should not be reachable.)

Revert that part of the patch for now, pending further discussion on the
thread.
---
 src/interfaces/ecpg/compatlib/informix.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/src/interfaces/ecpg/compatlib/informix.c b/src/interfaces/ecpg/compatlib/informix.c
index 8ea89e640a..65a0b2e46c 100644
--- a/src/interfaces/ecpg/compatlib/informix.c
+++ b/src/interfaces/ecpg/compatlib/informix.c
@@ -175,6 +175,25 @@ deccopy(decimal *src, decimal *target)
 	memcpy(target, src, sizeof(decimal));
 }
 
+static char *
+ecpg_strndup(const char *str, size_t len)
+{
+	size_t		real_len = strlen(str);
+	int			use_len = (int) ((real_len > len) ? len : real_len);
+
+	char	   *new = malloc(use_len + 1);
+
+	if (new)
+	{
+		memcpy(new, str, use_len);
+		new[use_len] = '\0';
+	}
+	else
+		errno = ENOMEM;
+
+	return new;
+}
+
 int
 deccvasc(const char *cp, int len, decimal *np)
 {
@@ -186,8 +205,8 @@ deccvasc(const char *cp, int len, decimal *np)
 	if (risnull(CSTRINGTYPE, cp))
 		return 0;
 
-	str = pnstrdup(cp, len);	/* decimal_in always converts the complete
-								 * string */
+	str = ecpg_strndup(cp, len);	/* decimal_in always converts the complete
+									 * string */
 	if (!str)
 		ret = ECPG_INFORMIX_NUM_UNDERFLOW;
 	else
-- 
2.34.1

v25-0002-Remove-fe_memutils-from-libpgcommon_shlib.patch
From 6b6d48e0013fd9cb82e812779e78e70e8ebcf3ee Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 1 Jul 2024 14:18:33 -0700
Subject: [PATCH v25 2/7] Remove fe_memutils from libpgcommon_shlib

libpq must not use palloc/pfree. It's not allowed to exit on allocation
failure, and mixing the frontend pfree with malloc is architecturally
unsound.

Remove fe_memutils from the shlib build entirely, to keep devs from
accidentally depending on it in the future.
---
 src/common/Makefile    | 5 +++--
 src/common/meson.build | 5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/src/common/Makefile b/src/common/Makefile
index 3d83299432..3049cf26ba 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -102,14 +102,15 @@ endif
 # A few files are currently only built for frontend, not server.
 # logging.c is excluded from OBJS_FRONTEND_SHLIB (shared library) as
 # a matter of policy, because it is not appropriate for general purpose
-# libraries such as libpq to report errors directly.
+# libraries such as libpq to report errors directly. fe_memutils.c is
+# excluded because libpq must not exit() on allocation failure.
 OBJS_FRONTEND_SHLIB = \
 	$(OBJS_COMMON) \
-	fe_memutils.o \
 	restricted_token.o \
 	sprompt.o
 OBJS_FRONTEND = \
 	$(OBJS_FRONTEND_SHLIB) \
+	fe_memutils.o \
 	logging.o
 
 # foo.o, foo_shlib.o, and foo_srv.o are all built from foo.c
diff --git a/src/common/meson.build b/src/common/meson.build
index de68e408fa..f72628a646 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -101,17 +101,18 @@ common_sources_cflags = {
 # A few files are currently only built for frontend, not server.
 # logging.c is excluded from OBJS_FRONTEND_SHLIB (shared library) as
 # a matter of policy, because it is not appropriate for general purpose
-# libraries such as libpq to report errors directly.
+# libraries such as libpq to report errors directly. fe_memutils.c is
+# excluded because libpq must not exit() on allocation failure.
 
 common_sources_frontend_shlib = common_sources
 common_sources_frontend_shlib += files(
-  'fe_memutils.c',
   'restricted_token.c',
   'sprompt.c',
 )
 
 common_sources_frontend_static = common_sources_frontend_shlib
 common_sources_frontend_static += files(
+  'fe_memutils.c',
   'logging.c',
 )
 
-- 
2.34.1

v25-0003-common-jsonapi-support-libpq-as-a-client.patch
From faf3707623f282322c5ceef265574eda5022dbb3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v25 3/7] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed rather than exit()ing.

Co-authored-by: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_combinebackup/Makefile             |   4 +-
 src/bin/pg_combinebackup/meson.build          |   2 +-
 src/bin/pg_verifybackup/Makefile              |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 448 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   5 +-
 src/include/common/jsonapi.h                  |  20 +-
 src/test/modules/test_json_parser/Makefile    |   3 +
 src/test/modules/test_json_parser/meson.build |   4 +-
 10 files changed, 361 insertions(+), 137 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,7 +32,7 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index d142608e94..c75205a652 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,7 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/common/Makefile b/src/common/Makefile
index 3049cf26ba..f9968e7d2b 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 2527dbe1da..bb2e8ca2e1 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,66 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define REALLOC realloc
+#define FREE(s) free(s)
+
+#define appendStrVal			appendPQExpBuffer
+#define appendBinaryStrVal		appendBinaryPQExpBuffer
+#define appendStrValChar		appendPQExpBufferChar
+/* XXX should we add a macro version to PQExpBuffer? */
+#define appendStrValCharMacro	appendPQExpBufferChar
+#define createStrVal			createPQExpBuffer
+#define initStrVal				initPQExpBuffer
+#define resetStrVal				resetPQExpBuffer
+#define termStrVal				termPQExpBuffer
+#define destroyStrVal			destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define REALLOC repalloc
+
+/*
+ * Backend pfree() doesn't handle NULL pointers like the frontend's does; smooth
+ * that over to reduce mental gymnastics. Avoid multiple evaluation of the macro
+ * argument to avoid future hair-pulling.
+ */
+#define FREE(s) do {	\
+	void *__v = (s);	\
+	if (__v)			\
+		pfree(__v);		\
+} while (0)
+
+#define appendStrVal			appendStringInfo
+#define appendBinaryStrVal		appendBinaryStringInfo
+#define appendStrValChar		appendStringInfoChar
+#define appendStrValCharMacro	appendStringInfoCharMacro
+#define createStrVal			makeStringInfo
+#define initStrVal				initStringInfo
+#define resetStrVal				resetStringInfo
+#define termStrVal(s)			pfree((s)->data)
+#define destroyStrVal			destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -103,7 +159,7 @@ struct JsonIncrementalState
 {
 	bool		is_last_chunk;
 	bool		partial_completed;
-	StringInfoData partial_token;
+	StrValType	partial_token;
 };
 
 /*
@@ -219,6 +275,7 @@ static JsonParseErrorType parse_object(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType parse_array_element(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType parse_array(JsonLexContext *lex, JsonSemAction *sem);
 static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
+static bool allocate_incremental_state(JsonLexContext *lex);
 
 /* the null action object used for pure validation */
 JsonSemAction nullSemAction =
@@ -273,15 +330,11 @@ IsValidJsonNumber(const char *str, size_t len)
 {
 	bool		numeric_error;
 	size_t		total_len;
-	JsonLexContext dummy_lex;
+	JsonLexContext dummy_lex = {0};
 
 	if (len <= 0)
 		return false;
 
-	dummy_lex.incremental = false;
-	dummy_lex.inc_state = NULL;
-	dummy_lex.pstack = NULL;
-
 	/*
 	 * json_lex_number expects a leading  '-' to have been eaten already.
 	 *
@@ -321,6 +374,9 @@ IsValidJsonNumber(const char *str, size_t len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
@@ -328,7 +384,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -341,13 +399,70 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
 
 	return lex;
 }
 
+/*
+ * Allocates the internal bookkeeping structures for incremental parsing. This
+ * can only fail in-band with FRONTEND code.
+ */
+#define JS_STACK_CHUNK_SIZE 64
+#define JS_MAX_PROD_LEN 10		/* more than we need */
+#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
+								 * number */
+static bool
+allocate_incremental_state(JsonLexContext *lex)
+{
+	void	   *pstack,
+			   *prediction,
+			   *fnames,
+			   *fnull;
+
+	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
+	pstack = ALLOC(sizeof(JsonParserStack));
+	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
+	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
+	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+#ifdef FRONTEND
+	if (!lex->inc_state
+		|| !pstack
+		|| !prediction
+		|| !fnames
+		|| !fnull)
+	{
+		FREE(lex->inc_state);
+		FREE(pstack);
+		FREE(prediction);
+		FREE(fnames);
+		FREE(fnull);
+
+		return false;
+	}
+#endif
+
+	initStrVal(&(lex->inc_state->partial_token));
+	lex->pstack = pstack;
+	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
+	lex->pstack->prediction = prediction;
+	lex->pstack->pred_index = 0;
+	lex->pstack->fnames = fnames;
+	lex->pstack->fnull = fnull;
+
+	lex->incremental = true;
+	return true;
+}
+
 
 /*
  * makeJsonLexContextIncremental
@@ -357,19 +472,20 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
  * we don't need the input, that will be handed in bit by bit to the
  * parse routine. We also need an accumulator for partial tokens in case
  * the boundary between chunks happens to fall in the middle of a token.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
-#define JS_STACK_CHUNK_SIZE 64
-#define JS_MAX_PROD_LEN 10		/* more than we need */
-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
-								 * number */
-
 JsonLexContext *
 makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 							  bool need_escapes)
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
+
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -377,42 +493,60 @@ makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 
 	lex->line_number = 1;
 	lex->input_encoding = encoding;
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+	if (!allocate_incremental_state(lex))
+	{
+		if (lex->flags & JSONLEX_FREE_STRUCT)
+			FREE(lex);
+		return NULL;
+	}
+
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+
 	return lex;
 }
 
-static inline void
+static inline bool
 inc_lex_level(JsonLexContext *lex)
 {
-	lex->lex_level += 1;
-
-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
+	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
 	{
-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
-		lex->pstack->prediction =
-			repalloc(lex->pstack->prediction,
-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
-		if (lex->pstack->fnames)
-			lex->pstack->fnames =
-				repalloc(lex->pstack->fnames,
-						 lex->pstack->stack_size * sizeof(char *));
-		if (lex->pstack->fnull)
-			lex->pstack->fnull =
-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
+		size_t		new_stack_size;
+		char	   *new_prediction;
+		char	  **new_fnames;
+		bool	   *new_fnull;
+
+		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
+
+		new_prediction = REALLOC(lex->pstack->prediction,
+								 new_stack_size * JS_MAX_PROD_LEN);
+		new_fnames = REALLOC(lex->pstack->fnames,
+							 new_stack_size * sizeof(char *));
+		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
+
+#ifdef FRONTEND
+		if (!new_prediction || !new_fnames || !new_fnull)
+			return false;
+#endif
+
+		lex->pstack->stack_size = new_stack_size;
+		lex->pstack->prediction = new_prediction;
+		lex->pstack->fnames = new_fnames;
+		lex->pstack->fnull = new_fnull;
 	}
+
+	lex->lex_level += 1;
+	return true;
 }
 
 static inline void
@@ -482,24 +616,31 @@ get_fnull(JsonLexContext *lex)
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
+	if (!lex)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		destroyStrVal(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		destroyStrVal(lex->errormsg);
 
 	if (lex->incremental)
 	{
-		pfree(lex->inc_state->partial_token.data);
-		pfree(lex->inc_state);
-		pfree(lex->pstack->prediction);
-		pfree(lex->pstack->fnames);
-		pfree(lex->pstack->fnull);
-		pfree(lex->pstack);
+		termStrVal(&lex->inc_state->partial_token);
+		FREE(lex->inc_state);
+		FREE(lex->pstack->prediction);
+		FREE(lex->pstack->fnames);
+		FREE(lex->pstack->fnull);
+		FREE(lex->pstack);
 	}
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -522,22 +663,13 @@ JsonParseErrorType
 pg_parse_json(JsonLexContext *lex, JsonSemAction *sem)
 {
 #ifdef FORCE_JSON_PSTACK
-
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-
 	/*
 	 * We don't need partial token processing, there is only one chunk. But we
 	 * still need to init the partial token string so that freeJsonLexContext
-	 * works.
+	 * works, so perform the full incremental initialization.
 	 */
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+	if (!allocate_incremental_state(lex))
+		return JSON_OUT_OF_MEMORY;
 
 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
 
@@ -597,7 +729,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -737,7 +869,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_OEND:
@@ -766,7 +900,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_AEND:
@@ -793,9 +929,11 @@ pg_parse_json_incremental(JsonLexContext *lex,
 						json_ofield_action ostart = sem->object_field_start;
 						json_ofield_action oend = sem->object_field_end;
 
-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
+						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
 						{
-							fname = pstrdup(lex->strval->data);
+							fname = STRDUP(lex->strval->data);
+							if (fname == NULL)
+								return JSON_OUT_OF_MEMORY;
 						}
 						set_fname(lex, fname);
 					}
@@ -883,14 +1021,21 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							 */
 							if (tok == JSON_TOKEN_STRING)
 							{
-								if (lex->strval != NULL)
-									pstack->scalar_val = pstrdup(lex->strval->data);
+								if (lex->parse_strval)
+								{
+									pstack->scalar_val = STRDUP(lex->strval->data);
+									if (pstack->scalar_val == NULL)
+										return JSON_OUT_OF_MEMORY;
+								}
 							}
 							else
 							{
 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
 
-								pstack->scalar_val = palloc(tlen + 1);
+								pstack->scalar_val = ALLOC(tlen + 1);
+								if (pstack->scalar_val == NULL)
+									return JSON_OUT_OF_MEMORY;
+
 								memcpy(pstack->scalar_val, lex->token_start, tlen);
 								pstack->scalar_val[tlen] = '\0';
 							}
@@ -1025,14 +1170,21 @@ parse_scalar(JsonLexContext *lex, JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -1066,8 +1218,12 @@ parse_object_field(JsonLexContext *lex, JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -1123,6 +1279,11 @@ parse_object(JsonLexContext *lex, JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -1312,15 +1473,24 @@ json_lex(JsonLexContext *lex)
 	const char *const end = lex->input + lex->input_length;
 	JsonParseErrorType result;
 
-	if (lex->incremental && lex->inc_state->partial_completed)
+	if (lex->incremental)
 	{
-		/*
-		 * We just lexed a completed partial token on the last call, so reset
-		 * everything
-		 */
-		resetStringInfo(&(lex->inc_state->partial_token));
-		lex->token_terminator = lex->input;
-		lex->inc_state->partial_completed = false;
+		if (lex->inc_state->partial_completed)
+		{
+			/*
+			 * We just lexed a completed partial token on the last call, so
+			 * reset everything
+			 */
+			resetStrVal(&(lex->inc_state->partial_token));
+			lex->token_terminator = lex->input;
+			lex->inc_state->partial_completed = false;
+		}
+
+#ifdef FRONTEND
+		/* Make sure our partial token buffer is valid before using it below. */
+		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
+			return JSON_OUT_OF_MEMORY;
+#endif
 	}
 
 	s = lex->token_terminator;
@@ -1331,7 +1501,7 @@ json_lex(JsonLexContext *lex)
 		 * We have a partial token. Extend it and if completed lex it by a
 		 * recursive call
 		 */
-		StringInfo	ptok = &(lex->inc_state->partial_token);
+		StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
 		JsonLexContext dummy_lex;
@@ -1358,7 +1528,7 @@ json_lex(JsonLexContext *lex)
 			{
 				char		c = lex->input[i];
 
-				appendStringInfoCharMacro(ptok, c);
+				appendStrValCharMacro(ptok, c);
 				added++;
 				if (c == '"' && escapes % 2 == 0)
 				{
@@ -1403,7 +1573,7 @@ json_lex(JsonLexContext *lex)
 						case '8':
 						case '9':
 							{
-								appendStringInfoCharMacro(ptok, cc);
+								appendStrValCharMacro(ptok, cc);
 								added++;
 							}
 							break;
@@ -1424,7 +1594,7 @@ json_lex(JsonLexContext *lex)
 
 				if (JSON_ALPHANUMERIC_CHAR(cc))
 				{
-					appendStringInfoCharMacro(ptok, cc);
+					appendStrValCharMacro(ptok, cc);
 					added++;
 				}
 				else
@@ -1467,6 +1637,7 @@ json_lex(JsonLexContext *lex)
 		dummy_lex.input_length = ptok->len;
 		dummy_lex.input_encoding = lex->input_encoding;
 		dummy_lex.incremental = false;
+		dummy_lex.parse_strval = lex->parse_strval;
 		dummy_lex.strval = lex->strval;
 
 		partial_result = json_lex(&dummy_lex);
@@ -1622,8 +1793,8 @@ json_lex(JsonLexContext *lex)
 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
 						p == lex->input + lex->input_length)
 					{
-						appendBinaryStringInfo(
-											   &(lex->inc_state->partial_token), s, end - s);
+						appendBinaryStrVal(
+										   &(lex->inc_state->partial_token), s, end - s);
 						return JSON_INCOMPLETE;
 					}
 
@@ -1680,8 +1851,8 @@ json_lex_string(JsonLexContext *lex)
 	do { \
 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
 		{ \
-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
-								   lex->token_start, end - lex->token_start); \
+			appendBinaryStrVal(&lex->inc_state->partial_token, \
+							   lex->token_start, end - lex->token_start); \
 			return JSON_INCOMPLETE; \
 		} \
 		lex->token_terminator = s; \
@@ -1694,8 +1865,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -1732,7 +1910,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -1789,19 +1967,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -1811,22 +1989,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -1861,7 +2039,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -1885,8 +2063,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -1902,6 +2080,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -2019,8 +2202,8 @@ json_lex_number(JsonLexContext *lex, const char *s,
 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
 		len >= lex->input_length)
 	{
-		appendBinaryStringInfo(&lex->inc_state->partial_token,
-							   lex->token_start, s - lex->token_start);
+		appendBinaryStrVal(&lex->inc_state->partial_token,
+						   lex->token_start, s - lex->token_start);
 		if (num_err != NULL)
 			*num_err = error;
 
@@ -2096,19 +2279,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		resetStrVal(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = createStrVal();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define json_token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	appendStrVal((lex)->errormsg, _(format), \
+				 (int) ((lex)->token_terminator - (lex)->token_start), \
+				 (lex)->token_start);
 
 	switch (error)
 	{
@@ -2127,9 +2316,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -2160,6 +2349,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			json_token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -2191,15 +2383,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef json_token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 "unexpected json parse error type: %d",
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d",
+					 (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index f72628a646..eb618419cd 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -126,13 +126,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -141,6 +146,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -158,7 +164,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -171,7 +176,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 612e120b17..0da6272336 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -139,7 +139,8 @@ json_parse_manifest_incremental_init(JsonManifestParseContext *context)
 	parse->state = JM_EXPECT_TOPLEVEL_START;
 	parse->saw_version_field = false;
 
-	makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true);
+	if (!makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true))
+		context->error_cb(context, "out of memory");
 
 	incstate->sem.semstate = parse;
 	incstate->sem.object_start = json_manifest_object_start;
@@ -240,6 +241,8 @@ json_parse_manifest(JsonManifestParseContext *context, const char *buffer,
 
 	/* Create a JSON lexing context. */
 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
+	if (!lex)
+		json_manifest_parse_failure(context, "out of memory");
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index 71a491d72d..d03a61fcd6 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -51,6 +49,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -64,6 +63,18 @@ typedef enum JsonParseErrorType
 typedef struct JsonParserStack JsonParserStack;
 typedef struct JsonIncrementalState JsonIncrementalState;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
+
 /*
  * All the fields in this structure should be treated as read-only.
  *
@@ -102,8 +113,9 @@ typedef struct JsonLexContext
 	const char *line_start;		/* where that line starts within input */
 	JsonParserStack *pstack;
 	JsonIncrementalState *inc_state;
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/test/modules/test_json_parser/Makefile b/src/test/modules/test_json_parser/Makefile
index 2dc7175b7c..f410e04cf1 100644
--- a/src/test/modules/test_json_parser/Makefile
+++ b/src/test/modules/test_json_parser/Makefile
@@ -19,6 +19,9 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
 
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
+
 all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
 
 %.o: $(top_srcdir)/$(subdir)/%.c
diff --git a/src/test/modules/test_json_parser/meson.build b/src/test/modules/test_json_parser/meson.build
index b224f3e07e..8136070233 100644
--- a/src/test/modules/test_json_parser/meson.build
+++ b/src/test/modules/test_json_parser/meson.build
@@ -13,7 +13,7 @@ endif
 
 test_json_parser_incremental = executable('test_json_parser_incremental',
   test_json_parser_incremental_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
@@ -32,7 +32,7 @@ endif
 
 test_json_parser_perf = executable('test_json_parser_perf',
   test_json_parser_perf_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
-- 
2.34.1

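The commit message at the top of this patch describes deferring frontend allocation failures so they surface as JSON_OUT_OF_MEMORY at the point of use rather than exit()ing. A minimal self-contained sketch of that pattern, using a hypothetical `StrBuf` type standing in for PQExpBufferData (not the actual libpq API) and a `demo_copy_token()` helper invented for illustration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical stand-in for PQExpBufferData: on allocation failure the
 * buffer is marked "broken" instead of aborting, mirroring libpq's style.
 * cap == 0 (with data == NULL) means the buffer is broken.
 */
typedef struct StrBuf
{
	char	   *data;
	size_t		len;
	size_t		cap;
} StrBuf;

static void
strbuf_init(StrBuf *buf)
{
	buf->data = malloc(16);
	buf->len = 0;
	buf->cap = buf->data ? 16 : 0;
	if (buf->data)
		buf->data[0] = '\0';
}

static void
strbuf_append_char(StrBuf *buf, char c)
{
	if (buf->cap == 0)
		return;					/* already broken; appends become no-ops */

	if (buf->len + 2 > buf->cap)
	{
		size_t		newcap = buf->cap * 2;
		char	   *newdata = realloc(buf->data, newcap);

		if (!newdata)
		{
			free(buf->data);
			buf->data = NULL;
			buf->cap = 0;		/* mark broken; caller checks later */
			return;
		}
		buf->data = newdata;
		buf->cap = newcap;
	}
	buf->data[buf->len++] = c;
	buf->data[buf->len] = '\0';
}

/* In-band error codes, in the spirit of JsonParseErrorType. */
#define DEMO_SUCCESS		0
#define DEMO_OUT_OF_MEMORY	1

/*
 * Point of use: append freely, then make one deferred OOM check, the way
 * json_lex_string() returns JSON_OUT_OF_MEMORY when PQExpBufferBroken()
 * reports a failed earlier allocation.
 */
static int
demo_copy_token(StrBuf *buf, const char *token, char **out)
{
	const char *p;

	for (p = token; *p; p++)
		strbuf_append_char(buf, *p);

	if (buf->cap == 0)			/* deferred OOM check */
		return DEMO_OUT_OF_MEMORY;

	*out = buf->data;
	return DEMO_SUCCESS;
}
```

The design choice this sketches is that individual appends never need a return value; a single brokenness check at the point of use is enough to convert an earlier allocation failure into an in-band error code, which keeps the hot append path branch-light.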
Attachment: v25-0006-Review-comments.patch (application/octet-stream)
From 5253f7190a5cdc7b4f2eb64cf1174334655a288e Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v25 6/7] Review comments

Fixes and tidy-ups following a review of v21. A few of the items,
listed in no particular order:

* Implement a version check for libcurl in autoconf; the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/backend/libpq/auth-oauth.c            | 22 +++----
 src/interfaces/libpq/fe-auth-oauth-curl.c | 72 ++++++++++++++++-------
 src/interfaces/libpq/fe-auth-oauth.c      |  7 ++-
 3 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 2a0d74a079..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index d9c9fc6cf9..9e4bb30095 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -683,7 +685,11 @@ parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	ctx.errbuf = &actx->errbuf;
 	ctx.fields = fields;
@@ -1334,7 +1340,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1356,9 +1367,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* Abort the transfer if the response would exceed the maximum size */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error to abort the transfer in case we ran out of memory
+	 * while accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1675,7 +1696,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1817,32 +1843,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we may have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
 /*
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f943a31cc0..61de9ac451 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -247,7 +247,12 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+		return false;
+	}
 
 	initPQExpBuffer(&ctx.errbuf);
 	sem.semstate = &ctx;
-- 
2.34.1

v25-0007-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 8b9641b122b1481a013628c6c652e9a44ade0010 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v25 7/7] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test`, the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1864 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5577 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 94187cea06..a127042b4b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -374,6 +375,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -384,7 +387,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 4e01a48cd8..d2b4eb1eb5 100644
--- a/meson.build
+++ b/meson.build
@@ -3420,6 +3420,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3581,6 +3584,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..17bd2d3d88
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1864 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    # A Bearer token may itself contain '=' padding, so split on the first
+    # '=' only.
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError(
+                "OpenID provider thread did not shut down within the timeout"
+            )
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # return value used when the test provides no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and increment the attempt count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and increment the attempt count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json_schema()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # ctypes.c_uint(-1) is UINT_MAX, so halving it yields the INT_MAX from
+    # limits.h (assuming the usual 32-bit C int).
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp = json.dumps(
+                {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+            )
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
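[Editor's note: the PG_TEST_EXTRA gate above treats the variable as a
whitespace-separated list, matching the convention used by the TAP tests.
A standalone sketch of the same check (helper name is hypothetical), showing
the substring pitfall that `split()` avoids:]

```python
import os

def python_tests_enabled(environ=None):
    """True when PG_TEST_EXTRA opts into the 'python' test group."""
    if environ is None:
        environ = os.environ
    return "python" in environ.get("PG_TEST_EXTRA", "").split()

assert python_tests_enabled({"PG_TEST_EXTRA": "kerberos python ssl"})
assert not python_tests_enabled({"PG_TEST_EXTRA": "pythonic"})  # no substring match
assert not python_tests_enabled({})  # unset means skip the suite
```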
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
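As a sanity check, this bit-packing reproduces the well-known magic numbers from the protocol documentation: version 3.0 packs to 196608, and the special SSLRequest "version" 1234.5679 packs to 80877103.

```python
# Illustrative only: the same packing as pq3.protocol(), checked against
# the magic numbers documented for the v3 protocol.
def protocol(major, minor):
    return (major << 16) | minor

print(protocol(3, 0))        # 196608, the v3.0 protocol version
print(protocol(1234, 5679))  # 80877103, the SSLRequest code
```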
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
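For comparison, the same wire image is simple enough to assemble with nothing but the standard library. A sketch, using a made-up user/database pair: the length field counts itself and the protocol word, and the key/value list ends with an extra NUL.

```python
import struct

# Key/value pairs, each NUL-terminated, with a trailing NUL ending the list.
payload = b"user\x00alice\x00database\x00postgres\x00\x00"

# len (Int32) and proto (Int32) precede the payload; len includes both.
pkt = struct.pack("!ii", 8 + len(payload), (3 << 16) | 0) + payload
```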
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
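The wire image here is also easy to build by hand (a sketch; the SASL data is made up): the mechanism name is NUL-terminated, followed by a signed Int32 length and the initial response bytes.

```python
import struct

mechanism = b"OAUTHBEARER"
data = b"initial-client-response"  # hypothetical SASL data

# name is NUL-terminated; len is a signed Int32 (-1 would mean "no data").
resp = mechanism + b"\x00" + struct.pack("!i", len(data)) + data
```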
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
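The framing itself is the standard v3 one-byte-type/four-byte-length layout. For instance, a ReadyForQuery ('Z') message can be picked apart with only the stdlib:

```python
import struct

raw = b"Z\x00\x00\x00\x05I"  # ReadyForQuery with status 'I' (idle)

msg_type = raw[0:1]
(msg_len,) = struct.unpack("!I", raw[1:5])
payload = raw[5 : 1 + msg_len]  # len counts itself but not the type byte
```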
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
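The resulting map simply sends every unprintable or non-ASCII byte to '.', which a condensed reproduction makes easy to see:

```python
# A minimal reproduction of the translation map built above.
unprintable = bytearray(
    i for i in range(128) if not chr(i).isprintable()
) + bytearray(range(128, 256))
table = bytes.maketrans(bytes(unprintable), b"." * len(unprintable))

print(b"\x01OK\xff\n".translate(table))  # control bytes, high bytes, and
                                         # newlines all become '.'
```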
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
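Usage is straightforward; a throwaway-file demonstration (the context manager is reproduced verbatim so the sketch is self-contained) shows the restore-on-exit behavior:

```python
import contextlib
import os
import pathlib
import shutil
import tempfile


@contextlib.contextmanager
def prepend_file(path, lines, *, suffix=".bak"):
    # Back up the original, write the new lines plus the old contents, and
    # restore the backup on exit.
    bak = path + suffix
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)


with tempfile.TemporaryDirectory() as d:
    conf = os.path.join(d, "pg_hba.conf")
    pathlib.Path(conf).write_text("host all all samehost trust\n")

    with prepend_file(conf, ["local all all trust\n"]):
        contents = pathlib.Path(conf).read_text()  # prepended line comes first

    restored = pathlib.Path(conf).read_text()  # original contents are back
```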
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
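The size arithmetic above works because token_urlsafe() base64url-encodes without padding, so every 3 bytes of randomness become exactly 4 characters. A quick standalone check:

```python
import secrets

# base64url maps 3 bytes to 4 characters; requesting size // 4 * 3
# bytes therefore yields a token of exactly `size` characters whenever
# size is a multiple of 4.
for size in (16, 1024, 4096):
    token = secrets.token_urlsafe(size // 4 * 3)
    assert len(token) == size
```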
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
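The wire format being built here comes from RFC 7628: a GS2 header, then \x01-separated key=value pairs, closed by a lone \x01 terminator. A standalone sketch of the framing (build_initial_response is a hypothetical helper, not part of the patch):

```python
# RFC 7628 frames the client's initial response as: GS2 header ("n,," =
# no channel binding, no authzid), then \x01-separated key=value pairs,
# terminated by an empty pair.
def build_initial_response(token):
    return b"n,," + b"\x01" + b"auth=Bearer " + token + b"\x01" + b"\x01"

msg = build_initial_response(b"abcd1234")
assert msg == b"n,,\x01auth=Bearer abcd1234\x01\x01"

# Splitting on \x01 shows the framing: header, one auth pair, and two
# empty strings produced by the final \x01\x01 terminator.
parts = msg.split(b"\x01")
assert parts == [b"n,,", b"auth=Bearer abcd1234", b"", b""]
```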
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
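For context, the failure challenge checked above is the RFC 7628 JSON error document, with discovery tied to the issuer via the well-known OpenID URL. A self-contained sketch with made-up issuer/scope values:

```python
import json

# Made-up issuer and scope, shaped like the oauth_ctx fixture above.
issuer = "https://example.com/abcd1234"
scope = "openid abcd1234"

# The server's failure "challenge" is an RFC 7628 JSON error document.
challenge = json.dumps(
    {
        "status": "invalid_token",
        "scope": scope,
        "openid-configuration": issuer + "/.well-known/openid-configuration",
    }
)

body = json.loads(challenge)
assert body["status"] == "invalid_token"
assert body["openid-configuration"] == issuer + "/.well-known/openid-configuration"

# The client then acknowledges with a single kvsep (0x01) before the
# server fails the connection with its final ErrorResponse.
dummy_response = b"\x01"
```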
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's expected behavior.
+    Any modified settings are reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
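The field convention matched here is the standard ErrorResponse layout: each field is a one-byte type code ('C' = SQLSTATE, 'M' = message, 'D' = detail) followed by its value, and the matcher strips the type byte. A standalone sketch with made-up field contents:

```python
# Hypothetical ErrorResponse fields, each a one-byte type code followed
# by its value ('C' = SQLSTATE code, 'M' = message, 'S' = severity).
fields = [b"C28000", b"Mbearer authentication failed", b"SFATAL"]

def getfield(fields, type_):
    # Return the single field of the requested type, minus the type byte.
    prefix = type_.encode("ascii")
    matches = [f for f in fields if f.startswith(prefix)]
    assert len(matches) == 1
    return matches[0][1:]

assert getfield(fields, "C") == b"28000"
assert b"authentication failed" in getfield(fields, "M")
```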
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#113 Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#112)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 05.08.24 19:53, Jacob Champion wrote:
> On Fri, Aug 2, 2024 at 11:48 AM Peter Eisentraut <peter@eisentraut.org> wrote:
>> Yes, I think with an adjusted comment and commit message, the actual
>> change makes sense.
>
> Done in v25.
>
> ...along with a bunch of other stuff:

I have committed 0001, and I plan to backpatch it once the release
freeze lifts.

I'll work on 0002 next.

#114Peter Eisentraut
peter@eisentraut.org
In reply to: Peter Eisentraut (#113)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 07.08.24 09:34, Peter Eisentraut wrote:

On 05.08.24 19:53, Jacob Champion wrote:

On Fri, Aug 2, 2024 at 11:48 AM Peter Eisentraut
<peter@eisentraut.org> wrote:

Yes, I think with an adjusted comment and commit message, the actual
change makes sense.

Done in v25.

...along with a bunch of other stuff:

I have committed 0001, and I plan to backpatch it once the release
freeze lifts.

I'll work on 0002 next.

I have committed 0002 now.

#115Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#114)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, Aug 11, 2024 at 11:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:

I have committed 0002 now.

Thanks Peter! Rebased over both in v26.

--Jacob

Attachments:

v26-0001-common-jsonapi-support-libpq-as-a-client.patch (application/octet-stream)
From b3e925b9a997a9a5475e441503b9859347e95ca5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v26 1/5] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For frontend code, use PQExpBuffer instead of StringInfo. This requires
us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
as needed rather than exit()ing.

Co-authored-by: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/bin/pg_combinebackup/Makefile             |   4 +-
 src/bin/pg_combinebackup/meson.build          |   2 +-
 src/bin/pg_verifybackup/Makefile              |   2 +-
 src/common/Makefile                           |   2 +-
 src/common/jsonapi.c                          | 448 +++++++++++++-----
 src/common/meson.build                        |   8 +-
 src/common/parse_manifest.c                   |   5 +-
 src/include/common/jsonapi.h                  |  20 +-
 src/test/modules/test_json_parser/Makefile    |   3 +
 src/test/modules/test_json_parser/meson.build |   4 +-
 10 files changed, 361 insertions(+), 137 deletions(-)

diff --git a/src/bin/pg_combinebackup/Makefile b/src/bin/pg_combinebackup/Makefile
index c3729755ba..2f7dc1ed87 100644
--- a/src/bin/pg_combinebackup/Makefile
+++ b/src/bin/pg_combinebackup/Makefile
@@ -18,6 +18,8 @@ include $(top_builddir)/src/Makefile.global
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
 LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
@@ -30,7 +32,7 @@ OBJS = \
 
 all: pg_combinebackup
 
-pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
 
 install: all installdirs
diff --git a/src/bin/pg_combinebackup/meson.build b/src/bin/pg_combinebackup/meson.build
index d142608e94..c75205a652 100644
--- a/src/bin/pg_combinebackup/meson.build
+++ b/src/bin/pg_combinebackup/meson.build
@@ -17,7 +17,7 @@ endif
 
 pg_combinebackup = executable('pg_combinebackup',
   pg_combinebackup_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args,
 )
 bin_targets += pg_combinebackup
diff --git a/src/bin/pg_verifybackup/Makefile b/src/bin/pg_verifybackup/Makefile
index 7c045f142e..3372fada01 100644
--- a/src/bin/pg_verifybackup/Makefile
+++ b/src/bin/pg_verifybackup/Makefile
@@ -17,7 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 # We need libpq only because fe_utils does.
-LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
diff --git a/src/common/Makefile b/src/common/Makefile
index 89ef61c52a..9856fdeccc 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -40,7 +40,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
 override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
-override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
 LIBS += $(PTHREAD_LIBS)
 
 OBJS_COMMON = \
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 2ffcaaa6fd..4d1677b554 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,66 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef FRONTEND
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * In the backend, use palloc/pfree along with StringInfo.  In the frontend,
+ * use malloc and PQExpBuffer, and report failures via JSON_OUT_OF_MEMORY.
+ */
+#ifdef FRONTEND
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define REALLOC realloc
+#define FREE(s) free(s)
+
+#define appendStrVal			appendPQExpBuffer
+#define appendBinaryStrVal		appendBinaryPQExpBuffer
+#define appendStrValChar		appendPQExpBufferChar
+/* XXX should we add a macro version to PQExpBuffer? */
+#define appendStrValCharMacro	appendPQExpBufferChar
+#define createStrVal			createPQExpBuffer
+#define initStrVal				initPQExpBuffer
+#define resetStrVal				resetPQExpBuffer
+#define termStrVal				termPQExpBuffer
+#define destroyStrVal			destroyPQExpBuffer
+
+#else							/* !FRONTEND */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define REALLOC repalloc
+
+/*
+ * Backend pfree() doesn't handle NULL pointers the way frontend free() does;
+ * smooth that over to reduce mental gymnastics. The macro argument is
+ * evaluated only once, in case it has side effects.
+ */
+#define FREE(s) do {	\
+	void *__v = (s);	\
+	if (__v)			\
+		pfree(__v);		\
+} while (0)
+
+#define appendStrVal			appendStringInfo
+#define appendBinaryStrVal		appendBinaryStringInfo
+#define appendStrValChar		appendStringInfoChar
+#define appendStrValCharMacro	appendStringInfoCharMacro
+#define createStrVal			makeStringInfo
+#define initStrVal				initStringInfo
+#define resetStrVal				resetStringInfo
+#define termStrVal(s)			pfree((s)->data)
+#define destroyStrVal			destroyStringInfo
+
+#endif
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -103,7 +159,7 @@ struct JsonIncrementalState
 {
 	bool		is_last_chunk;
 	bool		partial_completed;
-	StringInfoData partial_token;
+	StrValType	partial_token;
 };
 
 /*
@@ -219,6 +275,7 @@ static JsonParseErrorType parse_object(JsonLexContext *lex, const JsonSemAction
 static JsonParseErrorType parse_array_element(JsonLexContext *lex, const JsonSemAction *sem);
 static JsonParseErrorType parse_array(JsonLexContext *lex, const JsonSemAction *sem);
 static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
+static bool allocate_incremental_state(JsonLexContext *lex);
 
 /* the null action object used for pure validation */
 const JsonSemAction nullSemAction =
@@ -273,15 +330,11 @@ IsValidJsonNumber(const char *str, size_t len)
 {
 	bool		numeric_error;
 	size_t		total_len;
-	JsonLexContext dummy_lex;
+	JsonLexContext dummy_lex = {0};
 
 	if (len <= 0)
 		return false;
 
-	dummy_lex.incremental = false;
-	dummy_lex.inc_state = NULL;
-	dummy_lex.pstack = NULL;
-
 	/*
 	 * json_lex_number expects a leading  '-' to have been eaten already.
 	 *
@@ -321,6 +374,9 @@ IsValidJsonNumber(const char *str, size_t len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
@@ -328,7 +384,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -341,13 +399,70 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
 
 	return lex;
 }
 
+/*
+ * Allocates the internal bookkeeping structures for incremental parsing. This
+ * can only fail in-band with FRONTEND code.
+ */
+#define JS_STACK_CHUNK_SIZE 64
+#define JS_MAX_PROD_LEN 10		/* more than we need */
+#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
+								 * number */
+static bool
+allocate_incremental_state(JsonLexContext *lex)
+{
+	void	   *pstack,
+			   *prediction,
+			   *fnames,
+			   *fnull;
+
+	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
+	pstack = ALLOC(sizeof(JsonParserStack));
+	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
+	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
+	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+#ifdef FRONTEND
+	if (!lex->inc_state
+		|| !pstack
+		|| !prediction
+		|| !fnames
+		|| !fnull)
+	{
+		FREE(lex->inc_state);
+		FREE(pstack);
+		FREE(prediction);
+		FREE(fnames);
+		FREE(fnull);
+
+		return false;
+	}
+#endif
+
+	initStrVal(&(lex->inc_state->partial_token));
+	lex->pstack = pstack;
+	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
+	lex->pstack->prediction = prediction;
+	lex->pstack->pred_index = 0;
+	lex->pstack->fnames = fnames;
+	lex->pstack->fnull = fnull;
+
+	lex->incremental = true;
+	return true;
+}
+
 
 /*
  * makeJsonLexContextIncremental
@@ -357,19 +472,20 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
  * we don't need the input, that will be handed in bit by bit to the
  * parse routine. We also need an accumulator for partial tokens in case
  * the boundary between chunks happens to fall in the middle of a token.
+ *
+ * In frontend code this can return NULL on OOM, so callers must inspect the
+ * returned pointer.
  */
-#define JS_STACK_CHUNK_SIZE 64
-#define JS_MAX_PROD_LEN 10		/* more than we need */
-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
-								 * number */
-
 JsonLexContext *
 makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 							  bool need_escapes)
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return NULL;
+
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -377,42 +493,60 @@ makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 
 	lex->line_number = 1;
 	lex->input_encoding = encoding;
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+	if (!allocate_incremental_state(lex))
+	{
+		if (lex->flags & JSONLEX_FREE_STRUCT)
+			FREE(lex);
+		return NULL;
+	}
+
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in FRONTEND code. We defer error handling to
+		 * time of use (json_lex_string()) since we might not need to parse
+		 * any strings anyway.
+		 */
+		lex->strval = createStrVal();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+
 	return lex;
 }
 
-static inline void
+static inline bool
 inc_lex_level(JsonLexContext *lex)
 {
-	lex->lex_level += 1;
-
-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
+	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
 	{
-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
-		lex->pstack->prediction =
-			repalloc(lex->pstack->prediction,
-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
-		if (lex->pstack->fnames)
-			lex->pstack->fnames =
-				repalloc(lex->pstack->fnames,
-						 lex->pstack->stack_size * sizeof(char *));
-		if (lex->pstack->fnull)
-			lex->pstack->fnull =
-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
+		size_t		new_stack_size;
+		char	   *new_prediction;
+		char	  **new_fnames;
+		bool	   *new_fnull;
+
+		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
+
+		new_prediction = REALLOC(lex->pstack->prediction,
+								 new_stack_size * JS_MAX_PROD_LEN);
+		new_fnames = REALLOC(lex->pstack->fnames,
+							 new_stack_size * sizeof(char *));
+		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
+
+#ifdef FRONTEND
+		if (!new_prediction || !new_fnames || !new_fnull)
+			return false;
+#endif
+
+		lex->pstack->stack_size = new_stack_size;
+		lex->pstack->prediction = new_prediction;
+		lex->pstack->fnames = new_fnames;
+		lex->pstack->fnull = new_fnull;
 	}
+
+	lex->lex_level += 1;
+	return true;
 }
 
 static inline void
@@ -482,24 +616,31 @@ get_fnull(JsonLexContext *lex)
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
+	if (!lex)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		destroyStrVal(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		destroyStrVal(lex->errormsg);
 
 	if (lex->incremental)
 	{
-		pfree(lex->inc_state->partial_token.data);
-		pfree(lex->inc_state);
-		pfree(lex->pstack->prediction);
-		pfree(lex->pstack->fnames);
-		pfree(lex->pstack->fnull);
-		pfree(lex->pstack);
+		termStrVal(&lex->inc_state->partial_token);
+		FREE(lex->inc_state);
+		FREE(lex->pstack->prediction);
+		FREE(lex->pstack->fnames);
+		FREE(lex->pstack->fnull);
+		FREE(lex->pstack);
 	}
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -522,22 +663,13 @@ JsonParseErrorType
 pg_parse_json(JsonLexContext *lex, const JsonSemAction *sem)
 {
 #ifdef FORCE_JSON_PSTACK
-
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-
 	/*
 	 * We don't need partial token processing, there is only one chunk. But we
 	 * still need to init the partial token string so that freeJsonLexContext
-	 * works.
+	 * works, so perform the full incremental initialization.
 	 */
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+	if (!allocate_incremental_state(lex))
+		return JSON_OUT_OF_MEMORY;
 
 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
 
@@ -597,7 +729,7 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -737,7 +869,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_OEND:
@@ -766,7 +900,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_AEND:
@@ -793,9 +929,11 @@ pg_parse_json_incremental(JsonLexContext *lex,
 						json_ofield_action ostart = sem->object_field_start;
 						json_ofield_action oend = sem->object_field_end;
 
-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
+						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
 						{
-							fname = pstrdup(lex->strval->data);
+							fname = STRDUP(lex->strval->data);
+							if (fname == NULL)
+								return JSON_OUT_OF_MEMORY;
 						}
 						set_fname(lex, fname);
 					}
@@ -883,14 +1021,21 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							 */
 							if (tok == JSON_TOKEN_STRING)
 							{
-								if (lex->strval != NULL)
-									pstack->scalar_val = pstrdup(lex->strval->data);
+								if (lex->parse_strval)
+								{
+									pstack->scalar_val = STRDUP(lex->strval->data);
+									if (pstack->scalar_val == NULL)
+										return JSON_OUT_OF_MEMORY;
+								}
 							}
 							else
 							{
 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
 
-								pstack->scalar_val = palloc(tlen + 1);
+								pstack->scalar_val = ALLOC(tlen + 1);
+								if (pstack->scalar_val == NULL)
+									return JSON_OUT_OF_MEMORY;
+
 								memcpy(pstack->scalar_val, lex->token_start, tlen);
 								pstack->scalar_val[tlen] = '\0';
 							}
@@ -1025,14 +1170,21 @@ parse_scalar(JsonLexContext *lex, const JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -1066,8 +1218,12 @@ parse_object_field(JsonLexContext *lex, const JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -1123,6 +1279,11 @@ parse_object(JsonLexContext *lex, const JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -1312,15 +1473,24 @@ json_lex(JsonLexContext *lex)
 	const char *const end = lex->input + lex->input_length;
 	JsonParseErrorType result;
 
-	if (lex->incremental && lex->inc_state->partial_completed)
+	if (lex->incremental)
 	{
-		/*
-		 * We just lexed a completed partial token on the last call, so reset
-		 * everything
-		 */
-		resetStringInfo(&(lex->inc_state->partial_token));
-		lex->token_terminator = lex->input;
-		lex->inc_state->partial_completed = false;
+		if (lex->inc_state->partial_completed)
+		{
+			/*
+			 * We just lexed a completed partial token on the last call, so
+			 * reset everything
+			 */
+			resetStrVal(&(lex->inc_state->partial_token));
+			lex->token_terminator = lex->input;
+			lex->inc_state->partial_completed = false;
+		}
+
+#ifdef FRONTEND
+		/* Make sure our partial token buffer is valid before using it below. */
+		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
+			return JSON_OUT_OF_MEMORY;
+#endif
 	}
 
 	s = lex->token_terminator;
@@ -1331,7 +1501,7 @@ json_lex(JsonLexContext *lex)
 		 * We have a partial token. Extend it and if completed lex it by a
 		 * recursive call
 		 */
-		StringInfo	ptok = &(lex->inc_state->partial_token);
+		StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
 		JsonLexContext dummy_lex;
@@ -1358,7 +1528,7 @@ json_lex(JsonLexContext *lex)
 			{
 				char		c = lex->input[i];
 
-				appendStringInfoCharMacro(ptok, c);
+				appendStrValCharMacro(ptok, c);
 				added++;
 				if (c == '"' && escapes % 2 == 0)
 				{
@@ -1403,7 +1573,7 @@ json_lex(JsonLexContext *lex)
 						case '8':
 						case '9':
 							{
-								appendStringInfoCharMacro(ptok, cc);
+								appendStrValCharMacro(ptok, cc);
 								added++;
 							}
 							break;
@@ -1424,7 +1594,7 @@ json_lex(JsonLexContext *lex)
 
 				if (JSON_ALPHANUMERIC_CHAR(cc))
 				{
-					appendStringInfoCharMacro(ptok, cc);
+					appendStrValCharMacro(ptok, cc);
 					added++;
 				}
 				else
@@ -1467,6 +1637,7 @@ json_lex(JsonLexContext *lex)
 		dummy_lex.input_length = ptok->len;
 		dummy_lex.input_encoding = lex->input_encoding;
 		dummy_lex.incremental = false;
+		dummy_lex.parse_strval = lex->parse_strval;
 		dummy_lex.strval = lex->strval;
 
 		partial_result = json_lex(&dummy_lex);
@@ -1622,8 +1793,8 @@ json_lex(JsonLexContext *lex)
 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
 						p == lex->input + lex->input_length)
 					{
-						appendBinaryStringInfo(
-											   &(lex->inc_state->partial_token), s, end - s);
+						appendBinaryStrVal(
+										   &(lex->inc_state->partial_token), s, end - s);
 						return JSON_INCOMPLETE;
 					}
 
@@ -1680,8 +1851,8 @@ json_lex_string(JsonLexContext *lex)
 	do { \
 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
 		{ \
-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
-								   lex->token_start, end - lex->token_start); \
+			appendBinaryStrVal(&lex->inc_state->partial_token, \
+							   lex->token_start, end - lex->token_start); \
 			return JSON_INCOMPLETE; \
 		} \
 		lex->token_terminator = s; \
@@ -1694,8 +1865,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef FRONTEND
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		resetStrVal(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -1732,7 +1910,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -1789,19 +1967,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						appendPQExpBufferChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -1811,22 +1989,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						appendStrValChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						appendStrValChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						appendStrValChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						appendStrValChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						appendStrValChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						appendStrValChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -1861,7 +2039,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to appendBinaryStrVal.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -1885,8 +2063,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				appendBinaryStrVal(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -1902,6 +2080,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef FRONTEND
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -2019,8 +2202,8 @@ json_lex_number(JsonLexContext *lex, const char *s,
 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
 		len >= lex->input_length)
 	{
-		appendBinaryStringInfo(&lex->inc_state->partial_token,
-							   lex->token_start, s - lex->token_start);
+		appendBinaryStrVal(&lex->inc_state->partial_token,
+						   lex->token_start, s - lex->token_start);
 		if (num_err != NULL)
 			*num_err = error;
 
@@ -2096,19 +2279,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		resetStrVal(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = createStrVal();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define json_token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	appendStrVal((lex)->errormsg, _(format), \
+				 (int) ((lex)->token_terminator - (lex)->token_start), \
+				 (lex)->token_start);
 
 	switch (error)
 	{
@@ -2127,9 +2316,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			appendStrVal(lex->errormsg,
+						 _("Character with value 0x%02x must be escaped."),
+						 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -2160,6 +2349,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			json_token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -2191,15 +2383,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef json_token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 "unexpected json parse error type: %d",
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		appendStrVal(lex->errormsg,
+					 "unexpected json parse error type: %d",
+					 (int) error);
+	}
+
+#ifdef FRONTEND
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index 1a564e1dce..cc01fe3543 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -126,13 +126,18 @@ common_sources_frontend_static += files(
 # least cryptohash_openssl.c, hmac_openssl.c depend on it.
 # controldata_utils.c depends on wait_event_types_h. That's arguably a
 # layering violation, but ...
+#
+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
+# appropriately. This seems completely broken.
 pgcommon = {}
 pgcommon_variants = {
   '_srv': internal_lib_args + {
+    'include_directories': include_directories('.'),
     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
     'dependencies': [backend_common_code],
   },
   '': default_lib_args + {
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_static,
     'dependencies': [frontend_common_code],
     # Files in libpgcommon.a should use/export the "xxx_private" versions
@@ -141,6 +146,7 @@ pgcommon_variants = {
   },
   '_shlib': default_lib_args + {
     'pic': true,
+    'include_directories': include_directories('../interfaces/libpq', '.'),
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
   },
@@ -158,7 +164,6 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'sources': sources,
         'c_args': c_args,
@@ -171,7 +176,6 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
         'dependencies': opts['dependencies'] + [ssl],
       }
diff --git a/src/common/parse_manifest.c b/src/common/parse_manifest.c
index 612e120b17..0da6272336 100644
--- a/src/common/parse_manifest.c
+++ b/src/common/parse_manifest.c
@@ -139,7 +139,8 @@ json_parse_manifest_incremental_init(JsonManifestParseContext *context)
 	parse->state = JM_EXPECT_TOPLEVEL_START;
 	parse->saw_version_field = false;
 
-	makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true);
+	if (!makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true))
+		context->error_cb(context, "out of memory");
 
 	incstate->sem.semstate = parse;
 	incstate->sem.object_start = json_manifest_object_start;
@@ -240,6 +241,8 @@ json_parse_manifest(JsonManifestParseContext *context, const char *buffer,
 
 	/* Create a JSON lexing context. */
 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
+	if (!lex)
+		json_manifest_parse_failure(context, "out of memory");
 
 	/* Set up semantic actions. */
 	sem.semstate = &parse;
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index a995fdbe08..7b73f0b021 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -51,6 +49,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -64,6 +63,18 @@ typedef enum JsonParseErrorType
 typedef struct JsonParserStack JsonParserStack;
 typedef struct JsonIncrementalState JsonIncrementalState;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef FRONTEND
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
+
 /*
  * All the fields in this structure should be treated as read-only.
  *
@@ -102,8 +113,9 @@ typedef struct JsonLexContext
 	const char *line_start;		/* where that line starts within input */
 	JsonParserStack *pstack;
 	JsonIncrementalState *inc_state;
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/test/modules/test_json_parser/Makefile b/src/test/modules/test_json_parser/Makefile
index 2dc7175b7c..f410e04cf1 100644
--- a/src/test/modules/test_json_parser/Makefile
+++ b/src/test/modules/test_json_parser/Makefile
@@ -19,6 +19,9 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
 
+# TODO: fix this properly
+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
+
 all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
 
 %.o: $(top_srcdir)/$(subdir)/%.c
diff --git a/src/test/modules/test_json_parser/meson.build b/src/test/modules/test_json_parser/meson.build
index b224f3e07e..8136070233 100644
--- a/src/test/modules/test_json_parser/meson.build
+++ b/src/test/modules/test_json_parser/meson.build
@@ -13,7 +13,7 @@ endif
 
 test_json_parser_incremental = executable('test_json_parser_incremental',
   test_json_parser_incremental_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
@@ -32,7 +32,7 @@ endif
 
 test_json_parser_perf = executable('test_json_parser_perf',
   test_json_parser_perf_sources,
-  dependencies: [frontend_code],
+  dependencies: [frontend_code, libpq],
   kwargs: default_bin_args + {
     'install': false,
   },
-- 
2.34.1

v26-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From ebaaec0ecde2ba32066eef3341a08eff59957815 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v26 3/5] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, set the authorized member to true; this is
      used in combination with the HBA option trust_validator_authz=1
      (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
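
Putting those options together, an entry in pg_hba.conf might look like
the following (the issuer URL and scope values are illustrative, and the
quoting follows the usual HBA option conventions):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://accounts.google.com" scope="openid email"
```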

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |   9 +
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/common/Makefile                           |   2 +-
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   | 187 +++++
 .../modules/oauth_validator/t/oauth_server.py | 270 +++++++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 32 files changed, 1555 insertions(+), 47 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 1ce6c443a8..94187cea06 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..3c7884baf9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,9 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+
+ <para>
+  TODO
+ </para>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ec9f90e283..bfb73991e7 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -263,6 +263,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..2a0d74a079
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
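
For reference, the allowed-character rule that validate_token_format() enforces comes from the b64token grammar in RFC 6750 section 2.1. The following standalone Python sketch (illustrative only, not part of the patch) mirrors that check:

```python
import re

# b64token per RFC 6750 section 2.1: one or more of ALPHA / DIGIT /
# "-" / "." / "_" / "~" / "+" / "/", followed by any number of "=".
B64TOKEN = re.compile(r'^[A-Za-z0-9._~+/-]+=*$')

def is_valid_b64token(token: str) -> bool:
    """Mirror of the character-set check in validate_token_format()."""
    return B64TOKEN.fullmatch(token) is not None
```

As in the C code, an empty token is rejected, and '=' is only permitted as trailing padding.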
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 75d588e36a..2245ae24a8 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 636780673b..b613fddf9e 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4785,6 +4786,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth 2.0 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/common/Makefile b/src/common/Makefile
index 9856fdeccc..50218dd2db 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -41,7 +41,7 @@ override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
 override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
 
 override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
-LIBS += $(PTHREAD_LIBS)
+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
 
 OBJS_COMMON = \
 	archive.o \
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
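
To summarize the contract between these callbacks and the backend, here is a hedged Python sketch (function and parameter names are illustrative, not from the patch) of the decision flow that validate() in auth-oauth.c applies to a ValidatorModuleResult:

```python
def check_login(authorized, authn_id, skip_usermap, usermap_ok):
    """Sketch of the backend's post-validator decision flow."""
    if not authorized:
        return False       # validator rejected the bearer token
    if skip_usermap:
        return True        # trust_validator_authz=1: the validator is the authority
    if not authn_id:
        return False       # the usermap path requires an authenticated identity
    return usermap_ok      # pg_ident must map authn_id to the requested role
```

This captures the three use cases from the cover letter: authorization-only validators rely on skip_usermap, authentication-only validators rely on the usermap, and combined validators can use either.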
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 435abee56a..d9c9fc6cf9 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -143,7 +143,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1864,6 +1864,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1892,13 +1895,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * acceptable errors; anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..16ee8acd8f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,187 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->stop;
+
+done_testing();
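
The connstr() helper above smuggles test instructions to the mock server by Base64-encoding a JSON object into the oauth_client_id field, which oauth_server.py decodes in do_POST(). A minimal Python sketch of that round trip (helper names here are illustrative):

```python
import base64
import json

def encode_test_params(**params):
    """Pack test instructions into an oauth_client_id, as connstr() does in Perl."""
    return base64.b64encode(json.dumps(params).encode("ascii")).decode("ascii")

def decode_test_params(client_id):
    """Unpack them again, as the test server's do_POST() does."""
    return json.loads(base64.b64decode(client_id))
```

Since the device authorization flow treats client_id as an opaque string, this lets each test case parameterize the server without any out-of-band channel.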
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..b17198302b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,270 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "content_type" in self._test_params:
+            return self._test_params["content_type"]
+
+        return "application/json"
+
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        if self._should_modify() and "interval" in self._test_params:
+            return self._test_params["interval"]
+
+        return 0
+
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "retry_code" in self._test_params:
+            return self._test_params["retry_code"]
+
+        return "authorization_pending"
+
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "uri_spelling" in self._test_params:
+            return self._test_params["uri_spelling"]
+
+        return "verification_uri"
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type())
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling(): uri,
+            "expires-in": 5,
+        }
+
+        interval = self._interval()
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code()}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
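For context, the client-side polling loop that this mock server is designed to exercise looks roughly like the sketch below, following RFC 8628 Sections 3.4 and 3.5 (retry on "authorization_pending", back off further on "slow_down"). The stubbed endpoint and helper names here are illustrative only, not part of the patch:

```python
def poll_for_token(request_token, interval, sleep, max_attempts=10):
    """Poll a device-flow token endpoint until it stops returning
    authorization_pending.

    request_token() returns a dict: either {"access_token": ...} or
    {"error": "authorization_pending"} (or "slow_down", which per RFC 8628
    requires increasing the polling interval by 5 seconds).
    """
    for _ in range(max_attempts):
        resp = request_token()
        if "access_token" in resp:
            return resp
        if resp.get("error") == "slow_down":
            interval += 5
        elif resp.get("error") != "authorization_pending":
            raise RuntimeError(f"token endpoint error: {resp.get('error')}")
        sleep(interval)  # the client must wait at least `interval` seconds
    raise TimeoutError("device authorization did not complete")

# Stub endpoint: pending twice, then success (mirrors the mock's `retries`).
responses = iter([
    {"error": "authorization_pending"},
    {"error": "authorization_pending"},
    {"access_token": "9243959234", "token_type": "bearer"},
])
waits = []
tok = poll_for_token(lambda: next(responses), interval=5, sleep=waits.append)
print(tok["access_token"], waits)
```

The recorded waits are what the mock's token() method asserts on: each retry must be separated by at least the advertised interval.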
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..7b4dc9c494
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index fe6ebf10f7..d6f9c4cd8b 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2397,6 +2397,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2440,7 +2445,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..abdff5a3c3
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
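The readiness handshake used by run() above is simple: the Python child prints its ephemeral port on the first line of stdout, and the parent blocks until it can read and validate that line. A minimal Python rendition of the same pattern follows; the inline child command is a stand-in for t/oauth_server.py:

```python
import subprocess
import sys

def start_server(cmd):
    """Spawn a child that advertises its port on the first line of stdout."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    line = proc.stdout.readline().strip()
    if not line.isdigit():
        proc.kill()
        raise RuntimeError(f"server did not advertise a valid port: {line!r}")
    return proc, int(line)

# Stand-in child: binds an ephemeral socket and prints the chosen port.
child = [sys.executable, "-c",
         "import socket; s = socket.socket(); s.bind(('127.0.0.1', 0)); "
         "print(s.getsockname()[1], flush=True); s.close()"]
proc, port = start_server(child)
proc.wait()
print(port)
```

As in the Perl module, the printed port doubles as the "ready to accept requests" signal, so no separate synchronization is needed.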
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d518fe91e2..ff537441dd 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1718,6 +1718,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3065,6 +3066,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3660,6 +3663,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
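For reference, the contract that the test validator above implements trivially (return an authorization decision plus an authn_id) is what a production validator would fill in from real token introspection, covering the authn-only, authz-only, and combined use cases described earlier. A toy Python rendition, where the claim names `sub` and `allowed_roles` are assumptions for illustration rather than part of any API:

```python
def validate(token_claims, role):
    """Toy validation decision mirroring ValidatorModuleResult.

    token_claims is assumed to be the already-verified payload of the
    bearer token, containing a subject and a list of allowed roles.
    """
    authorized = role in token_claims.get("allowed_roles", [])
    return {"authorized": authorized, "authn_id": token_claims.get("sub")}

res = validate({"sub": "alice@example.org", "allowed_roles": ["reader"]},
               "reader")
print(res)
```

A validator that only authenticates would always authorize and let pg_ident map the returned identity; one that only authorizes could return the decision with no identity, yielding a pseudonymous connection.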

Attachment: v26-0004-Review-comments.patch
From 78ce297b9e62a180fcd5486e70122837f59b44b8 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v26 4/5] Review comments

Fixes and tidy-ups following a review of v21. A few of the items
(listed in no particular order):

* Implement a version check for libcurl in autoconf, the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/backend/libpq/auth-oauth.c            | 22 +++----
 src/interfaces/libpq/fe-auth-oauth-curl.c | 72 ++++++++++++++++-------
 src/interfaces/libpq/fe-auth-oauth.c      |  7 ++-
 3 files changed, 68 insertions(+), 33 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 2a0d74a079..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index d9c9fc6cf9..9e4bb30095 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -683,7 +685,11 @@ parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	ctx.errbuf = &actx->errbuf;
 	ctx.fields = fields;
@@ -1334,7 +1340,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1356,9 +1367,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* Abort the transfer if the response would exceed the maximum size */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error, aborting the transfer, if we ran out of memory
+	 * while accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1675,7 +1696,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1817,32 +1843,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we may have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
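The control flow the rewritten finish_token_request() now follows can be summarized as a decision table, per RFC 6749 Section 5: 200 carries a token document, 400 and 401 carry an error document, and anything else (including the 403s reported in the wild) is a hard failure. A Python sketch of that table, not the libpq code:

```python
def classify_token_response(status):
    """Map an HTTP status from the token endpoint to a handling decision."""
    if status == 200:
        return "parse_access_token"   # success document (RFC 6749 Section 5.1)
    if status in (400, 401):
        return "parse_token_error"    # error document (RFC 6749 Section 5.2)
    return "unexpected_response"      # e.g. 403: out of spec, hard failure

print([classify_token_response(s) for s in (200, 400, 401, 403, 500)])
```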
 
 /*
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f943a31cc0..61de9ac451 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -247,7 +247,12 @@ handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
 		return false;
 	}
 
-	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	if (!makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true))
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+		return false;
+	}
 
 	initPQExpBuffer(&ctx.errbuf);
 	sem.semstate = &ctx;
-- 
2.34.1
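The append_data() change in this patch caps response sizes by returning a short byte count from the CURLOPT_WRITEFUNCTION callback, which libcurl treats as a request to abort the transfer. The same accumulate-with-cap behavior, sketched in Python (CappedBuffer is a hypothetical helper, not libpq code):

```python
MAX_OAUTH_RESPONSE_SIZE = 1024 * 1024

class CappedBuffer:
    """Accumulates response chunks, refusing growth past a fixed cap."""
    def __init__(self, cap=MAX_OAUTH_RESPONSE_SIZE):
        self.cap = cap
        self.data = bytearray()

    def write(self, chunk):
        # Mirrors the curl write callback: returning a count smaller than
        # len(chunk) signals that the transfer should be aborted.
        if len(self.data) + len(chunk) > self.cap:
            return 0
        self.data.extend(chunk)
        return len(chunk)

buf = CappedBuffer(cap=10)
print(buf.write(b"12345"), buf.write(b"67890"), buf.write(b"!"))
```

The same short-count convention also covers the broken-buffer (out of memory) case in the C code.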

Attachment: v26-0005-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch
From 6f4709574d1e8f42592bc2a2814b5e4d502c18e3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v26 5/5] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1864 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5577 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 94187cea06..a127042b4b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -374,6 +375,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against the 32-bit libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -384,7 +387,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index a40c4dd7f6..753f6f8928 100644
--- a/meson.build
+++ b/meson.build
@@ -3419,6 +3419,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3580,6 +3583,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all I need
+to do is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
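(For readers skimming the conftest above: the core pattern is a listening socket bound to an ephemeral port, a client running on its own thread, and a join-with-timeout check at teardown. A minimal stdlib-only sketch of that pattern, with plain sockets standing in for pq3/psycopg2 and all names illustrative:)

```python
import socket
import threading

BLOCKING_TIMEOUT = 2  # seconds to wait for blocking calls, as in the fixture

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("127.0.0.1", 0))  # port 0 = kernel-assigned ephemeral port
    server.listen(1)
    server.settimeout(BLOCKING_TIMEOUT)
    _, port = server.getsockname()

    def run_client():
        # The server is already listening, so it's safe to connect here; the
        # connect blocks until the server calls accept().
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(b"hello")

    client = threading.Thread(target=run_client)
    client.start()

    sock, _ = server.accept()
    with sock:
        data = sock.recv(5)

    # The "check_completed" step: join with a timeout and fail loudly if the
    # client thread is still running, rather than hanging the test forever.
    client.join(BLOCKING_TIMEOUT)
    assert not client.is_alive()

assert data == b"hello"
```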
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
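(As a sanity check on the hand-rolled SCRAM helpers above: RFC 5802 notes that Hi() is essentially PBKDF2 with HMAC as the PRF, so for SHA-256 and a single output block it must agree with hashlib.pbkdf2_hmac. A standalone sketch, using the same salt and iteration count as test_scram:)

```python
import hashlib
import hmac

def hmac_256(key, data):
    """HMAC-SHA-256, matching the hmac_256 helper in the tests."""
    return hmac.new(key, data, hashlib.sha256).digest()

def h_i(data, salt, i):
    """The Hi(str, salt, i) function from RFC 5802, Section 2.2."""
    assert i > 0
    # U1 = HMAC(str, salt + INT(1)); Ui = HMAC(str, U(i-1)); Hi = U1 XOR ... XOR Ui
    u = hmac_256(data, salt + b"\x00\x00\x00\x01")
    acc = u
    for _ in range(i - 1):
        u = hmac_256(data, u)
        acc = bytes(a ^ b for a, b in zip(acc, u))
    return acc

# Hi() is PBKDF2-HMAC-SHA-256 restricted to one output block (the default
# dklen of pbkdf2_hmac is the digest size, 32 bytes), so hashlib agrees:
salted = h_i(b"secret", b"12345", 2)
assert salted == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```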
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..17bd2d3d88
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1864 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert len(kvpairs) == 4
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+
+    key, value = kvpairs[1].split(b"=", 1)  # maxsplit=1: the value may contain "="
+    assert key == b"auth"
+
+    return value
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
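+# Illustrative example: alt_patterns(r"foo", r"ba[rz]") yields the pattern
+# "(foo)|(ba[rz])", so a pytest.raises(..., match=...) check will accept
+# whichever alternative the implementation reports:
+#
+#     assert re.fullmatch(alt_patterns(r"foo", r"ba[rz]"), "baz")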
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema tests below
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
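As an aside, the error "challenge" exchanged in the test above follows RFC 7628: the server returns a JSON status document in a SASLContinue, and the client must answer with a single %x01 (^A) byte before the exchange fails. A minimal sketch of both sides' framing (stdlib only; the status value is illustrative):

```python
import json

# Server side: the SASLContinue "challenge" body is a JSON status document.
challenge = json.dumps({"status": "invalid_request"}).encode("utf-8")

# Client side: RFC 7628 requires a dummy response of exactly one ^A byte.
dummy = b"\x01"

assert json.loads(challenge)["status"] == "invalid_request"
assert dummy == b"\x01"
```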
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (equivalent to INT_MAX in limits.h, assuming a 32-bit C int)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
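The overflow scenario above comes from the device authorization grant (RFC 8628): a slow_down error means "add 5 seconds to the polling interval," so the client must detect overflow before the addition. A rough sketch of that check (hypothetical helper; the client's real logic lives in the libpq C code):

```python
INT_MAX = 2**31 - 1  # matches limits.h on platforms with a 32-bit int

def next_poll_interval(interval, error=None):
    # RFC 8628 section 3.5: on slow_down, increase the interval by 5 seconds.
    if error == "slow_down":
        if interval > INT_MAX - 5:
            raise OverflowError("slow_down interval overflow")
        interval += 5
    return interval

assert next_poll_interval(5) == 5
assert next_poll_interval(5, "slow_down") == 10
```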
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp = json.dumps(
+                {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+            )
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to pytest. We add one to request the
+    creation of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
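The packing above can be sanity-checked standalone (a self-contained sketch that duplicates the function for illustration; the constants match the well-known protocol values):

```python
def protocol(major, minor):
    # Same packing as above: major in the high 16 bits, minor in the low 16.
    return (major << 16) | minor

assert protocol(3, 0) == 196608          # the v3 startup protocol version
assert protocol(1234, 5679) == 80877103  # the SSLRequest magic number
```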
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
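The startup payload the adapter produces is just alternating NUL-terminated key and value strings with an empty terminator. A stdlib sketch of the same encoding:

```python
def encode_startup_params(params):
    # Alternating NUL-terminated key and value strings, ending with a lone
    # NUL, as in the v3 startup packet's key/value section.
    out = b""
    for k, v in params.items():
        out += k.encode("utf-8") + b"\x00" + v.encode("utf-8") + b"\x00"
    return out + b"\x00"

assert encode_startup_params({"user": "alice"}) == b"user\x00alice\x00\x00"
```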
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
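On the wire, the SASLInitialResponse defined above is the mechanism name, a NUL, a signed 32-bit big-endian length (-1 meaning "no data"), then the data bytes. A hand-rolled sketch of that framing (stdlib only; the OAUTHBEARER name and payload are illustrative):

```python
import struct

def sasl_initial_response(name, data=None):
    # Mirrors the Construct above: NUL-terminated name, Int32sb length
    # (-1 when data is absent), then the raw data bytes.
    if data is None:
        return name + b"\x00" + struct.pack(">i", -1)
    return name + b"\x00" + struct.pack(">i", len(data)) + data

msg = sasl_initial_response(b"OAUTHBEARER", b"\x01")
assert msg == b"OAUTHBEARER\x00\x00\x00\x00\x01\x01"
```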
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds a translation table for hexdumps: any unprintable or non-ASCII
+    byte is mapped to '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
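The same idea in a compact, self-contained form (what the helper above builds, expressed directly):

```python
# Map every unprintable or non-ASCII byte to ".", as the helper above does.
bad = bytes(i for i in range(256) if i > 127 or not chr(i).isprintable())
table = bytes.maketrans(bad, b"." * len(bad))

assert b"SCRAM\x00\xff!".translate(table) == b"SCRAM..!"
```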
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member assigned to the packet. If payload_data is given, it will be used
+    as the packet payload; otherwise the key/value pairs in payloadkw will be
+    the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet, using the special protocol version 1234.5679.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
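A quick aside for reviewers unfamiliar with the factory-fixture idiom used by `connect()` above: the ExitStack owns every connection the factory hands out, so a single teardown closes them all. A minimal self-contained sketch of the same shape (with `io.StringIO` standing in for the socket, since no server is needed to illustrate it):

```python
import contextlib
import io


@contextlib.contextmanager
def connection_factory():
    # ExitStack collects every resource the factory opens, so leaving the
    # with-block closes all of them at once -- the same cleanup shape as
    # the connect() fixture.
    with contextlib.ExitStack() as stack:

        def factory():
            buf = io.StringIO()  # stands in for a connected socket
            stack.enter_context(contextlib.closing(buf))
            return buf

        yield factory


with connection_factory() as factory:
    a, b = factory(), factory()
    assert not a.closed and not b.closed

# Both "connections" were closed by the single ExitStack teardown.
assert a.closed and b.closed
```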
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
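For reference, the user mapping prepended above (`oauth /^(.*)@example\.com$ \1`) keeps the local part of an `@example.com` identity and rejects everything else, which is what the map-based test cases below depend on. The equivalent check, sketched in plain Python:

```python
import re

# Mirrors the pg_ident line: oauth /^(.*)@example\.com$ \1
# The captured group becomes the mapped role name.
pattern = re.compile(r"^(.*)@example\.com$")

m = pattern.match("oauth_map_user_1234@example.com")
assert m is not None and m.group(1) == "oauth_map_user_1234"

# A different domain does not match, so the mapping fails.
assert pattern.match("oauth_map_user_1234@example.net") is None
```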
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Instead of a bearer token, the auth field of the initial
+    response may be specified explicitly to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
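As a note on the framing built by `send_initial_response()`: per RFC 7628, the OAUTHBEARER initial client response is a GS2 header (`n,,` here, meaning no channel binding and no authzid) followed by `\x01`-separated key=value pairs, terminated by a double `\x01`. A self-contained sketch of the same construction:

```python
# OAUTHBEARER initial client response framing (RFC 7628).
KVSEP = b"\x01"


def initial_response(auth: bytes) -> bytes:
    # GS2 header, then the auth key-value pair, then the terminator.
    return b"n,," + KVSEP + b"auth=" + auth + KVSEP + KVSEP


assert initial_response(b"Bearer abcd") == b"n,,\x01auth=Bearer abcd\x01\x01"
```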
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
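For reference while reading the failure cases above: per RFC 7628, a well-formed OAUTHBEARER client initial response is a GS2 header (`n,,` or `y,,`), a `\x01` key-value separator, zero or more `key=value` pairs each terminated by `\x01`, and one final `\x01`. A minimal sketch of that layout (`build_initial_response` is a hypothetical helper for illustration, not part of pq3):

```python
# Hypothetical helper illustrating the OAUTHBEARER client initial response
# layout (RFC 7628) that the failure cases above deliberately corrupt.
def build_initial_response(token, cbind_flag="n", host=None):
    gs2 = cbind_flag + ",,"  # channel-binding flag plus an empty authzid
    kvpairs = ""
    if host is not None:
        kvpairs += "host=" + host + "\x01"
    kvpairs += "auth=Bearer " + token + "\x01"  # exactly one auth value
    # GS2 header, initial kvsep, key/value list, final terminator.
    return (gs2 + "\x01" + kvpairs + "\x01").encode("ascii")
```

Each failure case above removes or mutates exactly one of these pieces (the flag, a separator, the auth value, or the terminator).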
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
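The hexdump layout asserted throughout these tests is: a direction marker, a four-digit hex offset, a tab, up to 16 hex bytes padded to 47 columns, a tab, a printable-ASCII gutter, and a blank line after each flush. A sketch of that format (`hexdump_lines` is a hypothetical stand-in, not the pq3 implementation):

```python
# Hypothetical formatter reproducing the hexdump layout that the
# _DebugStream tests above assert: 16 bytes per line, hex column padded
# to 47 characters, ASCII gutter, trailing blank line per flush.
def hexdump_lines(data, direction="<"):
    lines = []
    for off in range(0, len(data), 16):
        chunk = data[off : off + 16]
        hexpart = " ".join("%02x" % b for b in chunk).ljust(47)
        text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
        lines.append("%s %04x:\t%s\t%s\n" % (direction, off, hexpart, text))
    return "".join(lines) + "\n"
```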
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
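The framing the Startup tests above exercise is: a 4-byte big-endian length that counts itself, a 4-byte protocol version (3.0 is `0x00030000`), and NUL-terminated key/value parameters closed by one extra NUL. A sketch under those assumptions (`build_startup` is a hypothetical builder, not the pq3 API):

```python
import struct

# Hypothetical builder for a v3 startup packet, mirroring the byte layout
# asserted by test_Startup_build above.
def build_startup(params, proto=3 << 16):
    payload = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in params.items()
    ) + b"\x00"  # the parameter list ends with an extra NUL
    # Self-inclusive length (4 bytes) + protocol word (4 bytes) + payload.
    return struct.pack("!ii", 8 + len(payload), proto) + payload
```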
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
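All of the regular-message cases above share one framing rule: a 1-byte type, a 4-byte big-endian length that counts itself but not the type byte, then the payload. A sketch of that rule (`frame` is a hypothetical helper for illustration):

```python
import struct

# Hypothetical framer for a regular pq3 message: type byte, then a
# self-inclusive 4-byte length, then the payload, as the Pq3 tests
# above assert.
def frame(msg_type, payload=b""):
    return msg_type + struct.pack("!i", 4 + len(payload)) + payload
```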
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*cmd):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(cmd, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. pytest and pytest-tap are always
+# needed, regardless of what else the test requires.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1
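
The SASLInitialResponse encoding exercised by the parse/build tests above is: the mechanism name, a NUL terminator, a signed 32-bit big-endian length, then the response bytes, where a length of -1 with no data means "no initial response". As a hedged illustration (standard library only, independent of the pq3 module in the patch), the encoder can be sketched as:

```python
import struct

def sasl_initial_response(name, data=None):
    """Encode a SASLInitialResponse body: name, NUL, int32 length, data.

    A length of -1 (with no data bytes) means "no initial response".
    """
    if data is None:
        return name + b"\x00" + struct.pack("!i", -1)
    return name + b"\x00" + struct.pack("!i", len(data)) + data

# Matches the "no initial response" and "initial response" test vectors.
assert sasl_initial_response(b"EXTERNAL") == b"EXTERNAL\x00\xff\xff\xff\xff"
assert sasl_initial_response(b"EXTERNAL", b"me") == b"EXTERNAL\x00\x00\x00\x00\x02me"
```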

Attachment: v26-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From da6c573a346ff702a765fdbcf2da84b0b6a4b563 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v26 2/5] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).
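
The device authorization grant boils down to polling the token endpoint until the user finishes logging in, and RFC 8628 (section 3.5) defines how the two retryable errors adjust the polling interval: "slow_down" adds five seconds, "authorization_pending" keeps the current rate, and anything else is fatal. A minimal sketch of just that rule (the helper name is ours, not part of the patch):

```python
def next_interval(interval, error):
    """Return the next polling interval, per RFC 8628 section 3.5."""
    if error == "slow_down":
        return interval + 5    # MUST increase the interval by 5 seconds
    if error == "authorization_pending":
        return interval        # user hasn't finished yet; keep polling
    raise RuntimeError("fatal token error: " + error)

assert next_interval(5, "slow_down") == 10
assert next_interval(5, "authorization_pending") == 5
```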

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether to handle that piece of authdata. If not, it should
delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()); if so, it should return an integer > 0 and follow
the authdata-specific instructions. Returning an integer < 0 signals an
error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
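
The handle-or-delegate contract can be sketched independently of C. Everything below is a hypothetical illustration of the chaining pattern; the names loosely mirror, but are not, the real libpq API (the actual C definitions live in libpq-fe.h):

```python
# Hypothetical stand-ins for the authdata type constants and hook signature.
PROMPT_OAUTH_DEVICE = 0
OAUTH_BEARER_TOKEN = 1

def default_hook(authdata_type, data):
    return 0                      # the default hook handles nothing here

previous_hook = default_hook      # saved (via PQgetAuthDataHook() in C)
                                  # before installing our own hook

def my_hook(authdata_type, data):
    if authdata_type == PROMPT_OAUTH_DEVICE:
        # display the verification URL and user code through our own UI...
        return 1                  # > 0: handled successfully
    # not ours to handle: delegate to the previous hook in the chain
    return previous_hook(authdata_type, data)

assert my_hook(PROMPT_OAUTH_DEVICE, {}) == 1   # handled by us
assert my_hook(OAUTH_BEARER_TOKEN, {}) == 0    # fell through the chain
```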

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2222 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 25 files changed, 3575 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 2abbeb2794..adeb0c1e63 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8479,6 +8482,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13010,6 +13059,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14035,6 +14168,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index c46ed2c591..099198ef12 100644
--- a/configure.ac
+++ b/configure.ac
@@ -924,6 +924,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1406,6 +1426,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1596,6 +1621,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 068ee60771..6a20247ef9 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2335,6 +2335,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9938,6 +9975,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index cd711c6d01..a40c4dd7f6 100644
--- a/meson.build
+++ b/meson.build
@@ -912,6 +912,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3076,6 +3105,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3748,6 +3778,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 979925cc2e..196c96fbb8 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -244,6 +244,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -727,6 +730,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 27f8499d8a..7d593778ec 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..435abee56a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2222 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, since media type parameters may follow the type itself.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			/* HTTP optional whitespace allows only spaces and htabs. */
+			case ' ':
+			case '\t':
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently. We
+		 * accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in
+	 * the (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1; /* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char * const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN: /* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT: /* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. It
+		 * only takes effect once CURLOPT_VERBOSE has been set, so keep that
+		 * ordering when changing this code.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of CURLoption.
+	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement
+	 * didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char * const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each data chunk
+ * is defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
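The two TODOs above (capping the response size and handling a broken buffer) can both be addressed through libcurl's write-callback contract: returning a count shorter than `size * nmemb` makes libcurl abort the transfer with `CURLE_WRITE_ERROR`. A hedged standalone sketch, with a hypothetical cap and a plain growable buffer standing in for `PQExpBuffer`:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical cap; the patch leaves the exact limit as a TODO. */
#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)

/* Stand-in for PQExpBuffer: a NUL-terminated growable buffer. */
struct resp_buf
{
	char	   *data;
	size_t		len;
};

/*
 * Write callback with libcurl's required signature. A short return count
 * aborts the transfer, which is how both the size cap and the allocation
 * failure ("broken buffer") cases are reported here.
 */
static size_t
append_data_capped(char *buf, size_t size, size_t nmemb, void *userdata)
{
	struct resp_buf *resp = userdata;
	size_t		len = size * nmemb;
	char	   *grown;

	if (len > MAX_OAUTH_RESPONSE_SIZE - resp->len)
		return 0;				/* response too large: abort */

	grown = realloc(resp->data, resp->len + len + 1);
	if (!grown)
		return 0;				/* out of memory: abort */

	memcpy(grown + resp->len, buf, len);
	resp->data = grown;
	resp->len += len;
	resp->data[resp->len] = '\0';

	return len;
}
```

In the actual patch the error would presumably also be recorded via actx_error() so the user sees more than a bare CURLE_WRITE_ERROR.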
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so the caller can
+	 * proceed straight to drive_request() if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (It
+		 * doesn't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the foot
+		 * in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
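The remaining TODO above amounts to a scheme check on `device_authorization_endpoint`. A hedged sketch of such a helper (the name is invented; scheme comparison is case-insensitive per RFC 3986, Sec. 3.1, and debug mode would presumably skip the check, mirroring the CURLOPT_PROTOCOLS setup):

```c
#include <assert.h>
#include <ctype.h>

/*
 * Return 1 if the URI uses the https scheme, else 0. The loop stops at the
 * first differing byte, so URIs shorter than the prefix fail safely.
 */
static int
uri_is_https(const char *uri)
{
	const char *prefix = "https://";
	int			i;

	for (i = 0; prefix[i]; i++)
	{
		if (tolower((unsigned char) uri[i]) != prefix[i])
			return 0;
	}
	return 1;
}
```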
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the nonces we
+ * need to poll the request status later, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
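The "TODO: url-encode" above matters because client IDs and scopes may contain characters that are reserved in `application/x-www-form-urlencoded` bodies. In the real patch, libcurl's own `curl_easy_escape()` on the existing easy handle would be the natural fit; as a hedged standalone sketch, RFC 3986 percent-encoding looks like this (all names here are illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* RFC 3986 "unreserved" characters, which pass through unencoded. */
static int
is_unreserved(unsigned char c)
{
	return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
		(c >= '0' && c <= '9') ||
		c == '-' || c == '.' || c == '_' || c == '~';
}

/* Returns a malloc'd percent-encoded copy of s, or NULL on OOM. */
static char *
urlencode(const char *s)
{
	static const char hex[] = "0123456789ABCDEF";
	size_t		outlen = 0;
	const char *p;
	char	   *out,
			   *q;

	for (p = s; *p; p++)
		outlen += is_unreserved((unsigned char) *p) ? 1 : 3;

	out = malloc(outlen + 1);
	if (!out)
		return NULL;

	for (p = s, q = out; *p; p++)
	{
		if (is_unreserved((unsigned char) *p))
			*q++ = *p;
		else
		{
			*q++ = '%';
			*q++ = hex[((unsigned char) *p) >> 4];
			*q++ = hex[((unsigned char) *p) & 0xF];
		}
	}
	*q = '\0';
	return out;
}
```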
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which will have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		if (err->error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds, per RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
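The slow_down arithmetic above can also be written with the bound checked before the addition; as a hedged sketch (helper name invented), this avoids relying on signed overflow, which is undefined behavior in C, whereas the patch checks after adding:

```c
#include <assert.h>
#include <limits.h>

/*
 * Apply one RFC 8628, Sec. 3.5 slow_down: permanently add five seconds to
 * the polling interval. Returns -1 if the addition would overflow, in
 * which case the caller should bail.
 */
static int
apply_slow_down(int interval)
{
	if (interval > INT_MAX - 5)
		return -1;
	return interval + 5;
}
```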
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		fprintf(stderr, "Visit %s and enter the code: %s\n",
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+				/* FALLTHROUGH */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on start_request()
+		 * to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break; /* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+				/*
+				 * No Curl requests are running, so we can simplify by
+				 * having the client wait directly on the timerfd rather
+				 * than the multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
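The two-switch shape described in the header comment of pg_fe_run_oauth_flow() — a first switch that finishes in-flight work (returning early if it would block) and a second switch that starts the next step — can be modeled in miniature. All names below are invented; this is a toy, not part of the patch:

```c
#include <assert.h>

enum toy_step { STEP_INIT, STEP_REQUEST, STEP_DONE };

struct toy_ctx
{
	enum toy_step step;
	int			pending;		/* "async work" left before REQUEST completes */
};

/* Returns 1 when the flow is complete, 0 for "come back later". */
static int
toy_poll(struct toy_ctx *ctx)
{
	/* First half: drive any outstanding work for the current step. */
	switch (ctx->step)
	{
		case STEP_REQUEST:
			if (ctx->pending > 0)
			{
				ctx->pending--;
				return 0;		/* still waiting */
			}
			break;
		default:
			break;
	}

	/* Second half: transition to the next step. */
	switch (ctx->step)
	{
		case STEP_INIT:
			ctx->pending = 2;	/* pretend the request needs two polls */
			ctx->step = STEP_REQUEST;
			return 0;
		case STEP_REQUEST:
			ctx->step = STEP_DONE;
			return 1;
		case STEP_DONE:
			return 1;
	}
	return 0;
}
```

In the real flow the "pending" role is played by actx->running and the multiplexer socket handed back via *altsock.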
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f943a31cc0
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
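client_initial_response() above serializes the RFC 7628, Sec. 3.1 client initial response: the gs2 header "n,,", a ^A (0x01) separator, the "auth" key/value pair, and a double-^A terminator. A hedged standalone sketch of the same wire format, with plain malloc standing in for PQExpBuffer (names are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define KVSEP "\x01"

/*
 * Build the OAUTHBEARER client initial response for an already-formatted
 * bearer token (e.g. "Bearer <access_token>"). Returns NULL on OOM.
 */
static char *
build_initial_response(const char *bearer_token)
{
	const char *fmt = "n,," KVSEP "auth=%s" KVSEP KVSEP;
	size_t		len = strlen(fmt) + strlen(bearer_token) + 1;
	char	   *resp = malloc(len);

	if (resp)
		snprintf(resp, len, fmt, bearer_token);
	return resp;
}
```

Note the separate KVSEP string literals: writing "\x01auth" directly would be misparsed, since C hex escapes greedily consume following hex digits.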
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
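
For reference, the message that `client_initial_response()` assembles (called above in the `FE_OAUTH_REQUESTING_TOKEN` state) follows RFC 7628, sec. 3.1: a gs2 header, then key/value pairs each terminated by the 0x01 separator, with an extra separator ending the message. The following is a simplified standalone sketch of that framing, not the patch's actual implementation:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* RFC 7628 key/value separator (the patch's kvsep). */
#define KVSEP "\x01"

/*
 * Build an OAUTHBEARER client-first message: gs2 header "n,,", then
 * "auth=<value>" terminated by KVSEP, then a final KVSEP to close the
 * message.  The caller frees the result.
 */
static char *
build_initial_response(const char *auth_value)
{
	const char *fmt = "n,," KVSEP "auth=%s" KVSEP KVSEP;
	int			len = snprintf(NULL, 0, fmt, auth_value);
	char	   *buf = malloc(len + 1);

	if (buf)
		snprintf(buf, len + 1, fmt, auth_value);
	return buf;
}
```

In the patch, the value passed in is `state->token`, which is either `"Bearer <access token>"` or the empty string used to provoke the server's discovery-bearing error response.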
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
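
To make the new contract concrete, here is a hypothetical caller-side dispatch over the extended `SASLStatus` (using a local mirror of the enum, for illustration only): `SASL_ASYNC` means the mechanism has set `conn->async_auth` and the connection state machine must hand control to it before any response can be generated.

```c
#include <assert.h>
#include <string.h>

/* Local stand-in for the patched SASLStatus enum. */
typedef enum
{
	SASL_COMPLETE = 0,
	SASL_FAILED,
	SASL_CONTINUE,
	SASL_ASYNC,
} SASLStatus;

/* Sketch of what a caller does with each exchange() result. */
static const char *
next_action(SASLStatus status)
{
	switch (status)
	{
		case SASL_COMPLETE:
			return "exchange finished";
		case SASL_FAILED:
			return "drop the connection";
		case SASL_CONTINUE:
			return "send output, await next challenge";
		case SASL_ASYNC:
			return "run conn->async_auth until it completes";
	}
	return "invalid status";
}
```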
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 4da07c1f98..a5a2361f85 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -421,7 +422,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -439,7 +440,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -526,6 +527,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -565,26 +575,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -629,7 +661,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -654,11 +686,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -962,12 +1004,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1125,7 +1173,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1142,7 +1190,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1458,3 +1507,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 360d9a4547..97118ce94b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -364,6 +364,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -627,6 +644,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2644,6 +2662,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3697,6 +3716,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3852,6 +3872,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3885,7 +3915,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/* OK, we have processed the message; mark data consumed */
 				conn->inStart = conn->inCursor;
@@ -3918,6 +3958,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4599,6 +4674,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4716,6 +4792,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7198,6 +7279,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
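
The socket-selection rule added to `PQsocket()` above (and to `pqSocketCheck()` in fe-misc.c below) can be sketched in isolation: while an asynchronous auth flow is running, its alternative socket takes precedence over the server connection's socket, so existing clients that poll on `PQsocket()` automatically wait on the right descriptor. This uses `-1` for `PGINVALID_SOCKET`, as on non-Windows builds:

```c
#include <assert.h>

#define PGINVALID_SOCKET (-1)	/* non-Windows value, for illustration */

/*
 * Pick the descriptor a client should poll: the async flow's altsock if
 * one is active, otherwise the server connection's socket, otherwise -1.
 */
static int
socket_to_poll(int altsock, int sock)
{
	if (altsock != PGINVALID_SOCKET)
		return altsock;
	return (sock != PGINVALID_SOCKET) ? sock : -1;
}
```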
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index f235bfbb41..aa1fee38c8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1041,10 +1041,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1061,7 +1064,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 87a6f3df07..25f216afcf 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -38,6 +38,8 @@ extern "C"
 #define LIBPQ_HAS_TRACE_FLAGS 1
 /* Indicates that PQsslAttribute(NULL, "library") is useful */
 #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -82,6 +84,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -163,6 +167,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -695,10 +706,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 03e4da40ba..58edc5016f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 7623aeadab..cf1da9c1a7 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 547d14b3e7..d518fe91e2 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -368,6 +369,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1714,6 +1717,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1778,6 +1782,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1938,11 +1943,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3451,6 +3459,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

#116Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#115)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 13.08.24 23:11, Jacob Champion wrote:

On Sun, Aug 11, 2024 at 11:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:

I have committed 0002 now.

Thanks Peter! Rebased over both in v26.

I have looked again at the jsonapi memory management patch (v26-0001).
As previously mentioned, I think adding a third or fourth (depending
on how you count) memory management API is maybe something we should
avoid. Also, the weird layering where src/common/ now (sometimes)
depends on libpq seems not great.

I'm thinking, maybe we leave the use of StringInfo at the source code
level, but #define the symbols to use PQExpBuffer. Something like

#ifdef JSONAPI_USE_PQEXPBUFFER

#define StringInfo PQExpBuffer
#define appendStringInfo appendPQExpBuffer
#define appendBinaryStringInfo appendBinaryPQExpBuffer
#define palloc malloc
//etc.

#endif

(simplified, the argument lists might differ)

Or, if people find that too scary, something like

#ifdef JSONAPI_USE_PQEXPBUFFER

#define jsonapi_StringInfo PQExpBuffer
#define jsonapi_appendStringInfo appendPQExpBuffer
#define jsonapi_appendBinaryStringInfo appendBinaryPQExpBuffer
#define jsonapi_palloc malloc
//etc.

#else

#define jsonapi_StringInfo StringInfo
#define jsonapi_appendStringInfo appendStringInfo
#define jsonapi_appendBinaryStringInfo appendBinaryStringInfo
#define jsonapi_palloc palloc
//etc.

#endif

That way, it's at least easier to follow the source code because
you see a mostly-familiar API.

Also, we should make this PQExpBuffer-using mode only used by libpq,
not by frontend programs. So libpq takes its own copy of jsonapi.c
and compiles it using JSONAPI_USE_PQEXPBUFFER. That will make the
libpq build descriptions a bit more complicated, but nothing outside
libpq needs to change.

Once you get past all the function renaming, the logic changes in
jsonapi.c all look pretty reasonable. Refactoring like
allocate_incremental_state() makes sense.

You could add pg_nodiscard attributes to
makeJsonLexContextCstringLen() and makeJsonLexContextIncremental() so
that callers who are using the libpq mode are forced to check for
errors. Or maybe there is a clever way to avoid even that: Create a
fixed JsonLexContext like

static const JsonLexContext failed_oom;

and on OOM you return that one from makeJsonLexContext*(). And then
in pg_parse_json(), when you get handed that context, you return
JSON_OUT_OF_MEMORY immediately.

Other than that detail and the need to use freeJsonLexContext(), it
looks like this new mode doesn't impose any additional burden on
callers, since during parsing they need to check for errors anyway,
and this just adds one more error type for out of memory. That's a good
outcome.

#117Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#116)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Aug 26, 2024 at 1:18 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Or, if people find that too scary, something like

#ifdef JSONAPI_USE_PQEXPBUFFER

#define jsonapi_StringInfo PQExpBuffer
[...]

That way, it's at least easier to follow the source code because
you see a mostly-familiar API.

I was having trouble reasoning about the palloc-that-isn't-palloc code
during the first few drafts, so I will try a round with the jsonapi_
prefix.

Also, we should make this PQExpBuffer-using mode only used by libpq,
not by frontend programs. So libpq takes its own copy of jsonapi.c
and compiles it using JSONAPI_USE_PQEXPBUFFER. That will make the
libpq build descriptions a bit more complicated, but nothing outside
libpq needs to change.

Sounds reasonable. It complicates the test coverage situation a little
bit, but I think my current patch was maybe insufficient there anyway,
since the coverage for the backend flavor silently dropped...

Or maybe there is a clever way to avoid even that: Create a
fixed JsonLexContext like

static const JsonLexContext failed_oom;

and on OOM you return that one from makeJsonLexContext*(). And then
in pg_parse_json(), when you get handed that context, you return
JSON_OUT_OF_MEMORY immediately.

I like this idea.

Thanks!
--Jacob

#118Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#117)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Aug 26, 2024 at 4:23 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I was having trouble reasoning about the palloc-that-isn't-palloc code
during the first few drafts, so I will try a round with the jsonapi_
prefix.

v27 takes a stab at that. I have kept the ALLOC/FREE naming to match
the strategy in other src/common source files.

The name of the variable JSONAPI_USE_PQEXPBUFFER leads to sections of
code that look like this:

+#ifdef JSONAPI_USE_PQEXPBUFFER
+    if (!new_prediction || !new_fnames || !new_fnull)
+        return false;
+#endif

To me it wouldn't be immediately obvious why "using PQExpBuffer" has
anything to do with this code; the key idea is that we expect any
allocations to be able to fail. Maybe a name like JSONAPI_ALLOW_OOM or
JSONAPI_SHLIB_ALLOCATIONS or...?

It complicates the test coverage situation a little
bit, but I think my current patch was maybe insufficient there anyway,
since the coverage for the backend flavor silently dropped...

To do this without too much pain, I split the "forbidden" objects into
their own shared library, used only by the JSON tests which needed
them. I tried not to wrap too much ceremony around them, since they're
only needed in one place, so they don't have an associated Meson
dependency object.

Or maybe there is a clever way to avoid even that: Create a
fixed JsonLexContext like

static const JsonLexContext failed_oom;

I think this turned out nicely. Two slight deviations from this are
that we can't return a pointer-to-const, and we also need an OOM
sentinel for the JsonIncrementalState, since it's possible to
initialize incremental parsing into a JsonLexContext that's on the
stack.

--Jacob

Attachments:

since-v26.diff.txt (text/plain; charset=US-ASCII)
1:  b3e925b9a9 ! 1:  202b9ecef6 common/jsonapi: support libpq as a client
    @@ Commit message
     
         Based on a patch by Michael Paquier.
     
    -    For frontend code, use PQExpBuffer instead of StringInfo. This requires
    -    us to track allocation failures so that we can return JSON_OUT_OF_MEMORY
    -    as needed rather than exit()ing.
    +    For libpq, use PQExpBuffer instead of StringInfo. This requires us to
    +    track allocation failures so that we can return JSON_OUT_OF_MEMORY as
    +    needed rather than exit()ing.
     
         Co-authored-by: Michael Paquier <michael@paquier.xyz>
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
     
    - ## src/bin/pg_combinebackup/Makefile ##
    -@@ src/bin/pg_combinebackup/Makefile: include $(top_builddir)/src/Makefile.global
    - 
    - override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
    - LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils
    -+# TODO: fix this properly
    -+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
    - 
    - OBJS = \
    - 	$(WIN32RES) \
    -@@ src/bin/pg_combinebackup/Makefile: OBJS = \
    - 
    - all: pg_combinebackup
    - 
    --pg_combinebackup: $(OBJS) | submake-libpgport submake-libpgfeutils
    -+pg_combinebackup: $(OBJS) | submake-libpq submake-libpgport submake-libpgfeutils
    - 	$(CC) $(CFLAGS) $^ $(LDFLAGS) $(LDFLAGS_EX) $(LIBS) -o $@$(X)
    - 
    - install: all installdirs
    -
    - ## src/bin/pg_combinebackup/meson.build ##
    -@@ src/bin/pg_combinebackup/meson.build: endif
    - 
    - pg_combinebackup = executable('pg_combinebackup',
    -   pg_combinebackup_sources,
    --  dependencies: [frontend_code],
    -+  dependencies: [frontend_code, libpq],
    -   kwargs: default_bin_args,
    - )
    - bin_targets += pg_combinebackup
    -
    - ## src/bin/pg_verifybackup/Makefile ##
    -@@ src/bin/pg_verifybackup/Makefile: top_builddir = ../../..
    - include $(top_builddir)/src/Makefile.global
    - 
    - # We need libpq only because fe_utils does.
    --LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
    -+LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils -lpgcommon $(libpq_pgport)
    - 
    - OBJS = \
    - 	$(WIN32RES) \
    -
      ## src/common/Makefile ##
    -@@ src/common/Makefile: override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
    - override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
    - override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
    - 
    --override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common $(CPPFLAGS)
    -+override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
    - LIBS += $(PTHREAD_LIBS)
    - 
    - OBJS_COMMON = \
    +@@ src/common/Makefile: endif
    + # a matter of policy, because it is not appropriate for general purpose
    + # libraries such as libpq to report errors directly.  fe_memutils.c is
    + # excluded because libpq must not exit() on allocation failure.
    ++#
    ++# The excluded files for _shlib builds are pulled into their own static
    ++# library, for the benefit of test programs that need not follow the
    ++# shlib rules.
    + OBJS_FRONTEND_SHLIB = \
    + 	$(OBJS_COMMON) \
    + 	restricted_token.o \
    + 	sprompt.o
    +-OBJS_FRONTEND = \
    +-	$(OBJS_FRONTEND_SHLIB) \
    ++OBJS_EXCLUDED_SHLIB = \
    + 	fe_memutils.o \
    + 	logging.o
    ++OBJS_FRONTEND = \
    ++	$(OBJS_FRONTEND_SHLIB) \
    ++	$(OBJS_EXCLUDED_SHLIB)
    + 
    + # foo.o, foo_shlib.o, and foo_srv.o are all built from foo.c
    + OBJS_SHLIB = $(OBJS_FRONTEND_SHLIB:%.o=%_shlib.o)
    +@@ src/common/Makefile: TOOLSDIR = $(top_srcdir)/src/tools
    + GEN_KEYWORDLIST = $(PERL) -I $(TOOLSDIR) $(TOOLSDIR)/gen_keywordlist.pl
    + GEN_KEYWORDLIST_DEPS = $(TOOLSDIR)/gen_keywordlist.pl $(TOOLSDIR)/PerfectHash.pm
    + 
    +-all: libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a
    ++all: libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a libpgcommon_excluded_shlib.a
    + 
    + # libpgcommon is needed by some contrib
    + install: all installdirs
    +@@ src/common/Makefile: libpgcommon_shlib.a: $(OBJS_SHLIB)
    + 	rm -f $@
    + 	$(AR) $(AROPT) $@ $^
    + 
    ++# The JSON API normally exits on out-of-memory; disable that behavior for shared
    ++# library builds. This requires libpq's pqexpbuffer.h.
    ++jsonapi_shlib.o: override CPPFLAGS += -DJSONAPI_USE_PQEXPBUFFER
    ++jsonapi_shlib.o: override CPPFLAGS += -I$(libpq_srcdir)
    ++
    + # Because this uses its own compilation rule, it doesn't use the
    + # dependency tracking logic from Makefile.global.  To make sure that
    + # dependency tracking works anyway for the *_shlib.o files, depend on
    +@@ src/common/Makefile: libpgcommon_shlib.a: $(OBJS_SHLIB)
    + %_shlib.o: %.c %.o
    + 	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) -c $< -o $@
    + 
    ++libpgcommon_excluded_shlib.a: $(OBJS_EXCLUDED_SHLIB)
    ++	rm -f $@
    ++	$(AR) $(AROPT) $@ $^
    ++
    + #
    + # Server versions of object files
    + #
    +@@ src/common/Makefile: RYU_OBJS = $(RYU_FILES) $(RYU_FILES:%.o=%_shlib.o) $(RYU_FILES:%.o=%_srv.o)
    + $(RYU_OBJS): CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)
    + 
    + clean distclean:
    +-	rm -f libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a
    ++	rm -f libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a libpgcommon_excluded_shlib.a
    + 	rm -f $(OBJS_FRONTEND) $(OBJS_SHLIB) $(OBJS_SRV)
    + 	rm -f kwlist_d.h
     
      ## src/common/jsonapi.c ##
     @@
    @@ src/common/jsonapi.c
      #include "port/pg_lfind.h"
      
     -#ifndef FRONTEND
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +#include "pqexpbuffer.h"
     +#else
     +#include "lib/stringinfo.h"
    @@ src/common/jsonapi.c
      #endif
      
     +/*
    -+ * In backend, we will use palloc/pfree along with StringInfo.  In frontend,
    ++ * By default, we will use palloc/pfree along with StringInfo.  In libpq,
     + * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
     + */
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +
     +#define STRDUP(s) strdup(s)
     +#define ALLOC(size) malloc(size)
    @@ src/common/jsonapi.c
     +#define REALLOC realloc
     +#define FREE(s) free(s)
     +
    -+#define appendStrVal			appendPQExpBuffer
    -+#define appendBinaryStrVal		appendBinaryPQExpBuffer
    -+#define appendStrValChar		appendPQExpBufferChar
    ++#define jsonapi_appendStringInfo			appendPQExpBuffer
    ++#define jsonapi_appendBinaryStringInfo		appendBinaryPQExpBuffer
    ++#define jsonapi_appendStringInfoChar		appendPQExpBufferChar
     +/* XXX should we add a macro version to PQExpBuffer? */
    -+#define appendStrValCharMacro	appendPQExpBufferChar
    -+#define createStrVal			createPQExpBuffer
    -+#define initStrVal				initPQExpBuffer
    -+#define resetStrVal				resetPQExpBuffer
    -+#define termStrVal				termPQExpBuffer
    -+#define destroyStrVal			destroyPQExpBuffer
    ++#define jsonapi_appendStringInfoCharMacro	appendPQExpBufferChar
    ++#define jsonapi_createStringInfo			createPQExpBuffer
    ++#define jsonapi_initStringInfo				initPQExpBuffer
    ++#define jsonapi_resetStringInfo				resetPQExpBuffer
    ++#define jsonapi_termStringInfo				termPQExpBuffer
    ++#define jsonapi_destroyStringInfo			destroyPQExpBuffer
     +
    -+#else							/* !FRONTEND */
    ++#else							/* !JSONAPI_USE_PQEXPBUFFER */
     +
     +#define STRDUP(s) pstrdup(s)
     +#define ALLOC(size) palloc(size)
     +#define ALLOC0(size) palloc0(size)
     +#define REALLOC repalloc
     +
    ++#ifdef FRONTEND
    ++#define FREE pfree
    ++#else
     +/*
     + * Backend pfree() doesn't handle NULL pointers like the frontend's does; smooth
     + * that over to reduce mental gymnastics. Avoid multiple evaluation of the macro
    @@ src/common/jsonapi.c
     +	if (__v)			\
     +		pfree(__v);		\
     +} while (0)
    ++#endif
     +
    -+#define appendStrVal			appendStringInfo
    -+#define appendBinaryStrVal		appendBinaryStringInfo
    -+#define appendStrValChar		appendStringInfoChar
    -+#define appendStrValCharMacro	appendStringInfoCharMacro
    -+#define createStrVal			makeStringInfo
    -+#define initStrVal				initStringInfo
    -+#define resetStrVal				resetStringInfo
    -+#define termStrVal(s)			pfree((s)->data)
    -+#define destroyStrVal			destroyStringInfo
    ++#define jsonapi_appendStringInfo			appendStringInfo
    ++#define jsonapi_appendBinaryStringInfo		appendBinaryStringInfo
    ++#define jsonapi_appendStringInfoChar		appendStringInfoChar
    ++#define jsonapi_appendStringInfoCharMacro	appendStringInfoCharMacro
    ++#define jsonapi_createStringInfo			makeStringInfo
    ++#define jsonapi_initStringInfo				initStringInfo
    ++#define jsonapi_resetStringInfo				resetStringInfo
    ++#define jsonapi_termStringInfo(s)			pfree((s)->data)
    ++#define jsonapi_destroyStringInfo			destroyStringInfo
     +
    -+#endif
    ++#endif							/* JSONAPI_USE_PQEXPBUFFER */
     +
      /*
       * The context of the parser is maintained by the recursive descent
    @@ src/common/jsonapi.c: static JsonParseErrorType parse_object(JsonLexContext *lex
      
      /* the null action object used for pure validation */
      const JsonSemAction nullSemAction =
    +@@ src/common/jsonapi.c: const JsonSemAction nullSemAction =
    + 	NULL, NULL, NULL, NULL, NULL
    + };
    + 
    ++/* sentinels used for out-of-memory conditions */
    ++static JsonLexContext failed_oom;
    ++static JsonIncrementalState failed_inc_oom;
    ++
    + /* Parser support routines */
    + 
    + /*
     @@ src/common/jsonapi.c: IsValidJsonNumber(const char *str, size_t len)
      {
      	bool		numeric_error;
    @@ src/common/jsonapi.c: IsValidJsonNumber(const char *str, size_t len)
       * freeJsonLexContext() or (in backend environment) via memory context
       * cleanup.
     + *
    -+ * In frontend code this can return NULL on OOM, so callers must inspect the
    -+ * returned pointer.
    ++ * In shlib code, any out-of-memory failures will be deferred to time
    ++ * of use; this function is guaranteed to return a valid JsonLexContext.
       */
      JsonLexContext *
      makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
     -		lex = palloc0(sizeof(JsonLexContext));
     +		lex = ALLOC0(sizeof(JsonLexContext));
     +		if (!lex)
    -+			return NULL;
    ++			return &failed_oom;
      		lex->flags |= JSONLEX_FREE_STRUCT;
      	}
      	else
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
      	{
     -		lex->strval = makeStringInfo();
     +		/*
    -+		 * This call can fail in FRONTEND code. We defer error handling to
    -+		 * time of use (json_lex_string()) since we might not need to parse
    -+		 * any strings anyway.
    ++		 * This call can fail in shlib code. We defer error handling to time
    ++		 * of use (json_lex_string()) since we might not need to parse any
    ++		 * strings anyway.
     +		 */
    -+		lex->strval = createStrVal();
    ++		lex->strval = jsonapi_createStringInfo();
      		lex->flags |= JSONLEX_FREE_STRVAL;
     +		lex->parse_strval = true;
      	}
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
      
     +/*
     + * Allocates the internal bookkeeping structures for incremental parsing. This
    -+ * can only fail in-band with FRONTEND code.
    ++ * can only fail in-band with shlib code.
     + */
     +#define JS_STACK_CHUNK_SIZE 64
     +#define JS_MAX_PROD_LEN 10		/* more than we need */
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
     +	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
     +	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
     +
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +	if (!lex->inc_state
     +		|| !pstack
     +		|| !prediction
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
     +		FREE(fnames);
     +		FREE(fnull);
     +
    ++		lex->inc_state = &failed_inc_oom;
     +		return false;
     +	}
     +#endif
     +
    -+	initStrVal(&(lex->inc_state->partial_token));
    ++	jsonapi_initStringInfo(&(lex->inc_state->partial_token));
     +	lex->pstack = pstack;
     +	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
     +	lex->pstack->prediction = prediction;
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
       * parse routine. We also need an accumulator for partial tokens in case
       * the boundary between chunks happens to fall in the middle of a token.
     + *
    -+ * In frontend code this can return NULL on OOM, so callers must inspect the
    -+ * returned pointer.
    ++ * In shlib code, any out-of-memory failures will be deferred to time of use;
    ++ * this function is guaranteed to return a valid JsonLexContext.
       */
     -#define JS_STACK_CHUNK_SIZE 64
     -#define JS_MAX_PROD_LEN 10		/* more than we need */
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
     -		lex = palloc0(sizeof(JsonLexContext));
     +		lex = ALLOC0(sizeof(JsonLexContext));
     +		if (!lex)
    -+			return NULL;
    ++			return &failed_oom;
     +
      		lex->flags |= JSONLEX_FREE_STRUCT;
      	}
    @@ src/common/jsonapi.c: makeJsonLexContextIncremental(JsonLexContext *lex, int enc
     +	if (!allocate_incremental_state(lex))
     +	{
     +		if (lex->flags & JSONLEX_FREE_STRUCT)
    ++		{
     +			FREE(lex);
    -+		return NULL;
    ++			return &failed_oom;
    ++		}
    ++
    ++		/* lex->inc_state tracks the OOM failure; we can return here. */
    ++		return lex;
     +	}
     +
      	if (need_escapes)
      	{
     -		lex->strval = makeStringInfo();
     +		/*
    -+		 * This call can fail in FRONTEND code. We defer error handling to
    -+		 * time of use (json_lex_string()) since we might not need to parse
    -+		 * any strings anyway.
    ++		 * This call can fail in shlib code. We defer error handling to time
    ++		 * of use (json_lex_string()) since we might not need to parse any
    ++		 * strings anyway.
     +		 */
    -+		lex->strval = createStrVal();
    ++		lex->strval = jsonapi_createStringInfo();
      		lex->flags |= JSONLEX_FREE_STRVAL;
     +		lex->parse_strval = true;
      	}
    @@ src/common/jsonapi.c: makeJsonLexContextIncremental(JsonLexContext *lex, int enc
     +							 new_stack_size * sizeof(char *));
     +		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
     +
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +		if (!new_prediction || !new_fnames || !new_fnull)
     +			return false;
     +#endif
    @@ src/common/jsonapi.c: get_fnull(JsonLexContext *lex)
      {
     +	static const JsonLexContext empty = {0};
     +
    -+	if (!lex)
    ++	if (!lex || lex == &failed_oom)
     +		return;
     +
      	if (lex->flags & JSONLEX_FREE_STRVAL)
     -		destroyStringInfo(lex->strval);
    -+		destroyStrVal(lex->strval);
    ++		jsonapi_destroyStringInfo(lex->strval);
      
      	if (lex->errormsg)
     -		destroyStringInfo(lex->errormsg);
    -+		destroyStrVal(lex->errormsg);
    ++		jsonapi_destroyStringInfo(lex->errormsg);
      
      	if (lex->incremental)
      	{
    @@ src/common/jsonapi.c: get_fnull(JsonLexContext *lex)
     -		pfree(lex->pstack->fnames);
     -		pfree(lex->pstack->fnull);
     -		pfree(lex->pstack);
    -+		termStrVal(&lex->inc_state->partial_token);
    ++		jsonapi_termStringInfo(&lex->inc_state->partial_token);
     +		FREE(lex->inc_state);
     +		FREE(lex->pstack->prediction);
     +		FREE(lex->pstack->fnames);
    @@ src/common/jsonapi.c: JsonParseErrorType
      
      	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
      
    +@@ src/common/jsonapi.c: pg_parse_json(JsonLexContext *lex, const JsonSemAction *sem)
    + 	JsonTokenType tok;
    + 	JsonParseErrorType result;
    + 
    ++	if (lex == &failed_oom)
    ++		return JSON_OUT_OF_MEMORY;
    + 	if (lex->incremental)
    + 		return JSON_INVALID_LEXER_TYPE;
    + 
     @@ src/common/jsonapi.c: json_count_array_elements(JsonLexContext *lex, int *elements)
    + 	int			count;
    + 	JsonParseErrorType result;
    + 
    ++	if (lex == &failed_oom)
    ++		return JSON_OUT_OF_MEMORY;
    ++
    + 	/*
    + 	 * It's safe to do this with a shallow copy because the lexical routines
    + 	 * don't scribble on the input. They do scribble on the other pointers
      	 * etc, so doing this with a copy makes that safe.
      	 */
      	memcpy(&copylex, lex, sizeof(JsonLexContext));
    @@ src/common/jsonapi.c: json_count_array_elements(JsonLexContext *lex, int *elemen
      	copylex.lex_level++;
      
      	count = 0;
    +@@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
    + 	JsonParseContext ctx = JSON_PARSE_VALUE;
    + 	JsonParserStack *pstack = lex->pstack;
    + 
    +-
    ++	if (lex == &failed_oom || lex->inc_state == &failed_inc_oom)
    ++		return JSON_OUT_OF_MEMORY;
    + 	if (!lex->incremental)
    + 		return JSON_INVALID_LEXER_TYPE;
    + 
     @@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
      							if (result != JSON_SUCCESS)
      								return result;
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      	JsonParseErrorType result;
      
     -	if (lex->incremental && lex->inc_state->partial_completed)
    ++	if (lex == &failed_oom || lex->inc_state == &failed_inc_oom)
    ++		return JSON_OUT_OF_MEMORY;
    ++
     +	if (lex->incremental)
      	{
     -		/*
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
     +			 * We just lexed a completed partial token on the last call, so
     +			 * reset everything
     +			 */
    -+			resetStrVal(&(lex->inc_state->partial_token));
    ++			jsonapi_resetStringInfo(&(lex->inc_state->partial_token));
     +			lex->token_terminator = lex->input;
     +			lex->inc_state->partial_completed = false;
     +		}
     +
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +		/* Make sure our partial token buffer is valid before using it below. */
     +		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
     +			return JSON_OUT_OF_MEMORY;
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      				char		c = lex->input[i];
      
     -				appendStringInfoCharMacro(ptok, c);
    -+				appendStrValCharMacro(ptok, c);
    ++				jsonapi_appendStringInfoCharMacro(ptok, c);
      				added++;
      				if (c == '"' && escapes % 2 == 0)
      				{
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      						case '9':
      							{
     -								appendStringInfoCharMacro(ptok, cc);
    -+								appendStrValCharMacro(ptok, cc);
    ++								jsonapi_appendStringInfoCharMacro(ptok, cc);
      								added++;
      							}
      							break;
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      				if (JSON_ALPHANUMERIC_CHAR(cc))
      				{
     -					appendStringInfoCharMacro(ptok, cc);
    -+					appendStrValCharMacro(ptok, cc);
    ++					jsonapi_appendStringInfoCharMacro(ptok, cc);
      					added++;
      				}
      				else
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      					{
     -						appendBinaryStringInfo(
     -											   &(lex->inc_state->partial_token), s, end - s);
    -+						appendBinaryStrVal(
    -+										   &(lex->inc_state->partial_token), s, end - s);
    ++						jsonapi_appendBinaryStringInfo(&(lex->inc_state->partial_token), s, end - s);
      						return JSON_INCOMPLETE;
      					}
      
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      		{ \
     -			appendBinaryStringInfo(&lex->inc_state->partial_token, \
     -								   lex->token_start, end - lex->token_start); \
    -+			appendBinaryStrVal(&lex->inc_state->partial_token, \
    -+							   lex->token_start, end - lex->token_start); \
    ++			jsonapi_appendBinaryStringInfo(&lex->inc_state->partial_token, \
    ++										   lex->token_start, \
    ++										   end - lex->token_start); \
      			return JSON_INCOMPLETE; \
      		} \
      		lex->token_terminator = s; \
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
     -		resetStringInfo(lex->strval);
     +	if (lex->parse_strval)
     +	{
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +		/* make sure initialization succeeded */
     +		if (lex->strval == NULL)
     +			return JSON_OUT_OF_MEMORY;
     +#endif
    -+		resetStrVal(lex->strval);
    ++		jsonapi_resetStringInfo(lex->strval);
     +	}
      
      	Assert(lex->input_length > 0);
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      						unicode_to_utf8(ch, (unsigned char *) utf8str);
      						utf8len = pg_utf_mblen((unsigned char *) utf8str);
     -						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
    -+						appendBinaryPQExpBuffer(lex->strval, utf8str, utf8len);
    ++						jsonapi_appendBinaryStringInfo(lex->strval, utf8str, utf8len);
      					}
      					else if (ch <= 0x007f)
      					{
      						/* The ASCII range is the same in all encodings */
     -						appendStringInfoChar(lex->strval, (char) ch);
    -+						appendPQExpBufferChar(lex->strval, (char) ch);
    ++						jsonapi_appendStringInfoChar(lex->strval, (char) ch);
      					}
      					else
      						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      					case '\\':
      					case '/':
     -						appendStringInfoChar(lex->strval, *s);
    -+						appendStrValChar(lex->strval, *s);
    ++						jsonapi_appendStringInfoChar(lex->strval, *s);
      						break;
      					case 'b':
     -						appendStringInfoChar(lex->strval, '\b');
    -+						appendStrValChar(lex->strval, '\b');
    ++						jsonapi_appendStringInfoChar(lex->strval, '\b');
      						break;
      					case 'f':
     -						appendStringInfoChar(lex->strval, '\f');
    -+						appendStrValChar(lex->strval, '\f');
    ++						jsonapi_appendStringInfoChar(lex->strval, '\f');
      						break;
      					case 'n':
     -						appendStringInfoChar(lex->strval, '\n');
    -+						appendStrValChar(lex->strval, '\n');
    ++						jsonapi_appendStringInfoChar(lex->strval, '\n');
      						break;
      					case 'r':
     -						appendStringInfoChar(lex->strval, '\r');
    -+						appendStrValChar(lex->strval, '\r');
    ++						jsonapi_appendStringInfoChar(lex->strval, '\r');
      						break;
      					case 't':
     -						appendStringInfoChar(lex->strval, '\t');
    -+						appendStrValChar(lex->strval, '\t');
    ++						jsonapi_appendStringInfoChar(lex->strval, '\t');
      						break;
      					default:
      
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      			/*
      			 * Skip to the first byte that requires special handling, so we
     -			 * can batch calls to appendBinaryStringInfo.
    -+			 * can batch calls to appendBinaryStrVal.
    ++			 * can batch calls to jsonapi_appendBinaryStringInfo.
      			 */
      			while (p < end - sizeof(Vector8) &&
      				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
     -			if (lex->strval != NULL)
     -				appendBinaryStringInfo(lex->strval, s, p - s);
     +			if (lex->parse_strval)
    -+				appendBinaryStrVal(lex->strval, s, p - s);
    ++				jsonapi_appendBinaryStringInfo(lex->strval, s, p - s);
      
      			/*
      			 * s will be incremented at the top of the loop, so set it to just
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      		return JSON_UNICODE_LOW_SURROGATE;
      	}
      
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
     +		return JSON_OUT_OF_MEMORY;
     +#endif
    @@ src/common/jsonapi.c: json_lex_number(JsonLexContext *lex, const char *s,
      	{
     -		appendBinaryStringInfo(&lex->inc_state->partial_token,
     -							   lex->token_start, s - lex->token_start);
    -+		appendBinaryStrVal(&lex->inc_state->partial_token,
    -+						   lex->token_start, s - lex->token_start);
    ++		jsonapi_appendBinaryStringInfo(&lex->inc_state->partial_token,
    ++									   lex->token_start, s - lex->token_start);
      		if (num_err != NULL)
      			*num_err = error;
      
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
      char *
      json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
      {
    -+	if (error == JSON_OUT_OF_MEMORY)
    ++	if (error == JSON_OUT_OF_MEMORY || lex == &failed_oom)
     +	{
     +		/* Short circuit. Allocating anything for this case is unhelpful. */
     +		return _("out of memory");
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
     +
      	if (lex->errormsg)
     -		resetStringInfo(lex->errormsg);
    -+		resetStrVal(lex->errormsg);
    ++		jsonapi_resetStringInfo(lex->errormsg);
      	else
     -		lex->errormsg = makeStringInfo();
    -+		lex->errormsg = createStrVal();
    ++		lex->errormsg = jsonapi_createStringInfo();
      
      	/*
      	 * A helper for error messages that should print the current token. The
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
     -	appendStringInfo((lex)->errormsg, _(format), \
     -					 (int) ((lex)->token_terminator - (lex)->token_start), \
     -					 (lex)->token_start);
    -+	appendStrVal((lex)->errormsg, _(format), \
    -+				 (int) ((lex)->token_terminator - (lex)->token_start), \
    -+				 (lex)->token_start);
    ++	jsonapi_appendStringInfo((lex)->errormsg, _(format), \
    ++							 (int) ((lex)->token_terminator - (lex)->token_start), \
    ++							 (lex)->token_start);
      
      	switch (error)
      	{
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     -			appendStringInfo(lex->errormsg,
     -							 _("Character with value 0x%02x must be escaped."),
     -							 (unsigned char) *(lex->token_terminator));
    -+			appendStrVal(lex->errormsg,
    -+						 _("Character with value 0x%02x must be escaped."),
    -+						 (unsigned char) *(lex->token_terminator));
    ++			jsonapi_appendStringInfo(lex->errormsg,
    ++									 _("Character with value 0x%02x must be escaped."),
    ++									 (unsigned char) *(lex->token_terminator));
      			break;
      		case JSON_EXPECTED_END:
      			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     -		appendStringInfo(lex->errormsg,
     -						 "unexpected json parse error type: %d",
     -						 (int) error);
    -+	/* Note that lex->errormsg can be NULL in FRONTEND code. */
    ++	/* Note that lex->errormsg can be NULL in shlib code. */
     +	if (lex->errormsg && lex->errormsg->len == 0)
     +	{
     +		/*
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
     +		 * unhandled enum values.  But this needs to be here anyway to cover
     +		 * the possibility of an incorrect input.
     +		 */
    -+		appendStrVal(lex->errormsg,
    -+					 "unexpected json parse error type: %d",
    -+					 (int) error);
    ++		jsonapi_appendStringInfo(lex->errormsg,
    ++								 "unexpected json parse error type: %d",
    ++								 (int) error);
     +	}
     +
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +	if (PQExpBufferBroken(lex->errormsg))
     +		return _("out of memory while constructing error description");
     +#endif
    @@ src/common/jsonapi.c: json_errdetail(JsonParseErrorType error, JsonLexContext *l
      }
     
      ## src/common/meson.build ##
    -@@ src/common/meson.build: common_sources_frontend_static += files(
    - # least cryptohash_openssl.c, hmac_openssl.c depend on it.
    - # controldata_utils.c depends on wait_event_types_h. That's arguably a
    - # layering violation, but ...
    +@@ src/common/meson.build: common_sources_cflags = {
    + # a matter of policy, because it is not appropriate for general purpose
    + # libraries such as libpq to report errors directly.  fe_memutils.c is
    + # excluded because libpq must not exit() on allocation failure.
     +#
    -+# XXX Frontend builds need libpq's pqexpbuffer.h, so adjust the include paths
    -+# appropriately. This seems completely broken.
    - pgcommon = {}
    - pgcommon_variants = {
    -   '_srv': internal_lib_args + {
    -+    'include_directories': include_directories('.'),
    -     'sources': common_sources + [lwlocknames_h] + [wait_event_types_h],
    -     'dependencies': [backend_common_code],
    -   },
    -   '': default_lib_args + {
    -+    'include_directories': include_directories('../interfaces/libpq', '.'),
    -     'sources': common_sources_frontend_static,
    -     'dependencies': [frontend_common_code],
    -     # Files in libpgcommon.a should use/export the "xxx_private" versions
    ++# The excluded files for _shlib builds are pulled into their own static
    ++# library, for the benefit of test programs that need not follow the
    ++# shlib rules.
    + 
    + common_sources_frontend_shlib = common_sources
    + common_sources_frontend_shlib += files(
    +@@ src/common/meson.build: common_sources_frontend_shlib += files(
    +   'sprompt.c',
    + )
    + 
    +-common_sources_frontend_static = common_sources_frontend_shlib
    +-common_sources_frontend_static += files(
    ++common_sources_excluded_shlib = files(
    +   'fe_memutils.c',
    +   'logging.c',
    + )
    + 
    ++common_sources_frontend_static = [
    ++  common_sources_frontend_shlib,
    ++  common_sources_excluded_shlib,
    ++]
    ++
    + # Build pgcommon once for backend, once for use in frontend binaries, and
    + # once for use in shared libraries
    + #
     @@ src/common/meson.build: pgcommon_variants = {
    -   },
    -   '_shlib': default_lib_args + {
          'pic': true,
    -+    'include_directories': include_directories('../interfaces/libpq', '.'),
          'sources': common_sources_frontend_shlib,
          'dependencies': [frontend_common_code],
    ++    # The JSON API normally exits on out-of-memory; disable that behavior for
    ++    # shared library builds. This requires libpq's pqexpbuffer.h.
    ++    'c_args': ['-DJSONAPI_USE_PQEXPBUFFER'],
    ++    'include_directories': include_directories('../interfaces/libpq'),
        },
    + }
    + 
     @@ src/common/meson.build: foreach name, opts : pgcommon_variants
          c_args = opts.get('c_args', []) + common_cflags[cflagname]
          cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
            c_pch: pch_c_h,
     -      include_directories: include_directories('.'),
            kwargs: opts + {
    ++        'include_directories': [
    ++          include_directories('.'),
    ++          opts.get('include_directories', []),
    ++        ],
              'sources': sources,
              'c_args': c_args,
    +         'build_by_default': false,
     @@ src/common/meson.build: foreach name, opts : pgcommon_variants
        lib = static_library('libpgcommon@0@'.format(name),
            link_with: cflag_libs,
            c_pch: pch_c_h,
     -      include_directories: include_directories('.'),
            kwargs: opts + {
    ++        'include_directories': [
    ++          include_directories('.'),
    ++          opts.get('include_directories', []),
    ++        ],
              'dependencies': opts['dependencies'] + [ssl],
            }
    -
    - ## src/common/parse_manifest.c ##
    -@@ src/common/parse_manifest.c: json_parse_manifest_incremental_init(JsonManifestParseContext *context)
    - 	parse->state = JM_EXPECT_TOPLEVEL_START;
    - 	parse->saw_version_field = false;
    - 
    --	makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true);
    -+	if (!makeJsonLexContextIncremental(&(incstate->lex), PG_UTF8, true))
    -+		context->error_cb(context, "out of memory");
    - 
    - 	incstate->sem.semstate = parse;
    - 	incstate->sem.object_start = json_manifest_object_start;
    -@@ src/common/parse_manifest.c: json_parse_manifest(JsonManifestParseContext *context, const char *buffer,
    - 
    - 	/* Create a JSON lexing context. */
    - 	lex = makeJsonLexContextCstringLen(NULL, buffer, size, PG_UTF8, true);
    -+	if (!lex)
    -+		json_manifest_parse_failure(context, "out of memory");
    - 
    - 	/* Set up semantic actions. */
    - 	sem.semstate = &parse;
    +     )
    +@@ src/common/meson.build: common_srv = pgcommon['_srv']
    + common_shlib = pgcommon['_shlib']
    + common_static = pgcommon['']
    + 
    ++common_excluded_shlib = static_library('libpgcommon_excluded_shlib',
    ++  sources: common_sources_excluded_shlib,
    ++  dependencies: [frontend_common_code],
    ++  build_by_default: false,
    ++  kwargs: default_lib_args + {
    ++    'install': false,
    ++  },
    ++)
    ++
    + subdir('unicode')
     
      ## src/include/common/jsonapi.h ##
     @@
    @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
     + * Don't depend on the internal type header for strval; if callers need access
     + * then they can include the appropriate header themselves.
     + */
    -+#ifdef FRONTEND
    ++#ifdef JSONAPI_USE_PQEXPBUFFER
     +#define StrValType PQExpBufferData
     +#else
     +#define StrValType StringInfoData
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
      typedef JsonParseErrorType (*json_struct_action) (void *state);
     
      ## src/test/modules/test_json_parser/Makefile ##
    +@@ src/test/modules/test_json_parser/Makefile: TAP_TESTS = 1
    + 
    + OBJS = test_json_parser_incremental.o test_json_parser_perf.o $(WIN32RES)
    + 
    +-EXTRA_CLEAN = test_json_parser_incremental$(X) test_json_parser_perf$(X)
    ++EXTRA_CLEAN = test_json_parser_incremental$(X) test_json_parser_incremental_shlib$(X) test_json_parser_perf$(X)
    + 
    + ifdef USE_PGXS
    + PG_CONFIG = pg_config
     @@ src/test/modules/test_json_parser/Makefile: include $(top_builddir)/src/Makefile.global
      include $(top_srcdir)/contrib/contrib-global.mk
      endif
      
    -+# TODO: fix this properly
    -+LDFLAGS_INTERNAL += -lpgcommon $(libpq_pgport)
    -+
    - all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
    +-all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
    ++all: test_json_parser_incremental$(X) test_json_parser_incremental_shlib$(X) test_json_parser_perf$(X)
      
      %.o: $(top_srcdir)/$(subdir)/%.c
    + 
    + test_json_parser_incremental$(X): test_json_parser_incremental.o $(WIN32RES)
    + 	$(CC) $(CFLAGS) $^ $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@
    + 
    ++test_json_parser_incremental_shlib$(X): test_json_parser_incremental.o $(WIN32RES)
    ++	$(CC) $(CFLAGS) $^ $(LDFLAGS) -lpgcommon_excluded_shlib $(libpq_pgport_shlib) -o $@
    ++
    + test_json_parser_perf$(X): test_json_parser_perf.o $(WIN32RES)
    + 	$(CC) $(CFLAGS) $^ $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@
    + 
     
      ## src/test/modules/test_json_parser/meson.build ##
    -@@ src/test/modules/test_json_parser/meson.build: endif
    - 
    - test_json_parser_incremental = executable('test_json_parser_incremental',
    -   test_json_parser_incremental_sources,
    --  dependencies: [frontend_code],
    -+  dependencies: [frontend_code, libpq],
    -   kwargs: default_bin_args + {
    -     'install': false,
    -   },
    -@@ src/test/modules/test_json_parser/meson.build: endif
    - 
    - test_json_parser_perf = executable('test_json_parser_perf',
    -   test_json_parser_perf_sources,
    --  dependencies: [frontend_code],
    -+  dependencies: [frontend_code, libpq],
    -   kwargs: default_bin_args + {
    -     'install': false,
    +@@ src/test/modules/test_json_parser/meson.build: test_json_parser_incremental = executable('test_json_parser_incremental',
        },
    + )
    + 
    ++# A second version of test_json_parser_incremental, this time compiled against
    ++# the shared-library flavor of jsonapi.
    ++test_json_parser_incremental_shlib = executable('test_json_parser_incremental_shlib',
    ++  test_json_parser_incremental_sources,
    ++  dependencies: [frontend_shlib_code, libpq],
    ++  c_args: ['-DJSONAPI_SHLIB_ALLOC'],
    ++  link_with: [common_excluded_shlib],
    ++  kwargs: default_bin_args + {
    ++    'install': false,
    ++  },
    ++)
    ++
    + test_json_parser_perf_sources = files(
    +   'test_json_parser_perf.c',
    + )
    +
    + ## src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl ##
    +@@ src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl: use FindBin;
    + 
    + my $test_file = "$FindBin::RealBin/../tiny.json";
    + 
    +-my $exe = "test_json_parser_incremental";
    ++my @exes =
    ++  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
    + 
    +-# Test the  usage error
    +-my ($stdout, $stderr) = run_command([ $exe, "-c", 10 ]);
    +-like($stderr, qr/Usage:/, 'error message if not enough arguments');
    ++foreach my $exe (@exes)
    ++{
    ++	note "testing executable $exe";
    + 
    +-# Test that we get success for small chunk sizes from 64 down to 1.
    ++	# Test the  usage error
    ++	my ($stdout, $stderr) = run_command([ $exe, "-c", 10 ]);
    ++	like($stderr, qr/Usage:/, 'error message if not enough arguments');
    + 
    +-for (my $size = 64; $size > 0; $size--)
    +-{
    +-	($stdout, $stderr) = run_command([ $exe, "-c", $size, $test_file ]);
    ++	# Test that we get success for small chunk sizes from 64 down to 1.
    ++	for (my $size = 64; $size > 0; $size--)
    ++	{
    ++		($stdout, $stderr) = run_command([ $exe, "-c", $size, $test_file ]);
    + 
    +-	like($stdout, qr/SUCCESS/, "chunk size $size: test succeeds");
    +-	is($stderr, "", "chunk size $size: no error output");
    ++		like($stdout, qr/SUCCESS/, "chunk size $size: test succeeds");
    ++		is($stderr, "", "chunk size $size: no error output");
    ++	}
    + }
    + 
    + done_testing();
    +
    + ## src/test/modules/test_json_parser/t/002_inline.pl ##
    +@@ src/test/modules/test_json_parser/t/002_inline.pl: use Test::More;
    + use File::Temp qw(tempfile);
    + 
    + my $dir = PostgreSQL::Test::Utils::tempdir;
    ++my $exe;
    + 
    + sub test
    + {
    + 	local $Test::Builder::Level = $Test::Builder::Level + 1;
    + 
    + 	my ($name, $json, %params) = @_;
    +-	my $exe = "test_json_parser_incremental";
    + 	my $chunk = length($json);
    + 
    + 	# Test the input with chunk sizes from max(input_size, 64) down to 1
    +@@ src/test/modules/test_json_parser/t/002_inline.pl: sub test
    + 	}
    + }
    + 
    +-test("number", "12345");
    +-test("string", '"hello"');
    +-test("false", "false");
    +-test("true", "true");
    +-test("null", "null");
    +-test("empty object", "{}");
    +-test("empty array", "[]");
    +-test("array with number", "[12345]");
    +-test("array with numbers", "[12345,67890]");
    +-test("array with null", "[null]");
    +-test("array with string", '["hello"]');
    +-test("array with boolean", '[false]');
    +-test("single pair", '{"key": "value"}');
    +-test("heavily nested array", "[" x 3200 . "]" x 3200);
    +-test("serial escapes", '"\\\\\\\\\\\\\\\\"');
    +-test("interrupted escapes", '"\\\\\\"\\\\\\\\\\"\\\\"');
    +-test("whitespace", '     ""     ');
    +-
    +-test("unclosed empty object",
    +-	"{", error => qr/input string ended unexpectedly/);
    +-test("bad key", "{{", error => qr/Expected string or "}", but found "\{"/);
    +-test("bad key", "{{}", error => qr/Expected string or "}", but found "\{"/);
    +-test("numeric key", "{1234: 2}",
    +-	error => qr/Expected string or "}", but found "1234"/);
    +-test(
    +-	"second numeric key",
    +-	'{"a": "a", 1234: 2}',
    +-	error => qr/Expected string, but found "1234"/);
    +-test(
    +-	"unclosed object with pair",
    +-	'{"key": "value"',
    +-	error => qr/input string ended unexpectedly/);
    +-test("missing key value",
    +-	'{"key": }', error => qr/Expected JSON value, but found "}"/);
    +-test(
    +-	"missing colon",
    +-	'{"key" 12345}',
    +-	error => qr/Expected ":", but found "12345"/);
    +-test(
    +-	"missing comma",
    +-	'{"key": 12345 12345}',
    +-	error => qr/Expected "," or "}", but found "12345"/);
    +-test("overnested array",
    +-	"[" x 6401, error => qr/maximum permitted depth is 6400/);
    +-test("overclosed array",
    +-	"[]]", error => qr/Expected end of input, but found "]"/);
    +-test("unexpected token in array",
    +-	"[ }}} ]", error => qr/Expected array element or "]", but found "}"/);
    +-test("junk punctuation", "[ ||| ]", error => qr/Token "|" is invalid/);
    +-test("missing comma in array",
    +-	"[123 123]", error => qr/Expected "," or "]", but found "123"/);
    +-test("misspelled boolean", "tru", error => qr/Token "tru" is invalid/);
    +-test(
    +-	"misspelled boolean in array",
    +-	"[tru]",
    +-	error => qr/Token "tru" is invalid/);
    +-test("smashed top-level scalar", "12zz",
    +-	error => qr/Token "12zz" is invalid/);
    +-test(
    +-	"smashed scalar in array",
    +-	"[12zz]",
    +-	error => qr/Token "12zz" is invalid/);
    +-test(
    +-	"unknown escape sequence",
    +-	'"hello\vworld"',
    +-	error => qr/Escape sequence "\\v" is invalid/);
    +-test("unescaped control",
    +-	"\"hello\tworld\"",
    +-	error => qr/Character with value 0x09 must be escaped/);
    +-test(
    +-	"incorrect escape count",
    +-	'"\\\\\\\\\\\\\\"',
    +-	error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
    +-
    +-# Case with three bytes: double-quote, backslash and <f5>.
    +-# Both invalid-token and invalid-escape are possible errors, because for
    +-# smaller chunk sizes the incremental parser skips the string parsing when
    +-# it cannot find an ending quote.
    +-test("incomplete UTF-8 sequence",
    +-	"\"\\\x{F5}",
    +-	error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
    ++my @exes =
    ++  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
    ++
    ++foreach (@exes)
    ++{
    ++	$exe = $_;
    ++	note "testing executable $exe";
    ++
    ++	test("number", "12345");
    ++	test("string", '"hello"');
    ++	test("false", "false");
    ++	test("true", "true");
    ++	test("null", "null");
    ++	test("empty object", "{}");
    ++	test("empty array", "[]");
    ++	test("array with number", "[12345]");
    ++	test("array with numbers", "[12345,67890]");
    ++	test("array with null", "[null]");
    ++	test("array with string", '["hello"]');
    ++	test("array with boolean", '[false]');
    ++	test("single pair", '{"key": "value"}');
    ++	test("heavily nested array", "[" x 3200 . "]" x 3200);
    ++	test("serial escapes", '"\\\\\\\\\\\\\\\\"');
    ++	test("interrupted escapes", '"\\\\\\"\\\\\\\\\\"\\\\"');
    ++	test("whitespace", '     ""     ');
    ++
    ++	test("unclosed empty object",
    ++		"{", error => qr/input string ended unexpectedly/);
    ++	test("bad key", "{{",
    ++		error => qr/Expected string or "}", but found "\{"/);
    ++	test("bad key", "{{}",
    ++		error => qr/Expected string or "}", but found "\{"/);
    ++	test("numeric key", "{1234: 2}",
    ++		error => qr/Expected string or "}", but found "1234"/);
    ++	test(
    ++		"second numeric key",
    ++		'{"a": "a", 1234: 2}',
    ++		error => qr/Expected string, but found "1234"/);
    ++	test(
    ++		"unclosed object with pair",
    ++		'{"key": "value"',
    ++		error => qr/input string ended unexpectedly/);
    ++	test("missing key value",
    ++		'{"key": }', error => qr/Expected JSON value, but found "}"/);
    ++	test(
    ++		"missing colon",
    ++		'{"key" 12345}',
    ++		error => qr/Expected ":", but found "12345"/);
    ++	test(
    ++		"missing comma",
    ++		'{"key": 12345 12345}',
    ++		error => qr/Expected "," or "}", but found "12345"/);
    ++	test("overnested array",
    ++		"[" x 6401, error => qr/maximum permitted depth is 6400/);
    ++	test("overclosed array",
    ++		"[]]", error => qr/Expected end of input, but found "]"/);
    ++	test("unexpected token in array",
    ++		"[ }}} ]", error => qr/Expected array element or "]", but found "}"/);
    ++	test("junk punctuation", "[ ||| ]", error => qr/Token "|" is invalid/);
    ++	test("missing comma in array",
    ++		"[123 123]", error => qr/Expected "," or "]", but found "123"/);
    ++	test("misspelled boolean", "tru", error => qr/Token "tru" is invalid/);
    ++	test(
    ++		"misspelled boolean in array",
    ++		"[tru]",
    ++		error => qr/Token "tru" is invalid/);
    ++	test(
    ++		"smashed top-level scalar",
    ++		"12zz",
    ++		error => qr/Token "12zz" is invalid/);
    ++	test(
    ++		"smashed scalar in array",
    ++		"[12zz]",
    ++		error => qr/Token "12zz" is invalid/);
    ++	test(
    ++		"unknown escape sequence",
    ++		'"hello\vworld"',
    ++		error => qr/Escape sequence "\\v" is invalid/);
    ++	test("unescaped control",
    ++		"\"hello\tworld\"",
    ++		error => qr/Character with value 0x09 must be escaped/);
    ++	test(
    ++		"incorrect escape count",
    ++		'"\\\\\\\\\\\\\\"',
    ++		error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
    ++
    ++	# Case with three bytes: double-quote, backslash and <f5>.
    ++	# Both invalid-token and invalid-escape are possible errors, because for
    ++	# smaller chunk sizes the incremental parser skips the string parsing when
    ++	# it cannot find an ending quote.
    ++	test("incomplete UTF-8 sequence",
    ++		"\"\\\x{F5}",
    ++		error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
    ++}
    + 
    + done_testing();
    +
    + ## src/test/modules/test_json_parser/t/003_test_semantic.pl ##
    +@@ src/test/modules/test_json_parser/t/003_test_semantic.pl: use File::Temp qw(tempfile);
    + my $test_file = "$FindBin::RealBin/../tiny.json";
    + my $test_out = "$FindBin::RealBin/../tiny.out";
    + 
    +-my $exe = "test_json_parser_incremental";
    ++my @exes =
    ++  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
    + 
    +-my ($stdout, $stderr) = run_command([ $exe, "-s", $test_file ]);
    ++foreach my $exe (@exes)
    ++{
    ++	note "testing executable $exe";
    + 
    +-is($stderr, "", "no error output");
    ++	my ($stdout, $stderr) = run_command([ $exe, "-s", $test_file ]);
    + 
    +-my $dir = PostgreSQL::Test::Utils::tempdir;
    +-my ($fh, $fname) = tempfile(DIR => $dir);
    ++	is($stderr, "", "no error output");
    + 
    +-print $fh $stdout, "\n";
    ++	my $dir = PostgreSQL::Test::Utils::tempdir;
    ++	my ($fh, $fname) = tempfile(DIR => $dir);
    + 
    +-close($fh);
    ++	print $fh $stdout, "\n";
    + 
    +-my @diffopts = ("-u");
    +-push(@diffopts, "--strip-trailing-cr") if $windows_os;
    +-($stdout, $stderr) = run_command([ "diff", @diffopts, $fname, $test_out ]);
    ++	close($fh);
    + 
    +-is($stdout, "", "no output diff");
    +-is($stderr, "", "no diff error");
    ++	my @diffopts = ("-u");
    ++	push(@diffopts, "--strip-trailing-cr") if $windows_os;
    ++	($stdout, $stderr) =
    ++	  run_command([ "diff", @diffopts, $fname, $test_out ]);
    ++
    ++	is($stdout, "", "no output diff");
    ++	is($stderr, "", "no diff error");
    ++}
    + 
    + done_testing();
2:  da6c573a34 ! 2:  0335987632 libpq: add OAUTHBEARER SASL mechanism
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +					goto keep_going;
     +				}
      
    - 				/* OK, we have processed the message; mark data consumed */
    - 				conn->inStart = conn->inCursor;
    + 				/*
    + 				 * OK, we have processed the message; mark data consumed.  We
     @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
      				goto keep_going;
      			}
    @@ src/interfaces/libpq/fe-misc.c: pqSocketCheck(PGconn *conn, int forRead, int for
     
      ## src/interfaces/libpq/libpq-fe.h ##
     @@ src/interfaces/libpq/libpq-fe.h: extern "C"
    - #define LIBPQ_HAS_TRACE_FLAGS 1
    - /* Indicates that PQsslAttribute(NULL, "library") is useful */
    - #define LIBPQ_HAS_SSL_LIBRARY_DETECTION 1
    + /* Indicates presence of PQsocketPoll, PQgetCurrentTimeUSec */
    + #define LIBPQ_HAS_SOCKET_POLL 1
    + 
    ++/* Features added in PostgreSQL v18: */
     +/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
     +#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
    - 
    ++
      /*
       * Option flags for PQcopyResult
    +  */
     @@ src/interfaces/libpq/libpq-fe.h: typedef enum
      	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
      	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
3:  ebaaec0ecd ! 3:  a0806d7c65 backend: add OAUTHBEARER SASL mechanism
    @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[]
      	{
      		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
     
    - ## src/common/Makefile ##
    -@@ src/common/Makefile: override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
    - override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
    - 
    - override CPPFLAGS := -DFRONTEND -I. -I$(top_srcdir)/src/common -I$(libpq_srcdir) $(CPPFLAGS)
    --LIBS += $(PTHREAD_LIBS)
    -+LIBS += $(PTHREAD_LIBS) $(libpq_pgport)
    - 
    - OBJS_COMMON = \
    - 	archive.o \
    -
      ## src/include/libpq/auth.h ##
     @@
      
4:  78ce297b9e ! 4:  5d5694934a Review comments
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c
      /*
       * Parsed JSON Representations
       *
    -@@ src/interfaces/libpq/fe-auth-oauth-curl.c: parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
    - 		return false;
    - 	}
    - 
    --	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
    -+	if (!makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true))
    -+	{
    -+		actx_error(actx, "out of memory");
    -+		return false;
    -+	}
    - 
    - 	ctx.errbuf = &actx->errbuf;
    - 	ctx.fields = fields;
     @@ src/interfaces/libpq/fe-auth-oauth-curl.c: setup_curl_handles(struct async_ctx *actx)
      	 * pretty strict when it comes to provider behavior, so we have to check
      	 * what comes back anyway.)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c: finish_token_request(struct async_ctx
      }
      
      /*
    -
    - ## src/interfaces/libpq/fe-auth-oauth.c ##
    -@@ src/interfaces/libpq/fe-auth-oauth.c: handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
    - 		return false;
    - 	}
    - 
    --	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
    -+	if (!makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true))
    -+	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("out of memory"));
    -+		return false;
    -+	}
    - 
    - 	initPQExpBuffer(&ctx.errbuf);
    - 	sem.semstate = &ctx;
5:  6f4709574d = 5:  b459ce7e7b DO NOT MERGE: Add pytest suite for OAuth
Attachment: v27-0001-common-jsonapi-support-libpq-as-a-client.patch (application/octet-stream)
From 202b9ecef64f9507fa93de9b303610a0fd321597 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v27 1/5] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For libpq, use PQExpBuffer instead of StringInfo. This requires us to
track allocation failures so that we can return JSON_OUT_OF_MEMORY as
needed rather than exit()ing.

Co-authored-by: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/common/Makefile                           |  23 +-
 src/common/jsonapi.c                          | 473 +++++++++++++-----
 src/common/meson.build                        |  35 +-
 src/include/common/jsonapi.h                  |  20 +-
 src/test/modules/test_json_parser/Makefile    |   7 +-
 src/test/modules/test_json_parser/meson.build |  12 +
 .../t/001_test_json_parser_incremental.pl     |  25 +-
 .../modules/test_json_parser/t/002_inline.pl  | 177 ++++---
 .../test_json_parser/t/003_test_semantic.pl   |  31 +-
 9 files changed, 560 insertions(+), 243 deletions(-)

diff --git a/src/common/Makefile b/src/common/Makefile
index 89ef61c52a..b3f12dc97f 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -104,14 +104,20 @@ endif
 # a matter of policy, because it is not appropriate for general purpose
 # libraries such as libpq to report errors directly.  fe_memutils.c is
 # excluded because libpq must not exit() on allocation failure.
+#
+# The excluded files for _shlib builds are pulled into their own static
+# library, for the benefit of test programs that need not follow the
+# shlib rules.
 OBJS_FRONTEND_SHLIB = \
 	$(OBJS_COMMON) \
 	restricted_token.o \
 	sprompt.o
-OBJS_FRONTEND = \
-	$(OBJS_FRONTEND_SHLIB) \
+OBJS_EXCLUDED_SHLIB = \
 	fe_memutils.o \
 	logging.o
+OBJS_FRONTEND = \
+	$(OBJS_FRONTEND_SHLIB) \
+	$(OBJS_EXCLUDED_SHLIB)
 
 # foo.o, foo_shlib.o, and foo_srv.o are all built from foo.c
 OBJS_SHLIB = $(OBJS_FRONTEND_SHLIB:%.o=%_shlib.o)
@@ -122,7 +128,7 @@ TOOLSDIR = $(top_srcdir)/src/tools
 GEN_KEYWORDLIST = $(PERL) -I $(TOOLSDIR) $(TOOLSDIR)/gen_keywordlist.pl
 GEN_KEYWORDLIST_DEPS = $(TOOLSDIR)/gen_keywordlist.pl $(TOOLSDIR)/PerfectHash.pm
 
-all: libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a
+all: libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a libpgcommon_excluded_shlib.a
 
 # libpgcommon is needed by some contrib
 install: all installdirs
@@ -155,6 +161,11 @@ libpgcommon_shlib.a: $(OBJS_SHLIB)
 	rm -f $@
 	$(AR) $(AROPT) $@ $^
 
+# The JSON API normally exits on out-of-memory; disable that behavior for shared
+# library builds. This requires libpq's pqexpbuffer.h.
+jsonapi_shlib.o: override CPPFLAGS += -DJSONAPI_USE_PQEXPBUFFER
+jsonapi_shlib.o: override CPPFLAGS += -I$(libpq_srcdir)
+
 # Because this uses its own compilation rule, it doesn't use the
 # dependency tracking logic from Makefile.global.  To make sure that
 # dependency tracking works anyway for the *_shlib.o files, depend on
@@ -164,6 +175,10 @@ libpgcommon_shlib.a: $(OBJS_SHLIB)
 %_shlib.o: %.c %.o
 	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) -c $< -o $@
 
+libpgcommon_excluded_shlib.a: $(OBJS_EXCLUDED_SHLIB)
+	rm -f $@
+	$(AR) $(AROPT) $@ $^
+
 #
 # Server versions of object files
 #
@@ -197,6 +212,6 @@ RYU_OBJS = $(RYU_FILES) $(RYU_FILES:%.o=%_shlib.o) $(RYU_FILES:%.o=%_srv.o)
 $(RYU_OBJS): CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)
 
 clean distclean:
-	rm -f libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a
+	rm -f libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a libpgcommon_excluded_shlib.a
 	rm -f $(OBJS_FRONTEND) $(OBJS_SHLIB) $(OBJS_SRV)
 	rm -f kwlist_d.h
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 2ffcaaa6fd..05a0a031a3 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,70 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef JSONAPI_USE_PQEXPBUFFER
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * By default, we will use palloc/pfree along with StringInfo.  In libpq,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef JSONAPI_USE_PQEXPBUFFER
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define REALLOC realloc
+#define FREE(s) free(s)
+
+#define jsonapi_appendStringInfo			appendPQExpBuffer
+#define jsonapi_appendBinaryStringInfo		appendBinaryPQExpBuffer
+#define jsonapi_appendStringInfoChar		appendPQExpBufferChar
+/* XXX should we add a macro version to PQExpBuffer? */
+#define jsonapi_appendStringInfoCharMacro	appendPQExpBufferChar
+#define jsonapi_createStringInfo			createPQExpBuffer
+#define jsonapi_initStringInfo				initPQExpBuffer
+#define jsonapi_resetStringInfo				resetPQExpBuffer
+#define jsonapi_termStringInfo				termPQExpBuffer
+#define jsonapi_destroyStringInfo			destroyPQExpBuffer
+
+#else							/* !JSONAPI_USE_PQEXPBUFFER */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define REALLOC repalloc
+
+#ifdef FRONTEND
+#define FREE pfree
+#else
+/*
+ * Backend pfree() doesn't accept NULL pointers the way the frontend's version
+ * does; smooth that over to reduce mental gymnastics. The argument is
+ * evaluated only once, to avoid future hair-pulling.
+ */
+#define FREE(s) do {	\
+	void *__v = (s);	\
+	if (__v)			\
+		pfree(__v);		\
+} while (0)
+#endif
+
+#define jsonapi_appendStringInfo			appendStringInfo
+#define jsonapi_appendBinaryStringInfo		appendBinaryStringInfo
+#define jsonapi_appendStringInfoChar		appendStringInfoChar
+#define jsonapi_appendStringInfoCharMacro	appendStringInfoCharMacro
+#define jsonapi_createStringInfo			makeStringInfo
+#define jsonapi_initStringInfo				initStringInfo
+#define jsonapi_resetStringInfo				resetStringInfo
+#define jsonapi_termStringInfo(s)			pfree((s)->data)
+#define jsonapi_destroyStringInfo			destroyStringInfo
+
+#endif							/* JSONAPI_USE_PQEXPBUFFER */
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -103,7 +163,7 @@ struct JsonIncrementalState
 {
 	bool		is_last_chunk;
 	bool		partial_completed;
-	StringInfoData partial_token;
+	StrValType	partial_token;
 };
 
 /*
@@ -219,6 +279,7 @@ static JsonParseErrorType parse_object(JsonLexContext *lex, const JsonSemAction
 static JsonParseErrorType parse_array_element(JsonLexContext *lex, const JsonSemAction *sem);
 static JsonParseErrorType parse_array(JsonLexContext *lex, const JsonSemAction *sem);
 static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
+static bool allocate_incremental_state(JsonLexContext *lex);
 
 /* the null action object used for pure validation */
 const JsonSemAction nullSemAction =
@@ -227,6 +288,10 @@ const JsonSemAction nullSemAction =
 	NULL, NULL, NULL, NULL, NULL
 };
 
+/* sentinels used for out-of-memory conditions */
+static JsonLexContext failed_oom;
+static JsonIncrementalState failed_inc_oom;
+
 /* Parser support routines */
 
 /*
@@ -273,15 +338,11 @@ IsValidJsonNumber(const char *str, size_t len)
 {
 	bool		numeric_error;
 	size_t		total_len;
-	JsonLexContext dummy_lex;
+	JsonLexContext dummy_lex = {0};
 
 	if (len <= 0)
 		return false;
 
-	dummy_lex.incremental = false;
-	dummy_lex.inc_state = NULL;
-	dummy_lex.pstack = NULL;
-
 	/*
 	 * json_lex_number expects a leading  '-' to have been eaten already.
 	 *
@@ -321,6 +382,9 @@ IsValidJsonNumber(const char *str, size_t len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In shlib code, any out-of-memory failures will be deferred to time
+ * of use; this function is guaranteed to return a valid JsonLexContext.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
@@ -328,7 +392,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return &failed_oom;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -341,13 +407,71 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 	lex->input_encoding = encoding;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in shlib code. We defer error handling to time
+		 * of use (json_lex_string()) since we might not need to parse any
+		 * strings anyway.
+		 */
+		lex->strval = jsonapi_createStringInfo();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
 
 	return lex;
 }
 
+/*
+ * Allocates the internal bookkeeping structures for incremental parsing. In
+ * shlib builds, failure is reported in-band, by returning false.
+ */
+#define JS_STACK_CHUNK_SIZE 64
+#define JS_MAX_PROD_LEN 10		/* more than we need */
+#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
+								 * number */
+static bool
+allocate_incremental_state(JsonLexContext *lex)
+{
+	void	   *pstack,
+			   *prediction,
+			   *fnames,
+			   *fnull;
+
+	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
+	pstack = ALLOC(sizeof(JsonParserStack));
+	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
+	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
+	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+	if (!lex->inc_state
+		|| !pstack
+		|| !prediction
+		|| !fnames
+		|| !fnull)
+	{
+		FREE(lex->inc_state);
+		FREE(pstack);
+		FREE(prediction);
+		FREE(fnames);
+		FREE(fnull);
+
+		lex->inc_state = &failed_inc_oom;
+		return false;
+	}
+#endif
+
+	jsonapi_initStringInfo(&(lex->inc_state->partial_token));
+	lex->pstack = pstack;
+	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
+	lex->pstack->prediction = prediction;
+	lex->pstack->pred_index = 0;
+	lex->pstack->fnames = fnames;
+	lex->pstack->fnull = fnull;
+
+	lex->incremental = true;
+	return true;
+}
+
 
 /*
  * makeJsonLexContextIncremental
@@ -357,19 +481,20 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
  * we don't need the input, that will be handed in bit by bit to the
  * parse routine. We also need an accumulator for partial tokens in case
  * the boundary between chunks happens to fall in the middle of a token.
+ *
+ * In shlib code, any out-of-memory failures will be deferred to time of use;
+ * this function is guaranteed to return a valid JsonLexContext.
  */
-#define JS_STACK_CHUNK_SIZE 64
-#define JS_MAX_PROD_LEN 10		/* more than we need */
-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
-								 * number */
-
 JsonLexContext *
 makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 							  bool need_escapes)
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return &failed_oom;
+
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -377,42 +502,65 @@ makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 
 	lex->line_number = 1;
 	lex->input_encoding = encoding;
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+	if (!allocate_incremental_state(lex))
+	{
+		if (lex->flags & JSONLEX_FREE_STRUCT)
+		{
+			FREE(lex);
+			return &failed_oom;
+		}
+
+		/* lex->inc_state tracks the OOM failure; we can return here. */
+		return lex;
+	}
+
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in shlib code. We defer error handling to time
+		 * of use (json_lex_string()) since we might not need to parse any
+		 * strings anyway.
+		 */
+		lex->strval = jsonapi_createStringInfo();
 		lex->flags |= JSONLEX_FREE_STRVAL;
+		lex->parse_strval = true;
 	}
+
 	return lex;
 }
 
-static inline void
+static inline bool
 inc_lex_level(JsonLexContext *lex)
 {
-	lex->lex_level += 1;
-
-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
+	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
 	{
-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
-		lex->pstack->prediction =
-			repalloc(lex->pstack->prediction,
-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
-		if (lex->pstack->fnames)
-			lex->pstack->fnames =
-				repalloc(lex->pstack->fnames,
-						 lex->pstack->stack_size * sizeof(char *));
-		if (lex->pstack->fnull)
-			lex->pstack->fnull =
-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
+		size_t		new_stack_size;
+		char	   *new_prediction;
+		char	  **new_fnames;
+		bool	   *new_fnull;
+
+		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
+
+		new_prediction = REALLOC(lex->pstack->prediction,
+								 new_stack_size * JS_MAX_PROD_LEN);
+		new_fnames = REALLOC(lex->pstack->fnames,
+							 new_stack_size * sizeof(char *));
+		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+		if (!new_prediction || !new_fnames || !new_fnull)
+			return false;
+#endif
+
+		lex->pstack->stack_size = new_stack_size;
+		lex->pstack->prediction = new_prediction;
+		lex->pstack->fnames = new_fnames;
+		lex->pstack->fnull = new_fnull;
 	}
+
+	lex->lex_level += 1;
+	return true;
 }
 
 static inline void
@@ -482,24 +630,31 @@ get_fnull(JsonLexContext *lex)
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
+	if (!lex || lex == &failed_oom)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		jsonapi_destroyStringInfo(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		jsonapi_destroyStringInfo(lex->errormsg);
 
 	if (lex->incremental)
 	{
-		pfree(lex->inc_state->partial_token.data);
-		pfree(lex->inc_state);
-		pfree(lex->pstack->prediction);
-		pfree(lex->pstack->fnames);
-		pfree(lex->pstack->fnull);
-		pfree(lex->pstack);
+		jsonapi_termStringInfo(&lex->inc_state->partial_token);
+		FREE(lex->inc_state);
+		FREE(lex->pstack->prediction);
+		FREE(lex->pstack->fnames);
+		FREE(lex->pstack->fnull);
+		FREE(lex->pstack);
 	}
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -522,22 +677,13 @@ JsonParseErrorType
 pg_parse_json(JsonLexContext *lex, const JsonSemAction *sem)
 {
 #ifdef FORCE_JSON_PSTACK
-
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-
 	/*
 	 * We don't need partial token processing, there is only one chunk. But we
 	 * still need to init the partial token string so that freeJsonLexContext
-	 * works.
+	 * works, so perform the full incremental initialization.
 	 */
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+	if (!allocate_incremental_state(lex))
+		return JSON_OUT_OF_MEMORY;
 
 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
 
@@ -546,6 +692,8 @@ pg_parse_json(JsonLexContext *lex, const JsonSemAction *sem)
 	JsonTokenType tok;
 	JsonParseErrorType result;
 
+	if (lex == &failed_oom)
+		return JSON_OUT_OF_MEMORY;
 	if (lex->incremental)
 		return JSON_INVALID_LEXER_TYPE;
 
@@ -591,13 +739,16 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	int			count;
 	JsonParseErrorType result;
 
+	if (lex == &failed_oom)
+		return JSON_OUT_OF_MEMORY;
+
 	/*
 	 * It's safe to do this with a shallow copy because the lexical routines
 	 * don't scribble on the input. They do scribble on the other pointers
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.parse_strval = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -658,7 +809,8 @@ pg_parse_json_incremental(JsonLexContext *lex,
 	JsonParseContext ctx = JSON_PARSE_VALUE;
 	JsonParserStack *pstack = lex->pstack;
 
-
+	if (lex == &failed_oom || lex->inc_state == &failed_inc_oom)
+		return JSON_OUT_OF_MEMORY;
 	if (!lex->incremental)
 		return JSON_INVALID_LEXER_TYPE;
 
@@ -737,7 +889,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_OEND:
@@ -766,7 +920,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_AEND:
@@ -793,9 +949,11 @@ pg_parse_json_incremental(JsonLexContext *lex,
 						json_ofield_action ostart = sem->object_field_start;
 						json_ofield_action oend = sem->object_field_end;
 
-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
+						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
 						{
-							fname = pstrdup(lex->strval->data);
+							fname = STRDUP(lex->strval->data);
+							if (fname == NULL)
+								return JSON_OUT_OF_MEMORY;
 						}
 						set_fname(lex, fname);
 					}
@@ -883,14 +1041,21 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							 */
 							if (tok == JSON_TOKEN_STRING)
 							{
-								if (lex->strval != NULL)
-									pstack->scalar_val = pstrdup(lex->strval->data);
+								if (lex->parse_strval)
+								{
+									pstack->scalar_val = STRDUP(lex->strval->data);
+									if (pstack->scalar_val == NULL)
+										return JSON_OUT_OF_MEMORY;
+								}
 							}
 							else
 							{
 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
 
-								pstack->scalar_val = palloc(tlen + 1);
+								pstack->scalar_val = ALLOC(tlen + 1);
+								if (pstack->scalar_val == NULL)
+									return JSON_OUT_OF_MEMORY;
+
 								memcpy(pstack->scalar_val, lex->token_start, tlen);
 								pstack->scalar_val[tlen] = '\0';
 							}
@@ -1025,14 +1190,21 @@ parse_scalar(JsonLexContext *lex, const JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->parse_strval)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -1066,8 +1238,12 @@ parse_object_field(JsonLexContext *lex, const JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -1123,6 +1299,11 @@ parse_object(JsonLexContext *lex, const JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -1312,15 +1493,27 @@ json_lex(JsonLexContext *lex)
 	const char *const end = lex->input + lex->input_length;
 	JsonParseErrorType result;
 
-	if (lex->incremental && lex->inc_state->partial_completed)
+	if (lex == &failed_oom || lex->inc_state == &failed_inc_oom)
+		return JSON_OUT_OF_MEMORY;
+
+	if (lex->incremental)
 	{
-		/*
-		 * We just lexed a completed partial token on the last call, so reset
-		 * everything
-		 */
-		resetStringInfo(&(lex->inc_state->partial_token));
-		lex->token_terminator = lex->input;
-		lex->inc_state->partial_completed = false;
+		if (lex->inc_state->partial_completed)
+		{
+			/*
+			 * We just lexed a completed partial token on the last call, so
+			 * reset everything
+			 */
+			jsonapi_resetStringInfo(&(lex->inc_state->partial_token));
+			lex->token_terminator = lex->input;
+			lex->inc_state->partial_completed = false;
+		}
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+		/* Make sure our partial token buffer is valid before using it below. */
+		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
+			return JSON_OUT_OF_MEMORY;
+#endif
 	}
 
 	s = lex->token_terminator;
@@ -1331,7 +1524,7 @@ json_lex(JsonLexContext *lex)
 		 * We have a partial token. Extend it and if completed lex it by a
 		 * recursive call
 		 */
-		StringInfo	ptok = &(lex->inc_state->partial_token);
+		StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
 		JsonLexContext dummy_lex;
@@ -1358,7 +1551,7 @@ json_lex(JsonLexContext *lex)
 			{
 				char		c = lex->input[i];
 
-				appendStringInfoCharMacro(ptok, c);
+				jsonapi_appendStringInfoCharMacro(ptok, c);
 				added++;
 				if (c == '"' && escapes % 2 == 0)
 				{
@@ -1403,7 +1596,7 @@ json_lex(JsonLexContext *lex)
 						case '8':
 						case '9':
 							{
-								appendStringInfoCharMacro(ptok, cc);
+								jsonapi_appendStringInfoCharMacro(ptok, cc);
 								added++;
 							}
 							break;
@@ -1424,7 +1617,7 @@ json_lex(JsonLexContext *lex)
 
 				if (JSON_ALPHANUMERIC_CHAR(cc))
 				{
-					appendStringInfoCharMacro(ptok, cc);
+					jsonapi_appendStringInfoCharMacro(ptok, cc);
 					added++;
 				}
 				else
@@ -1467,6 +1660,7 @@ json_lex(JsonLexContext *lex)
 		dummy_lex.input_length = ptok->len;
 		dummy_lex.input_encoding = lex->input_encoding;
 		dummy_lex.incremental = false;
+		dummy_lex.parse_strval = lex->parse_strval;
 		dummy_lex.strval = lex->strval;
 
 		partial_result = json_lex(&dummy_lex);
@@ -1622,8 +1816,7 @@ json_lex(JsonLexContext *lex)
 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
 						p == lex->input + lex->input_length)
 					{
-						appendBinaryStringInfo(
-											   &(lex->inc_state->partial_token), s, end - s);
+						jsonapi_appendBinaryStringInfo(&(lex->inc_state->partial_token), s, end - s);
 						return JSON_INCOMPLETE;
 					}
 
@@ -1680,8 +1873,9 @@ json_lex_string(JsonLexContext *lex)
 	do { \
 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
 		{ \
-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
-								   lex->token_start, end - lex->token_start); \
+			jsonapi_appendBinaryStringInfo(&lex->inc_state->partial_token, \
+										   lex->token_start, \
+										   end - lex->token_start); \
 			return JSON_INCOMPLETE; \
 		} \
 		lex->token_terminator = s; \
@@ -1694,8 +1888,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->parse_strval)
+	{
+#ifdef JSONAPI_USE_PQEXPBUFFER
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		jsonapi_resetStringInfo(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -1732,7 +1933,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->parse_strval)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -1789,19 +1990,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						jsonapi_appendBinaryStringInfo(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						jsonapi_appendStringInfoChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->parse_strval)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -1811,22 +2012,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						jsonapi_appendStringInfoChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						jsonapi_appendStringInfoChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						jsonapi_appendStringInfoChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						jsonapi_appendStringInfoChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						jsonapi_appendStringInfoChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						jsonapi_appendStringInfoChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -1861,7 +2062,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to jsonapi_appendBinaryStringInfo.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -1885,8 +2086,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->parse_strval)
+				jsonapi_appendBinaryStringInfo(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -1902,6 +2103,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef JSONAPI_USE_PQEXPBUFFER
+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -2019,8 +2225,8 @@ json_lex_number(JsonLexContext *lex, const char *s,
 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
 		len >= lex->input_length)
 	{
-		appendBinaryStringInfo(&lex->inc_state->partial_token,
-							   lex->token_start, s - lex->token_start);
+		jsonapi_appendBinaryStringInfo(&lex->inc_state->partial_token,
+									   lex->token_start, s - lex->token_start);
 		if (num_err != NULL)
 			*num_err = error;
 
@@ -2096,19 +2302,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY || lex == &failed_oom)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		jsonapi_resetStringInfo(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = jsonapi_createStringInfo();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define json_token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	jsonapi_appendStringInfo((lex)->errormsg, _(format), \
+							 (int) ((lex)->token_terminator - (lex)->token_start), \
+							 (lex)->token_start);
 
 	switch (error)
 	{
@@ -2127,9 +2339,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			jsonapi_appendStringInfo(lex->errormsg,
+									 _("Character with value 0x%02x must be escaped."),
+									 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -2160,6 +2372,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			json_token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -2191,15 +2406,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef json_token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 "unexpected json parse error type: %d",
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in shlib code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		jsonapi_appendStringInfo(lex->errormsg,
+								 "unexpected json parse error type: %d",
+								 (int) error);
+	}
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index 1a564e1dce..5dd4ad8d89 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -103,6 +103,10 @@ common_sources_cflags = {
 # a matter of policy, because it is not appropriate for general purpose
 # libraries such as libpq to report errors directly.  fe_memutils.c is
 # excluded because libpq must not exit() on allocation failure.
+#
+# The excluded files for _shlib builds are pulled into their own static
+# library, for the benefit of test programs that need not follow the
+# shlib rules.
 
 common_sources_frontend_shlib = common_sources
 common_sources_frontend_shlib += files(
@@ -110,12 +114,16 @@ common_sources_frontend_shlib += files(
   'sprompt.c',
 )
 
-common_sources_frontend_static = common_sources_frontend_shlib
-common_sources_frontend_static += files(
+common_sources_excluded_shlib = files(
   'fe_memutils.c',
   'logging.c',
 )
 
+common_sources_frontend_static = [
+  common_sources_frontend_shlib,
+  common_sources_excluded_shlib,
+]
+
 # Build pgcommon once for backend, once for use in frontend binaries, and
 # once for use in shared libraries
 #
@@ -143,6 +151,10 @@ pgcommon_variants = {
     'pic': true,
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
+    # The JSON API normally exits on out-of-memory; disable that behavior for
+    # shared library builds. This requires libpq's pqexpbuffer.h.
+    'c_args': ['-DJSONAPI_USE_PQEXPBUFFER'],
+    'include_directories': include_directories('../interfaces/libpq'),
   },
 }
 
@@ -158,8 +170,11 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
+        'include_directories': [
+          include_directories('.'),
+          opts.get('include_directories', []),
+        ],
         'sources': sources,
         'c_args': c_args,
         'build_by_default': false,
@@ -171,8 +186,11 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
+        'include_directories': [
+          include_directories('.'),
+          opts.get('include_directories', []),
+        ],
         'dependencies': opts['dependencies'] + [ssl],
       }
     )
@@ -183,4 +201,13 @@ common_srv = pgcommon['_srv']
 common_shlib = pgcommon['_shlib']
 common_static = pgcommon['']
 
+common_excluded_shlib = static_library('libpgcommon_excluded_shlib',
+  sources: common_sources_excluded_shlib,
+  dependencies: [frontend_common_code],
+  build_by_default: false,
+  kwargs: default_lib_args + {
+    'install': false,
+  },
+)
+
 subdir('unicode')
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index a995fdbe08..c084c3bcb3 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -51,6 +49,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -64,6 +63,18 @@ typedef enum JsonParseErrorType
 typedef struct JsonParserStack JsonParserStack;
 typedef struct JsonIncrementalState JsonIncrementalState;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef JSONAPI_USE_PQEXPBUFFER
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif
+
+typedef struct StrValType StrValType;
+
 /*
  * All the fields in this structure should be treated as read-only.
  *
@@ -102,8 +113,9 @@ typedef struct JsonLexContext
 	const char *line_start;		/* where that line starts within input */
 	JsonParserStack *pstack;
 	JsonIncrementalState *inc_state;
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		parse_strval;
+	StrValType *strval;			/* only used if parse_strval == true */
+	StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/test/modules/test_json_parser/Makefile b/src/test/modules/test_json_parser/Makefile
index 2dc7175b7c..472e38d068 100644
--- a/src/test/modules/test_json_parser/Makefile
+++ b/src/test/modules/test_json_parser/Makefile
@@ -6,7 +6,7 @@ TAP_TESTS = 1
 
 OBJS = test_json_parser_incremental.o test_json_parser_perf.o $(WIN32RES)
 
-EXTRA_CLEAN = test_json_parser_incremental$(X) test_json_parser_perf$(X)
+EXTRA_CLEAN = test_json_parser_incremental$(X) test_json_parser_incremental_shlib$(X) test_json_parser_perf$(X)
 
 ifdef USE_PGXS
 PG_CONFIG = pg_config
@@ -19,13 +19,16 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
 
-all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
+all: test_json_parser_incremental$(X) test_json_parser_incremental_shlib$(X) test_json_parser_perf$(X)
 
 %.o: $(top_srcdir)/$(subdir)/%.c
 
 test_json_parser_incremental$(X): test_json_parser_incremental.o $(WIN32RES)
 	$(CC) $(CFLAGS) $^ $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@
 
+test_json_parser_incremental_shlib$(X): test_json_parser_incremental.o $(WIN32RES)
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) -lpgcommon_excluded_shlib $(libpq_pgport_shlib) -o $@
+
 test_json_parser_perf$(X): test_json_parser_perf.o $(WIN32RES)
 	$(CC) $(CFLAGS) $^ $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@
 
diff --git a/src/test/modules/test_json_parser/meson.build b/src/test/modules/test_json_parser/meson.build
index b224f3e07e..059a8b71bd 100644
--- a/src/test/modules/test_json_parser/meson.build
+++ b/src/test/modules/test_json_parser/meson.build
@@ -19,6 +19,18 @@ test_json_parser_incremental = executable('test_json_parser_incremental',
   },
 )
 
+# A second version of test_json_parser_incremental, this time compiled against
+# the shared-library flavor of jsonapi.
+test_json_parser_incremental_shlib = executable('test_json_parser_incremental_shlib',
+  test_json_parser_incremental_sources,
+  dependencies: [frontend_shlib_code, libpq],
+  c_args: ['-DJSONAPI_SHLIB_ALLOC'],
+  link_with: [common_excluded_shlib],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+
 test_json_parser_perf_sources = files(
   'test_json_parser_perf.c',
 )
diff --git a/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl b/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl
index abf0d7a237..8cc42e8e29 100644
--- a/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl
+++ b/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl
@@ -13,20 +13,25 @@ use FindBin;
 
 my $test_file = "$FindBin::RealBin/../tiny.json";
 
-my $exe = "test_json_parser_incremental";
+my @exes =
+  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
 
-# Test the  usage error
-my ($stdout, $stderr) = run_command([ $exe, "-c", 10 ]);
-like($stderr, qr/Usage:/, 'error message if not enough arguments');
+foreach my $exe (@exes)
+{
+	note "testing executable $exe";
 
-# Test that we get success for small chunk sizes from 64 down to 1.
+	# Test the  usage error
+	my ($stdout, $stderr) = run_command([ $exe, "-c", 10 ]);
+	like($stderr, qr/Usage:/, 'error message if not enough arguments');
 
-for (my $size = 64; $size > 0; $size--)
-{
-	($stdout, $stderr) = run_command([ $exe, "-c", $size, $test_file ]);
+	# Test that we get success for small chunk sizes from 64 down to 1.
+	for (my $size = 64; $size > 0; $size--)
+	{
+		($stdout, $stderr) = run_command([ $exe, "-c", $size, $test_file ]);
 
-	like($stdout, qr/SUCCESS/, "chunk size $size: test succeeds");
-	is($stderr, "", "chunk size $size: no error output");
+		like($stdout, qr/SUCCESS/, "chunk size $size: test succeeds");
+		is($stderr, "", "chunk size $size: no error output");
+	}
 }
 
 done_testing();
diff --git a/src/test/modules/test_json_parser/t/002_inline.pl b/src/test/modules/test_json_parser/t/002_inline.pl
index 8d62eb44c8..5b6c6dc4ae 100644
--- a/src/test/modules/test_json_parser/t/002_inline.pl
+++ b/src/test/modules/test_json_parser/t/002_inline.pl
@@ -13,13 +13,13 @@ use Test::More;
 use File::Temp qw(tempfile);
 
 my $dir = PostgreSQL::Test::Utils::tempdir;
+my $exe;
 
 sub test
 {
 	local $Test::Builder::Level = $Test::Builder::Level + 1;
 
 	my ($name, $json, %params) = @_;
-	my $exe = "test_json_parser_incremental";
 	my $chunk = length($json);
 
 	# Test the input with chunk sizes from max(input_size, 64) down to 1
@@ -53,86 +53,99 @@ sub test
 	}
 }
 
-test("number", "12345");
-test("string", '"hello"');
-test("false", "false");
-test("true", "true");
-test("null", "null");
-test("empty object", "{}");
-test("empty array", "[]");
-test("array with number", "[12345]");
-test("array with numbers", "[12345,67890]");
-test("array with null", "[null]");
-test("array with string", '["hello"]');
-test("array with boolean", '[false]');
-test("single pair", '{"key": "value"}');
-test("heavily nested array", "[" x 3200 . "]" x 3200);
-test("serial escapes", '"\\\\\\\\\\\\\\\\"');
-test("interrupted escapes", '"\\\\\\"\\\\\\\\\\"\\\\"');
-test("whitespace", '     ""     ');
-
-test("unclosed empty object",
-	"{", error => qr/input string ended unexpectedly/);
-test("bad key", "{{", error => qr/Expected string or "}", but found "\{"/);
-test("bad key", "{{}", error => qr/Expected string or "}", but found "\{"/);
-test("numeric key", "{1234: 2}",
-	error => qr/Expected string or "}", but found "1234"/);
-test(
-	"second numeric key",
-	'{"a": "a", 1234: 2}',
-	error => qr/Expected string, but found "1234"/);
-test(
-	"unclosed object with pair",
-	'{"key": "value"',
-	error => qr/input string ended unexpectedly/);
-test("missing key value",
-	'{"key": }', error => qr/Expected JSON value, but found "}"/);
-test(
-	"missing colon",
-	'{"key" 12345}',
-	error => qr/Expected ":", but found "12345"/);
-test(
-	"missing comma",
-	'{"key": 12345 12345}',
-	error => qr/Expected "," or "}", but found "12345"/);
-test("overnested array",
-	"[" x 6401, error => qr/maximum permitted depth is 6400/);
-test("overclosed array",
-	"[]]", error => qr/Expected end of input, but found "]"/);
-test("unexpected token in array",
-	"[ }}} ]", error => qr/Expected array element or "]", but found "}"/);
-test("junk punctuation", "[ ||| ]", error => qr/Token "|" is invalid/);
-test("missing comma in array",
-	"[123 123]", error => qr/Expected "," or "]", but found "123"/);
-test("misspelled boolean", "tru", error => qr/Token "tru" is invalid/);
-test(
-	"misspelled boolean in array",
-	"[tru]",
-	error => qr/Token "tru" is invalid/);
-test("smashed top-level scalar", "12zz",
-	error => qr/Token "12zz" is invalid/);
-test(
-	"smashed scalar in array",
-	"[12zz]",
-	error => qr/Token "12zz" is invalid/);
-test(
-	"unknown escape sequence",
-	'"hello\vworld"',
-	error => qr/Escape sequence "\\v" is invalid/);
-test("unescaped control",
-	"\"hello\tworld\"",
-	error => qr/Character with value 0x09 must be escaped/);
-test(
-	"incorrect escape count",
-	'"\\\\\\\\\\\\\\"',
-	error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
-
-# Case with three bytes: double-quote, backslash and <f5>.
-# Both invalid-token and invalid-escape are possible errors, because for
-# smaller chunk sizes the incremental parser skips the string parsing when
-# it cannot find an ending quote.
-test("incomplete UTF-8 sequence",
-	"\"\\\x{F5}",
-	error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
+my @exes =
+  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
+
+foreach (@exes)
+{
+	$exe = $_;
+	note "testing executable $exe";
+
+	test("number", "12345");
+	test("string", '"hello"');
+	test("false", "false");
+	test("true", "true");
+	test("null", "null");
+	test("empty object", "{}");
+	test("empty array", "[]");
+	test("array with number", "[12345]");
+	test("array with numbers", "[12345,67890]");
+	test("array with null", "[null]");
+	test("array with string", '["hello"]');
+	test("array with boolean", '[false]');
+	test("single pair", '{"key": "value"}');
+	test("heavily nested array", "[" x 3200 . "]" x 3200);
+	test("serial escapes", '"\\\\\\\\\\\\\\\\"');
+	test("interrupted escapes", '"\\\\\\"\\\\\\\\\\"\\\\"');
+	test("whitespace", '     ""     ');
+
+	test("unclosed empty object",
+		"{", error => qr/input string ended unexpectedly/);
+	test("bad key", "{{",
+		error => qr/Expected string or "}", but found "\{"/);
+	test("bad key", "{{}",
+		error => qr/Expected string or "}", but found "\{"/);
+	test("numeric key", "{1234: 2}",
+		error => qr/Expected string or "}", but found "1234"/);
+	test(
+		"second numeric key",
+		'{"a": "a", 1234: 2}',
+		error => qr/Expected string, but found "1234"/);
+	test(
+		"unclosed object with pair",
+		'{"key": "value"',
+		error => qr/input string ended unexpectedly/);
+	test("missing key value",
+		'{"key": }', error => qr/Expected JSON value, but found "}"/);
+	test(
+		"missing colon",
+		'{"key" 12345}',
+		error => qr/Expected ":", but found "12345"/);
+	test(
+		"missing comma",
+		'{"key": 12345 12345}',
+		error => qr/Expected "," or "}", but found "12345"/);
+	test("overnested array",
+		"[" x 6401, error => qr/maximum permitted depth is 6400/);
+	test("overclosed array",
+		"[]]", error => qr/Expected end of input, but found "]"/);
+	test("unexpected token in array",
+		"[ }}} ]", error => qr/Expected array element or "]", but found "}"/);
+	test("junk punctuation", "[ ||| ]", error => qr/Token "|" is invalid/);
+	test("missing comma in array",
+		"[123 123]", error => qr/Expected "," or "]", but found "123"/);
+	test("misspelled boolean", "tru", error => qr/Token "tru" is invalid/);
+	test(
+		"misspelled boolean in array",
+		"[tru]",
+		error => qr/Token "tru" is invalid/);
+	test(
+		"smashed top-level scalar",
+		"12zz",
+		error => qr/Token "12zz" is invalid/);
+	test(
+		"smashed scalar in array",
+		"[12zz]",
+		error => qr/Token "12zz" is invalid/);
+	test(
+		"unknown escape sequence",
+		'"hello\vworld"',
+		error => qr/Escape sequence "\\v" is invalid/);
+	test("unescaped control",
+		"\"hello\tworld\"",
+		error => qr/Character with value 0x09 must be escaped/);
+	test(
+		"incorrect escape count",
+		'"\\\\\\\\\\\\\\"',
+		error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
+
+	# Case with three bytes: double-quote, backslash and <f5>.
+	# Both invalid-token and invalid-escape are possible errors, because for
+	# smaller chunk sizes the incremental parser skips the string parsing when
+	# it cannot find an ending quote.
+	test("incomplete UTF-8 sequence",
+		"\"\\\x{F5}",
+		error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
+}
 
 done_testing();
diff --git a/src/test/modules/test_json_parser/t/003_test_semantic.pl b/src/test/modules/test_json_parser/t/003_test_semantic.pl
index b6553bbcdd..c11480172d 100644
--- a/src/test/modules/test_json_parser/t/003_test_semantic.pl
+++ b/src/test/modules/test_json_parser/t/003_test_semantic.pl
@@ -16,24 +16,31 @@ use File::Temp qw(tempfile);
 my $test_file = "$FindBin::RealBin/../tiny.json";
 my $test_out = "$FindBin::RealBin/../tiny.out";
 
-my $exe = "test_json_parser_incremental";
+my @exes =
+  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
 
-my ($stdout, $stderr) = run_command([ $exe, "-s", $test_file ]);
+foreach my $exe (@exes)
+{
+	note "testing executable $exe";
 
-is($stderr, "", "no error output");
+	my ($stdout, $stderr) = run_command([ $exe, "-s", $test_file ]);
 
-my $dir = PostgreSQL::Test::Utils::tempdir;
-my ($fh, $fname) = tempfile(DIR => $dir);
+	is($stderr, "", "no error output");
 
-print $fh $stdout, "\n";
+	my $dir = PostgreSQL::Test::Utils::tempdir;
+	my ($fh, $fname) = tempfile(DIR => $dir);
 
-close($fh);
+	print $fh $stdout, "\n";
 
-my @diffopts = ("-u");
-push(@diffopts, "--strip-trailing-cr") if $windows_os;
-($stdout, $stderr) = run_command([ "diff", @diffopts, $fname, $test_out ]);
+	close($fh);
 
-is($stdout, "", "no output diff");
-is($stderr, "", "no diff error");
+	my @diffopts = ("-u");
+	push(@diffopts, "--strip-trailing-cr") if $windows_os;
+	($stdout, $stderr) =
+	  run_command([ "diff", @diffopts, $fname, $test_out ]);
+
+	is($stdout, "", "no output diff");
+	is($stderr, "", "no diff error");
+}
 
 done_testing();
-- 
2.34.1

Attachment: v27-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 03359876322da3b2ba549fd107800d2e57921d6b Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v27 2/5] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This does several things that
you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
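
The delegation convention above can be sketched as follows. This is a
self-contained illustration of the hook-chaining pattern only; the typedef
and enum here are simplified stand-ins for the proposed libpq API (in the
real patch you would save the previous hook via PQgetAuthDataHook() before
calling PQsetAuthDataHook(), and the data argument carries a
type-specific struct):

```c
/* Stand-ins approximating the proposed API; not the real libpq headers. */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN,
} PGauthData;

typedef int (*PQauthDataHook_type) (PGauthData type, void *conn, void *data);

/* Previous hook in the chain, as returned by PQgetAuthDataHook(). */
static PQauthDataHook_type prev_hook = 0;

/*
 * Handle only the device prompt ourselves; delegate anything else to the
 * previous hook. Return > 0 for "handled", 0 for "not handled", < 0 to
 * abandon the connection attempt.
 */
static int
my_auth_data_hook(PGauthData type, void *conn, void *data)
{
	if (type != PQAUTHDATA_PROMPT_OAUTH_DEVICE)
		return prev_hook ? prev_hook(type, conn, data) : 0;

	/* ... display the verification URL and user code in our own UI ... */
	return 1;
}
```

A real implementation would register itself once at startup, e.g.
`prev_hook = PQgetAuthDataHook(); PQsetAuthDataHook(my_auth_data_hook);`.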

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2222 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 25 files changed, 3577 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 537366945c..8c4d9736c3 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8493,6 +8496,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13024,6 +13073,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14049,6 +14182,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 4e279c4bd6..9c6dbe5ccd 100644
--- a/configure.ac
+++ b/configure.ac
@@ -920,6 +920,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1402,6 +1422,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1592,6 +1617,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index f916fce414..f2a761e0a3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2335,6 +2335,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9940,6 +9977,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index ea07126f78..a2e2479821 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3080,6 +3109,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3752,6 +3782,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 979925cc2e..196c96fbb8 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -244,6 +244,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -727,6 +730,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 27f8499d8a..7d593778ec 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..435abee56a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2222 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string: media type parameters may follow the prefix.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			/* HTTP optional whitespace allows only spaces and htabs. */
+			case ' ':
+			case '\t':
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
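For reviewers, the prefix-matching rule above can be exercised in isolation. The sketch below is a minimal standalone restatement of the logic (the name content_type_matches and the plain-bool interface are invented for illustration; the patch's function also reports errors through the async_ctx):

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/*
 * Hypothetical stand-in for check_content_type(): true if content_type
 * matches type exactly, or if the prefix is followed only by HTTP optional
 * whitespace and then a ';' introducing media type parameters.
 */
static bool
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	/* Length-limited, case-insensitive prefix comparison. */
	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	/* Exact match: done. */
	if (content_type[type_len] == '\0')
		return true;

	/* Only spaces/htabs may precede the ';' that starts the parameters. */
	for (size_t i = type_len; content_type[i]; ++i)
	{
		if (content_type[i] == ';')
			return true;
		if (content_type[i] != ' ' && content_type[i] != '\t')
			return false;
	}
	return false;				/* trailing whitespace with no ';' */
}
```

Note that a bare prefix match would wrongly accept something like "application/jsonx", which is why the character after the prefix is inspected.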
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently. We
+		 * accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies a default of 5 seconds if the server doesn't
+		 * provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
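The millisecond-to-itimerspec split in the epoll/timerfd branch is worth a second look, since a zero timeout has a special meaning. Here is a self-contained sketch of just that conversion (the struct is a stand-in so the example compiles off Linux; the real code passes a struct itimerspec to timerfd_settime()):

```c
/* Hypothetical stand-in for the it_value member of struct itimerspec. */
struct sketch_timespec
{
	long		tv_sec;
	long		tv_nsec;
};

/*
 * Sketch of set_timer()'s conversion: negative timeouts produce the zero
 * spec (which disarms a timerfd); a zero timeout, which libcurl uses to
 * mean "call me back immediately", becomes the shortest nonzero timeout;
 * positive timeouts split into seconds and nanoseconds.
 */
static struct sketch_timespec
ms_to_timespec(long timeout)
{
	struct sketch_timespec spec = {0, 0};

	if (timeout == 0)
		spec.tv_nsec = 1;
	else if (timeout > 0)
	{
		spec.tv_sec = timeout / 1000;
		spec.tv_nsec = (timeout % 1000) * 1000000;
	}

	return spec;
}
```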
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in
+	 * the (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1; /* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char * const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN: /* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT: /* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback to capture debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been enabled,
+		 * so set that as well.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of CURLoption.
+	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement,
+	 * CURLOPT_PROTOCOLS_STR, didn't show up until 7.85.0.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char * const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). Each chunk passed to this callback
+ * is at most CURL_MAX_WRITE_SIZE bytes, which is 16 kB by default (and can
+ * only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so the caller can
+	 * invoke drive_request() immediately if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (It
+		 * doesn't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the foot
+		 * in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which will have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only
+	 * acceptable errors; anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		if (err->error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase
+	 * our retry interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		fprintf(stderr, "Visit %s and enter the code: %s\n",
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+			/* fall through */
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on start_request()
+		 * to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break; /* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+				/*
+				 * No Curl requests are running, so we can simplify by
+				 * having the client wait directly on the timerfd rather
+				 * than the multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f943a31cc0
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn, "server's error message contained an embedded NULL and was discarded");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn, "server sent error response without a status");
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	free(ctx.status);
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		goto cleanup;
+	}
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+	if (!conn->oauth_discovery_uri)
+		libpq_append_conn_error(conn, "out of memory");
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 5c8f404463..922cbc0054 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -429,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -447,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -534,6 +535,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -577,26 +587,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -641,7 +673,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -671,11 +703,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -983,12 +1025,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1146,7 +1194,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1163,7 +1211,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1479,3 +1528,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index ab308a0580..c9a0cb79f0 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -364,6 +364,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -627,6 +644,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2644,6 +2662,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3711,6 +3730,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3866,6 +3886,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3899,7 +3929,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3936,6 +3976,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4617,6 +4692,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4734,6 +4810,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7216,6 +7297,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index ca3e028a51..2c68ca041e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -56,6 +56,10 @@ extern "C"
 /* Indicates presence of PQsocketPoll, PQgetCurrentTimeUSec */
 #define LIBPQ_HAS_SOCKET_POLL 1
 
+/* Features added in PostgreSQL v18: */
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
+
 /*
  * Option flags for PQcopyResult
  */
@@ -99,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -180,6 +186,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -712,10 +725,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 8ed1b28fcc..e617f39bef 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 7623aeadab..cf1da9c1a7 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9e951a9e6f..b0a44e06a5 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -368,6 +369,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1715,6 +1718,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1779,6 +1783,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1939,11 +1944,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3451,6 +3459,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

v27-0003-backend-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v27-0003-backend-add-OAUTHBEARER-SASL-mechanism.patchDownload
From a0806d7c6571f20e6a355c8c1a3159583ddbda0e Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v27 3/5] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator module succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the validator module, to deal
  with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |   9 +
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   | 187 +++++
 .../modules/oauth_validator/t/oauth_server.py | 270 +++++++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 31 files changed, 1554 insertions(+), 46 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 1ce6c443a8..94187cea06 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..3c7884baf9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,9 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+
+ <para>
+  TODO
+ </para>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ec9f90e283..bfb73991e7 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -263,6 +263,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..2a0d74a079
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing header, or too short to hold "Bearer " plus any token */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
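
For reference, the b64token format check in validate_token_format() above can be sketched standalone. This is an illustrative Python transcription, not part of the patch; the "Bearer " scheme string is assumed from the BEARER_SCHEME macro defined earlier in the file.

```python
# Mirrors validate_token_format() in auth-oauth.c: returns the token on
# success, None on any formatting problem.
BEARER_SCHEME = "Bearer "  # assumed value of the C macro

B64TOKEN_ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789-._~+/"
)

def validate_token_format(header):
    # Reject a missing header, or one too short to hold "Bearer " plus a token.
    if not header or len(header) <= len(BEARER_SCHEME):
        return None
    # The scheme is matched case-insensitively (RFC 7235 Sec. 2.1).
    if header[:len(BEARER_SCHEME)].lower() != BEARER_SCHEME.lower():
        return None
    # Swallow any additional spaces before the token.
    token = header[len(BEARER_SCHEME):].lstrip(" ")
    if not token:
        return None
    # b64token characters, then any number of trailing '=' characters.
    i = 0
    while i < len(token) and token[i] in B64TOKEN_ALLOWED:
        i += 1
    while i < len(token) and token[i] == "=":
        i += 1
    if i != len(token):
        return None
    return token
```

Like the C code, this accepts the case-insensitive scheme and optional extra spaces, and rejects anything outside the RFC 6750 b64token character set without echoing the offending bytes.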
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 75d588e36a..2245ae24a8 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index af227b1f24..ae9bfff71f 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
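
The contract between a ValidatorModuleResult and the backend's decision (see validate() in auth-oauth.c above) can be mocked in a few lines. This is an illustrative Python model only; `accept_connection` and the `check_usermap` callable are stand-ins for the real C control flow and pg_ident lookup.

```python
# Mock of how the backend consumes a validator's result. skip_usermap
# corresponds to the trust_validator_authz HBA option.
class ValidatorModuleResult:
    def __init__(self, authorized, authn_id=None):
        self.authorized = authorized
        self.authn_id = authn_id

def accept_connection(result, role, skip_usermap, check_usermap):
    if not result.authorized:
        return False
    if skip_usermap:
        # The validator is the authorization authority; the usermap is
        # bypassed, and authentication is optional.
        return True
    # Otherwise the validator must have authenticated someone...
    if not result.authn_id:
        return False
    # ...and that identity must map to the requested role.
    return check_usermap(result.authn_id, role)
```

A trivial usermap that requires the identity to equal the role shows the three outcomes: authorized-and-mapped succeeds, pseudonymous login succeeds only with trust_validator_authz, and an unauthenticated result fails the usermap path.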
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 435abee56a..d9c9fc6cf9 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -143,7 +143,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1864,6 +1864,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1892,13 +1895,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * acceptable errors; anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
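
The retry rule the client hunk above enforces, stated standalone: "authorization_pending" and "slow_down" mean keep polling the token endpoint; any other in-band error aborts. Per RFC 8628 Sec. 3.5, "slow_down" also requires adding 5 seconds to the polling interval. A sketch (function name illustrative):

```python
def next_poll_interval(error, interval):
    """Return the new polling interval in seconds, or None to stop."""
    if error == "authorization_pending":
        # User hasn't finished the browser flow yet; poll again.
        return interval
    if error == "slow_down":
        # RFC 8628 Sec. 3.5: back off by 5 seconds on each slow_down.
        return interval + 5
    # Anything else (e.g. access_denied, expired_token) is fatal.
    return None
```

The repeated +5 backoff is also why the TAP test below probes for a "slow_down interval overflow": a hostile server could push the interval past the representable range.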
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..16ee8acd8f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,187 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->stop;
+
+done_testing();
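
The TAP test smuggles per-test instructions to the mock server through the oauth_client_id field as Base64-encoded JSON (see connstr() in the Perl above and do_POST() in oauth_server.py below). The round trip, in Python (helper names illustrative):

```python
import base64
import json

def encode_client_id(**params):
    # Matches connstr(): JSON-encode, then Base64 with no line breaks
    # (encode_base64($json, "") on the Perl side).
    return base64.b64encode(json.dumps(params).encode()).decode()

def decode_client_id(client_id):
    # Matches the server side: b64decode, then json.loads.
    return json.loads(base64.b64decode(client_id))
```

Connection-string safety is the point of the encoding: the Base64 alphabet avoids spaces, quotes, and '=' separators that would otherwise need escaping in a libpq conninfo value.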
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..b17198302b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,270 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "content_type" in self._test_params:
+            return self._test_params["content_type"]
+
+        return "application/json"
+
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        if self._should_modify() and "interval" in self._test_params:
+            return self._test_params["interval"]
+
+        return 0
+
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "retry_code" in self._test_params:
+            return self._test_params["retry_code"]
+
+        return "authorization_pending"
+
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "uri_spelling" in self._test_params:
+            return self._test_params["uri_spelling"]
+
+        return "verification_uri"
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type())
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling(): uri,
+            "expires_in": 5,
+        }
+
+        interval = self._interval()
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code()}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..7b4dc9c494
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index fe6ebf10f7..d6f9c4cd8b 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2397,6 +2397,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2440,7 +2445,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..abdff5a3c3
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b0a44e06a5..d9d988a03c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1719,6 +1719,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3065,6 +3066,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3660,6 +3663,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
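As an aside, the polling contract that the mock server above enforces (the client must wait at least `interval` seconds between /token requests, and must retry on `authorization_pending`) can be sketched from the client's side roughly as follows. This is an illustrative sketch only, not part of the patch; `request_token` and the injectable `sleep` are hypothetical stand-ins for the real HTTP transport, and only the interval/`authorization_pending`/`slow_down` handling mirrors RFC 8628:

```python
import time


def poll_for_token(request_token, interval, max_attempts=10, sleep=time.sleep):
    """Poll the token endpoint, waiting at least `interval` seconds between
    tries, until the server stops answering authorization_pending."""
    for _ in range(max_attempts):
        resp = request_token()
        if "access_token" in resp:
            return resp
        if resp.get("error") == "authorization_pending":
            sleep(interval)  # the mock server asserts this delay is honored
            continue
        if resp.get("error") == "slow_down":
            interval += 5  # RFC 8628: increase the polling interval by 5s
            sleep(interval)
            continue
        raise RuntimeError("token request failed: %s" % resp.get("error"))
    raise TimeoutError("authorization not completed in time")
```

Injecting `sleep` is what makes the retry/interval behavior testable without real delays, which is essentially what the `_TokenState.min_delay`/`last_try` bookkeeping in the mock server verifies from the other side.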

v27-0004-Review-comments.patch (application/octet-stream)
From 5d5694934a6abcc285024b12632f6e0f8bd74f43 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v27 4/5] Review comments

Fixes and tidy-ups following a review of v21, a few items
are (listed in no specific order):

* Implement a version check for libcurl in autoconf, the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/backend/libpq/auth-oauth.c            | 22 ++++----
 src/interfaces/libpq/fe-auth-oauth-curl.c | 66 +++++++++++++++--------
 2 files changed, 57 insertions(+), 31 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 2a0d74a079..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index d9c9fc6cf9..0e52218422 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -1334,7 +1336,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1356,9 +1363,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1675,7 +1692,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1817,32 +1839,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations returning 403 on
+	 * error, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
 /*
-- 
2.34.1
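For readers skimming the diff, the response-code dispatch that `finish_token_request()` ends up with after these review fixes can be summarized as follows. This is a hedged sketch, not the actual C implementation; the two `parse_*` callables are stand-ins for `parse_access_token()` and `parse_token_error()`:

```python
def handle_token_response(response_code, parse_access_token, parse_token_error):
    # Per RFC 6749, Section 5, a successful response uses 200 OK ...
    if response_code == 200:
        return ("token", parse_access_token())
    # ... and an error response uses 400 Bad Request or 401 Unauthorized.
    # (403 is reportedly seen in the wild but violates the spec, so it is
    # treated as an unexpected response code here.)
    if response_code in (400, 401):
        return ("error", parse_token_error())
    raise ValueError("unexpected response code %d" % response_code)
```

The point of the restructuring is that each status class gets exactly one exit path, instead of the earlier mixed check that partially allowed 401 behind a TODO.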

v27-0005-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From b459ce7e7bdb000f8142a7d25b0738e1a4918283 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v27 5/5] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1864 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5577 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 94187cea06..a127042b4b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -374,6 +375,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -384,7 +387,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index a2e2479821..6dbc383022 100644
--- a/meson.build
+++ b/meson.build
@@ -3423,6 +3423,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3584,6 +3587,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
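As a sanity check on the helper above: Hi(str, salt, i) from RFC 5802, Section 2.2 is exactly PBKDF2 with HMAC-SHA-256 truncated to one output block, so its output can be cross-checked against the standard library. A standalone sketch (independent of the test suite's `cryptography`-based helpers):

```python
import hashlib
import hmac


def h_i(data, salt, i):
    """Hi(str, salt, i) from RFC 5802, Section 2.2."""
    assert i > 0

    # U1 = HMAC(str, salt + INT(1)); Un = HMAC(str, Un-1); Hi = U1 ^ ... ^ Ui
    u = hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    acc = u
    for _ in range(i - 1):
        u = hmac.new(data, u, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, u))

    return acc


# PBKDF2-HMAC-SHA-256 with a 32-byte output is a single block, i.e. Hi itself.
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 2
)
```

This equivalence is why real SCRAM implementations can lean on a library PBKDF2 instead of hand-rolling the XOR loop.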
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..17bd2d3d88
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1864 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OpenID provider thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # return value used when the test installs no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    return "|".join(f"({p})" for p in patterns)
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema() tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp = json.dumps(
+                {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+            )
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
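
As an aside for readers following the SASL details in the tests above: here is a minimal standalone sketch (not part of the patch) of the OAUTHBEARER message shapes from RFC 7628 that these tests exercise. The token value and function name are purely illustrative.

```python
# Sketch of the RFC 7628 OAUTHBEARER message shapes (token value is made up).
KVSEP = "\x01"  # the ^A separator framing each key/value pair


def initial_client_response(token, authzid=""):
    # gs2-header "n,<authzid>," then kvsep, each k=v pair ends with kvsep,
    # and a final kvsep terminates the message.
    return f"n,{authzid},{KVSEP}auth=Bearer {token}{KVSEP}{KVSEP}".encode("ascii")


# When the server answers with an error "challenge" (a JSON body such as
# {"status": "invalid_token"}), the client must reply with a single kvsep
# byte before the exchange fails -- the dummy ^A the tests above assert on.
DUMMY_ERROR_RESPONSE = KVSEP.encode("ascii")
```
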
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to request creation
+    of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
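
The PG_TEST_EXTRA gate above is just whitespace-splitting an environment variable; a stdlib-only sketch of the same check (the helper name and example values are hypothetical):

```python
import os


def python_tests_enabled(env=None):
    # Mirrors the conftest gate: PG_TEST_EXTRA is a space-separated word list,
    # so a substring like "pythonic" must not count as a match.
    env = os.environ if env is None else env
    return "python" in env.get("PG_TEST_EXTRA", "").split()
```
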
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds the translation map for hexdumps: unprintable or non-ASCII bytes map to '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request a TLS upgrade by sending an SSLRequest packet (version 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
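
A side note on pq3.py's version packing: the same protocol() arithmetic produces both the v3.0 startup version and the special SSLRequest pseudo-version used in tls_handshake(). A standalone sketch (the function is reproduced from the patch above):

```python
def protocol(major, minor):
    # Same packing as pq3.protocol(): major in the high 16 bits, minor low.
    return (major << 16) | minor


# The v3.0 startup packet carries 196608 (0x00030000).
assert protocol(3, 0) == 0x00030000
# The SSLRequest pseudo-version 1234.5679 packs to 80877103 (0x04D2162F).
assert protocol(1234, 5679) == 80877103
```
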
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A fixture that yields a connection factory. Each call to the factory
+    returns a socket connected to a Postgres server, wrapped in a pq3
+    connection. Dependent tests will be skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
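(The ExitStack trick above — register every connection the factory opens, then close them all when the fixture tears down — can be sketched in isolation; `open_conn` below is a stand-in for `socket.create_connection`:)

```python
import contextlib


@contextlib.contextmanager
def connection_factory(open_conn):
    """Yields a factory; every connection it opens is closed on exit."""
    with contextlib.ExitStack() as stack:

        def factory():
            conn = open_conn()  # stand-in for socket.create_connection
            stack.enter_context(contextlib.closing(conn))
            return conn

        yield factory


class FakeConn:
    closed = False

    def close(self):
        self.closed = True


conns = []
with connection_factory(lambda: conns.append(FakeConn()) or conns[-1]) as connect:
    connect()
    connect()

# Both connections were cleaned up when the context exited.
assert [c.closed for c in conns] == [True, True]
```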
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
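(In Python terms, the decision table implemented by test_validate() above boils down to the following sketch, with keyword arguments mirroring the GUCs defined in _PG_init:)

```python
def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    """Model of oauthtest.c's test_validate(): authorize iff the token
    matches, and optionally report an authenticated identity."""
    if reflect_role:
        # Ignore the bearer token; echo the requested role as the authn_id.
        return {"authorized": True, "authn_id": role}

    result = {"authorized": False, "authn_id": None}
    if expected_bearer and token == expected_bearer:
        result["authorized"] = True
    if set_authn_id:
        result["authn_id"] = authn_id
    return result


assert validate("abc", "alice", expected_bearer="abc")["authorized"]
assert not validate("abc", "alice", expected_bearer="xyz")["authorized"]
assert validate("x", "alice", reflect_role=True) == {"authorized": True, "authn_id": "alice"}
```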
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The length of the
+    generated token, in characters, may be specified; if unset, a small
+    16-character token will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
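(The size arithmetic in bearer_token() works because base64url expands every 3 bytes into 4 characters, and token_urlsafe() omits padding:)

```python
import secrets

# For a multiple-of-4 target length, request size // 4 * 3 random bytes;
# base64url encoding then yields exactly `size` characters, no padding.
for size in (16, 1024, 4096):
    assert len(secrets.token_urlsafe(size // 4 * 3)) == size
```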
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Instead of a bearer token, the initial response's auth field
+    may be specified explicitly, to exercise corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
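(For reference, the byte string assembled above is the RFC 7628 initial client response: a GS2 header — "n,," meaning no channel binding and no authzid — followed by ^A (0x01)-separated key/value pairs and a double-^A terminator:)

```python
KVSEP = b"\x01"


def initial_response(token: bytes) -> bytes:
    # GS2 header, then the auth key/value pair, then the double-kvsep
    # terminator required by RFC 7628.
    return b"n,," + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP


assert initial_response(b"abcd") == b"n,,\x01auth=Bearer abcd\x01\x01"
```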
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
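(The field scan in _getfield() relies on the ErrorResponse layout: each field is a one-byte type code — "C" for SQLSTATE, "M" for message, "D" for detail — followed by its value. A standalone sketch, with hypothetical field contents:)

```python
# Hypothetical ErrorResponse payload fields, as pq3 would expose them.
fields = [b"C28000", b"Mbearer authentication failed"]


def getfield(fields, type_):
    matches = [f[1:] for f in fields if f.startswith(type_.encode("ascii"))]
    assert len(matches) == 1  # exactly one field of each type expected
    return matches[0]


assert getfield(fields, "C") == b"28000"
assert getfield(fields, "M") == b"bearer authentication failed"
```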
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL initial response; give it something malformed
+    # (or an entirely different message type) instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an initial response with an empty auth value, which will force an
+    # authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0ESELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

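For readers skimming the make_venv script above: it boils down to three subprocess steps (create the venv, upgrade pip, install requirements). As a rough sketch of just the first step — paths are illustrative, and the `--without-pip` shortcut keeps the sketch offline, whereas the real script creates the venv with pip and then upgrades it:

```python
# Minimal stdlib sketch of make_venv's venv-creation step. The pip upgrade
# and requirements installation are skipped (they need network access);
# --without-pip keeps this self-contained, unlike the real script.
import os
import platform
import subprocess
import sys
import tempfile

venv_path = os.path.join(tempfile.mkdtemp(), "venv")
subprocess.run([sys.executable, "-m", "venv", "--without-pip", venv_path],
               check=True)

# Windows puts venv executables under Scripts/; everything else uses bin/.
bindir = "Scripts" if platform.system() == "Windows" else "bin"
python = os.path.join(venv_path, bindir, "python3")
if not os.path.exists(python):  # e.g. python.exe on Windows
    python = os.path.join(venv_path, bindir, "python")

# The resulting interpreter is what the test targets would then invoke.
subprocess.run([python, "-c", "import sys; print(sys.prefix)"], check=True)
```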
#119 Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#118)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 28.08.24 18:31, Jacob Champion wrote:

On Mon, Aug 26, 2024 at 4:23 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I was having trouble reasoning about the palloc-that-isn't-palloc code
during the first few drafts, so I will try a round with the jsonapi_
prefix.

v27 takes a stab at that. I have kept the ALLOC/FREE naming to match
the strategy in other src/common source files.

This looks pretty good to me. One nit on the naming side: this seems like
a gratuitous divergence:

+#define jsonapi_createStringInfo makeStringInfo

The name of the variable JSONAPI_USE_PQEXPBUFFER leads to sections of
code that look like this:

+#ifdef JSONAPI_USE_PQEXPBUFFER
+    if (!new_prediction || !new_fnames || !new_fnull)
+        return false;
+#endif

To me it wouldn't be immediately obvious why "using PQExpBuffer" has
anything to do with this code; the key idea is that we expect any
allocations to be able to fail. Maybe a name like JSONAPI_ALLOW_OOM or
JSONAPI_SHLIB_ALLOCATIONS or...?

Seems ok to me as is. I think the purpose of JSONAPI_USE_PQEXPBUFFER is
adequately explained by this comment

+/*
+ * By default, we will use palloc/pfree along with StringInfo.  In libpq,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef JSONAPI_USE_PQEXPBUFFER

For some of the other proposed names, I'd be afraid that someone might
think you are free to mix and match APIs, OOM behavior, and compilation
options.

Some comments on src/include/common/jsonapi.h:

-#include "lib/stringinfo.h"

I suspect this will fail headerscheck? Probably needs an exception
added there.

+#ifdef JSONAPI_USE_PQEXPBUFFER
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif

Maybe use jsonapi_StrValType here.

+typedef struct StrValType StrValType;

I don't think that is needed. It would just duplicate typedefs that
already exist elsewhere, depending on what StrValType is set to.

+       bool            parse_strval;
+       StrValType *strval;                     /* only used if parse_strval == true */

The parse_strval field could use a better explanation.

I actually don't understand the need for this field. AFAICT, this is
just used to record whether strval is valid. But in the cases where
it's not valid, why do we need to record that? Couldn't you just return
failed_oom in those cases?

#120 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#119)
6 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Aug 30, 2024 at 2:49 AM Peter Eisentraut <peter@eisentraut.org> wrote:

This looks pretty good to me. One nit on the naming side: this seems like
a gratuitous divergence:

+#define jsonapi_createStringInfo makeStringInfo

Whoops, fixed.

Seems ok to me as is. I think the purpose of JSONAPI_USE_PQEXPBUFFER is
adequately explained by this comment

+/*
+ * By default, we will use palloc/pfree along with StringInfo.  In libpq,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef JSONAPI_USE_PQEXPBUFFER

For some of the other proposed names, I'd be afraid that someone might
think you are free to mix and match APIs, OOM behavior, and compilation
options.

Yeah, that's fair.

Some comments on src/include/common/jsonapi.h:

-#include "lib/stringinfo.h"

I suspect this will fail headerscheck? Probably needs an exception
added there.

Currently it passes on my machine and the cfbot. The
forward-declaration of the struct should be enough to make clients
happy. Or was there a different way to break it?

+#ifdef JSONAPI_USE_PQEXPBUFFER
+#define StrValType PQExpBufferData
+#else
+#define StrValType StringInfoData
+#endif

Maybe use jsonapi_StrValType here.

Done.

+typedef struct StrValType StrValType;

I don't think that is needed. It would just duplicate typedefs that
already exist elsewhere, depending on what StrValType is set to.

Okay, removed.

The parse_strval field could use a better explanation.

I actually don't understand the need for this field. AFAICT, this is
just used to record whether strval is valid.

No, it's meant to track the value of the need_escapes argument to the
constructor. I've renamed it and moved the assignment to hopefully
make that a little more obvious. WDYT?

But in the cases where
it's not valid, why do we need to record that? Couldn't you just return
failed_oom in those cases?

We can do that if you'd like. I was just worried about using a valid
(broken) value of PQExpBuffer as a sentinel instead of a separate
flag. It would work as long as reviewers stay vigilant, but if we go
that direction and someone adds an unchecked

lex->strval = jsonapi_makeStringInfo();
// should check for NULL now, but we forgot

into a future patch, an allocation failure in _shlib builds would
silently disable string escaping instead of resulting in a
JSON_OUT_OF_MEMORY later.

Thanks,
--Jacob

Attachments:

since-v27.diff.txt (text/plain; charset=US-ASCII)
1:  202b9ecef6 ! 1:  04b2b11001 common/jsonapi: support libpq as a client
    @@ src/common/jsonapi.c
     +#define jsonapi_appendStringInfoChar		appendPQExpBufferChar
     +/* XXX should we add a macro version to PQExpBuffer? */
     +#define jsonapi_appendStringInfoCharMacro	appendPQExpBufferChar
    -+#define jsonapi_createStringInfo			createPQExpBuffer
    ++#define jsonapi_makeStringInfo				createPQExpBuffer
     +#define jsonapi_initStringInfo				initPQExpBuffer
     +#define jsonapi_resetStringInfo				resetPQExpBuffer
     +#define jsonapi_termStringInfo				termPQExpBuffer
    @@ src/common/jsonapi.c
     +#define jsonapi_appendBinaryStringInfo		appendBinaryStringInfo
     +#define jsonapi_appendStringInfoChar		appendStringInfoChar
     +#define jsonapi_appendStringInfoCharMacro	appendStringInfoCharMacro
    -+#define jsonapi_createStringInfo			makeStringInfo
    ++#define jsonapi_makeStringInfo				makeStringInfo
     +#define jsonapi_initStringInfo				initStringInfo
     +#define jsonapi_resetStringInfo				resetStringInfo
     +#define jsonapi_termStringInfo(s)			pfree((s)->data)
    @@ src/common/jsonapi.c: struct JsonIncrementalState
      	bool		is_last_chunk;
      	bool		partial_completed;
     -	StringInfoData partial_token;
    -+	StrValType	partial_token;
    ++	jsonapi_StrValType partial_token;
      };
      
      /*
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
      	}
      	else
     @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
    + 	lex->line_number = 1;
    + 	lex->input_length = len;
      	lex->input_encoding = encoding;
    ++	lex->need_escapes = need_escapes;
      	if (need_escapes)
      	{
     -		lex->strval = makeStringInfo();
    @@ src/common/jsonapi.c: makeJsonLexContextCstringLen(JsonLexContext *lex, const ch
     +		 * of use (json_lex_string()) since we might not need to parse any
     +		 * strings anyway.
     +		 */
    -+		lex->strval = jsonapi_createStringInfo();
    ++		lex->strval = jsonapi_makeStringInfo();
      		lex->flags |= JSONLEX_FREE_STRVAL;
    -+		lex->parse_strval = true;
      	}
      
      	return lex;
    @@ src/common/jsonapi.c: makeJsonLexContextIncremental(JsonLexContext *lex, int enc
     +		return lex;
     +	}
     +
    ++	lex->need_escapes = need_escapes;
      	if (need_escapes)
      	{
     -		lex->strval = makeStringInfo();
    @@ src/common/jsonapi.c: makeJsonLexContextIncremental(JsonLexContext *lex, int enc
     +		 * of use (json_lex_string()) since we might not need to parse any
     +		 * strings anyway.
     +		 */
    -+		lex->strval = jsonapi_createStringInfo();
    ++		lex->strval = jsonapi_makeStringInfo();
      		lex->flags |= JSONLEX_FREE_STRVAL;
    -+		lex->parse_strval = true;
      	}
     +
      	return lex;
    @@ src/common/jsonapi.c: json_count_array_elements(JsonLexContext *lex, int *elemen
      	 */
      	memcpy(&copylex, lex, sizeof(JsonLexContext));
     -	copylex.strval = NULL;		/* not interested in values here */
    -+	copylex.parse_strval = false;	/* not interested in values here */
    ++	copylex.need_escapes = false;	/* not interested in values here */
      	copylex.lex_level++;
      
      	count = 0;
    @@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
      						json_ofield_action oend = sem->object_field_end;
      
     -						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
    -+						if ((ostart != NULL || oend != NULL) && lex->parse_strval)
    ++						if ((ostart != NULL || oend != NULL) && lex->need_escapes)
      						{
     -							fname = pstrdup(lex->strval->data);
     +							fname = STRDUP(lex->strval->data);
    @@ src/common/jsonapi.c: pg_parse_json_incremental(JsonLexContext *lex,
      							{
     -								if (lex->strval != NULL)
     -									pstack->scalar_val = pstrdup(lex->strval->data);
    -+								if (lex->parse_strval)
    ++								if (lex->need_escapes)
     +								{
     +									pstack->scalar_val = STRDUP(lex->strval->data);
     +									if (pstack->scalar_val == NULL)
    @@ src/common/jsonapi.c: parse_scalar(JsonLexContext *lex, const JsonSemAction *sem
      	{
     -		if (lex->strval != NULL)
     -			val = pstrdup(lex->strval->data);
    -+		if (lex->parse_strval)
    ++		if (lex->need_escapes)
     +		{
     +			val = STRDUP(lex->strval->data);
     +			if (val == NULL)
    @@ src/common/jsonapi.c: parse_object_field(JsonLexContext *lex, const JsonSemActio
      		return report_parse_error(JSON_PARSE_STRING, lex);
     -	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
     -		fname = pstrdup(lex->strval->data);
    -+	if ((ostart != NULL || oend != NULL) && lex->parse_strval)
    ++	if ((ostart != NULL || oend != NULL) && lex->need_escapes)
     +	{
     +		fname = STRDUP(lex->strval->data);
     +		if (fname == NULL)
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      		 * recursive call
      		 */
     -		StringInfo	ptok = &(lex->inc_state->partial_token);
    -+		StrValType *ptok = &(lex->inc_state->partial_token);
    ++		jsonapi_StrValType *ptok = &(lex->inc_state->partial_token);
      		size_t		added = 0;
      		bool		tok_done = false;
      		JsonLexContext dummy_lex;
    @@ src/common/jsonapi.c: json_lex(JsonLexContext *lex)
      		dummy_lex.input_length = ptok->len;
      		dummy_lex.input_encoding = lex->input_encoding;
      		dummy_lex.incremental = false;
    -+		dummy_lex.parse_strval = lex->parse_strval;
    ++		dummy_lex.need_escapes = lex->need_escapes;
      		dummy_lex.strval = lex->strval;
      
      		partial_result = json_lex(&dummy_lex);
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      
     -	if (lex->strval != NULL)
     -		resetStringInfo(lex->strval);
    -+	if (lex->parse_strval)
    ++	if (lex->need_escapes)
     +	{
     +#ifdef JSONAPI_USE_PQEXPBUFFER
     +		/* make sure initialization succeeded */
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
      				}
     -				if (lex->strval != NULL)
    -+				if (lex->parse_strval)
    ++				if (lex->need_escapes)
      				{
      					/*
      					 * Combine surrogate pairs.
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      				}
      			}
     -			else if (lex->strval != NULL)
    -+			else if (lex->parse_strval)
    ++			else if (lex->need_escapes)
      			{
      				if (hi_surrogate != -1)
      					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      
     -			if (lex->strval != NULL)
     -				appendBinaryStringInfo(lex->strval, s, p - s);
    -+			if (lex->parse_strval)
    ++			if (lex->need_escapes)
     +				jsonapi_appendBinaryStringInfo(lex->strval, s, p - s);
      
      			/*
    @@ src/common/jsonapi.c: json_lex_string(JsonLexContext *lex)
      	}
      
     +#ifdef JSONAPI_USE_PQEXPBUFFER
    -+	if (lex->parse_strval && PQExpBufferBroken(lex->strval))
    ++	if (lex->need_escapes && PQExpBufferBroken(lex->strval))
     +		return JSON_OUT_OF_MEMORY;
     +#endif
     +
    @@ src/common/jsonapi.c: report_parse_error(JsonParseContext ctx, JsonLexContext *l
     +		jsonapi_resetStringInfo(lex->errormsg);
      	else
     -		lex->errormsg = makeStringInfo();
    -+		lex->errormsg = jsonapi_createStringInfo();
    ++		lex->errormsg = jsonapi_makeStringInfo();
      
      	/*
      	 * A helper for error messages that should print the current token. The
    @@ src/include/common/jsonapi.h: typedef enum JsonParseErrorType
     + * then they can include the appropriate header themselves.
     + */
     +#ifdef JSONAPI_USE_PQEXPBUFFER
    -+#define StrValType PQExpBufferData
    ++#define jsonapi_StrValType PQExpBufferData
     +#else
    -+#define StrValType StringInfoData
    ++#define jsonapi_StrValType StringInfoData
     +#endif
    -+
    -+typedef struct StrValType StrValType;
     +
      /*
       * All the fields in this structure should be treated as read-only.
    @@ src/include/common/jsonapi.h: typedef struct JsonLexContext
      	JsonIncrementalState *inc_state;
     -	StringInfo	strval;
     -	StringInfo	errormsg;
    -+	bool		parse_strval;
    -+	StrValType *strval;			/* only used if parse_strval == true */
    -+	StrValType *errormsg;
    ++	bool		need_escapes;
    ++	struct jsonapi_StrValType *strval;	/* only used if need_escapes == true */
    ++	struct jsonapi_StrValType *errormsg;
      } JsonLexContext;
      
      typedef JsonParseErrorType (*json_struct_action) (void *state);
2:  0335987632 = 2:  6be4888464 libpq: add OAUTHBEARER SASL mechanism
3:  a0806d7c65 = 3:  1eb3d4798c backend: add OAUTHBEARER SASL mechanism
4:  5d5694934a = 4:  de9b6ab514 Review comments
5:  b459ce7e7b = 5:  3f19723018 DO NOT MERGE: Add pytest suite for OAuth
v28-0001-common-jsonapi-support-libpq-as-a-client.patch (application/octet-stream)
From 04b2b110016377e8c99928f3e94121d18a34ef3d Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Mon, 3 May 2021 15:38:26 -0700
Subject: [PATCH v28 1/5] common/jsonapi: support libpq as a client

Based on a patch by Michael Paquier.

For libpq, use PQExpBuffer instead of StringInfo. This requires us to
track allocation failures so that we can return JSON_OUT_OF_MEMORY as
needed rather than exit()ing.

Co-authored-by: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 src/common/Makefile                           |  23 +-
 src/common/jsonapi.c                          | 473 +++++++++++++-----
 src/common/meson.build                        |  35 +-
 src/include/common/jsonapi.h                  |  18 +-
 src/test/modules/test_json_parser/Makefile    |   7 +-
 src/test/modules/test_json_parser/meson.build |  12 +
 .../t/001_test_json_parser_incremental.pl     |  25 +-
 .../modules/test_json_parser/t/002_inline.pl  | 177 ++++---
 .../test_json_parser/t/003_test_semantic.pl   |  31 +-
 9 files changed, 558 insertions(+), 243 deletions(-)

diff --git a/src/common/Makefile b/src/common/Makefile
index 89ef61c52a..b3f12dc97f 100644
--- a/src/common/Makefile
+++ b/src/common/Makefile
@@ -104,14 +104,20 @@ endif
 # a matter of policy, because it is not appropriate for general purpose
 # libraries such as libpq to report errors directly.  fe_memutils.c is
 # excluded because libpq must not exit() on allocation failure.
+#
+# The excluded files for _shlib builds are pulled into their own static
+# library, for the benefit of test programs that need not follow the
+# shlib rules.
 OBJS_FRONTEND_SHLIB = \
 	$(OBJS_COMMON) \
 	restricted_token.o \
 	sprompt.o
-OBJS_FRONTEND = \
-	$(OBJS_FRONTEND_SHLIB) \
+OBJS_EXCLUDED_SHLIB = \
 	fe_memutils.o \
 	logging.o
+OBJS_FRONTEND = \
+	$(OBJS_FRONTEND_SHLIB) \
+	$(OBJS_EXCLUDED_SHLIB)
 
 # foo.o, foo_shlib.o, and foo_srv.o are all built from foo.c
 OBJS_SHLIB = $(OBJS_FRONTEND_SHLIB:%.o=%_shlib.o)
@@ -122,7 +128,7 @@ TOOLSDIR = $(top_srcdir)/src/tools
 GEN_KEYWORDLIST = $(PERL) -I $(TOOLSDIR) $(TOOLSDIR)/gen_keywordlist.pl
 GEN_KEYWORDLIST_DEPS = $(TOOLSDIR)/gen_keywordlist.pl $(TOOLSDIR)/PerfectHash.pm
 
-all: libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a
+all: libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a libpgcommon_excluded_shlib.a
 
 # libpgcommon is needed by some contrib
 install: all installdirs
@@ -155,6 +161,11 @@ libpgcommon_shlib.a: $(OBJS_SHLIB)
 	rm -f $@
 	$(AR) $(AROPT) $@ $^
 
+# The JSON API normally exits on out-of-memory; disable that behavior for shared
+# library builds. This requires libpq's pqexpbuffer.h.
+jsonapi_shlib.o: override CPPFLAGS += -DJSONAPI_USE_PQEXPBUFFER
+jsonapi_shlib.o: override CPPFLAGS += -I$(libpq_srcdir)
+
 # Because this uses its own compilation rule, it doesn't use the
 # dependency tracking logic from Makefile.global.  To make sure that
 # dependency tracking works anyway for the *_shlib.o files, depend on
@@ -164,6 +175,10 @@ libpgcommon_shlib.a: $(OBJS_SHLIB)
 %_shlib.o: %.c %.o
 	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) -c $< -o $@
 
+libpgcommon_excluded_shlib.a: $(OBJS_EXCLUDED_SHLIB)
+	rm -f $@
+	$(AR) $(AROPT) $@ $^
+
 #
 # Server versions of object files
 #
@@ -197,6 +212,6 @@ RYU_OBJS = $(RYU_FILES) $(RYU_FILES:%.o=%_shlib.o) $(RYU_FILES:%.o=%_srv.o)
 $(RYU_OBJS): CFLAGS += $(PERMIT_DECLARATION_AFTER_STATEMENT)
 
 clean distclean:
-	rm -f libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a
+	rm -f libpgcommon.a libpgcommon_shlib.a libpgcommon_srv.a libpgcommon_excluded_shlib.a
 	rm -f $(OBJS_FRONTEND) $(OBJS_SHLIB) $(OBJS_SRV)
 	rm -f kwlist_d.h
diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index 2ffcaaa6fd..6892a4be4e 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -21,10 +21,70 @@
 #include "mb/pg_wchar.h"
 #include "port/pg_lfind.h"
 
-#ifndef FRONTEND
+#ifdef JSONAPI_USE_PQEXPBUFFER
+#include "pqexpbuffer.h"
+#else
+#include "lib/stringinfo.h"
 #include "miscadmin.h"
 #endif
 
+/*
+ * By default, we will use palloc/pfree along with StringInfo.  In libpq,
+ * use malloc and PQExpBuffer, and return JSON_OUT_OF_MEMORY on out-of-memory.
+ */
+#ifdef JSONAPI_USE_PQEXPBUFFER
+
+#define STRDUP(s) strdup(s)
+#define ALLOC(size) malloc(size)
+#define ALLOC0(size) calloc(1, size)
+#define REALLOC realloc
+#define FREE(s) free(s)
+
+#define jsonapi_appendStringInfo			appendPQExpBuffer
+#define jsonapi_appendBinaryStringInfo		appendBinaryPQExpBuffer
+#define jsonapi_appendStringInfoChar		appendPQExpBufferChar
+/* XXX should we add a macro version to PQExpBuffer? */
+#define jsonapi_appendStringInfoCharMacro	appendPQExpBufferChar
+#define jsonapi_makeStringInfo				createPQExpBuffer
+#define jsonapi_initStringInfo				initPQExpBuffer
+#define jsonapi_resetStringInfo				resetPQExpBuffer
+#define jsonapi_termStringInfo				termPQExpBuffer
+#define jsonapi_destroyStringInfo			destroyPQExpBuffer
+
+#else							/* !JSONAPI_USE_PQEXPBUFFER */
+
+#define STRDUP(s) pstrdup(s)
+#define ALLOC(size) palloc(size)
+#define ALLOC0(size) palloc0(size)
+#define REALLOC repalloc
+
+#ifdef FRONTEND
+#define FREE pfree
+#else
+/*
+ * Backend pfree() doesn't handle NULL pointers like the frontend's does; smooth
+ * that over to reduce mental gymnastics. Avoid multiple evaluation of the macro
+ * argument to avoid future hair-pulling.
+ */
+#define FREE(s) do {	\
+	void *__v = (s);	\
+	if (__v)			\
+		pfree(__v);		\
+} while (0)
+#endif
+
+#define jsonapi_appendStringInfo			appendStringInfo
+#define jsonapi_appendBinaryStringInfo		appendBinaryStringInfo
+#define jsonapi_appendStringInfoChar		appendStringInfoChar
+#define jsonapi_appendStringInfoCharMacro	appendStringInfoCharMacro
+#define jsonapi_makeStringInfo				makeStringInfo
+#define jsonapi_initStringInfo				initStringInfo
+#define jsonapi_resetStringInfo				resetStringInfo
+#define jsonapi_termStringInfo(s)			pfree((s)->data)
+#define jsonapi_destroyStringInfo			destroyStringInfo
+
+#endif							/* JSONAPI_USE_PQEXPBUFFER */
+
 /*
  * The context of the parser is maintained by the recursive descent
  * mechanism, but is passed explicitly to the error reporting routine
@@ -103,7 +163,7 @@ struct JsonIncrementalState
 {
 	bool		is_last_chunk;
 	bool		partial_completed;
-	StringInfoData partial_token;
+	jsonapi_StrValType partial_token;
 };
 
 /*
@@ -219,6 +279,7 @@ static JsonParseErrorType parse_object(JsonLexContext *lex, const JsonSemAction
 static JsonParseErrorType parse_array_element(JsonLexContext *lex, const JsonSemAction *sem);
 static JsonParseErrorType parse_array(JsonLexContext *lex, const JsonSemAction *sem);
 static JsonParseErrorType report_parse_error(JsonParseContext ctx, JsonLexContext *lex);
+static bool allocate_incremental_state(JsonLexContext *lex);
 
 /* the null action object used for pure validation */
 const JsonSemAction nullSemAction =
@@ -227,6 +288,10 @@ const JsonSemAction nullSemAction =
 	NULL, NULL, NULL, NULL, NULL
 };
 
+/* sentinels used for out-of-memory conditions */
+static JsonLexContext failed_oom;
+static JsonIncrementalState failed_inc_oom;
+
 /* Parser support routines */
 
 /*
@@ -273,15 +338,11 @@ IsValidJsonNumber(const char *str, size_t len)
 {
 	bool		numeric_error;
 	size_t		total_len;
-	JsonLexContext dummy_lex;
+	JsonLexContext dummy_lex = {0};
 
 	if (len <= 0)
 		return false;
 
-	dummy_lex.incremental = false;
-	dummy_lex.inc_state = NULL;
-	dummy_lex.pstack = NULL;
-
 	/*
 	 * json_lex_number expects a leading  '-' to have been eaten already.
 	 *
@@ -321,6 +382,9 @@ IsValidJsonNumber(const char *str, size_t len)
  * responsible for freeing the returned struct, either by calling
  * freeJsonLexContext() or (in backend environment) via memory context
  * cleanup.
+ *
+ * In shlib code, any out-of-memory failures will be deferred to time
+ * of use; this function is guaranteed to return a valid JsonLexContext.
  */
 JsonLexContext *
 makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
@@ -328,7 +392,9 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return &failed_oom;
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -339,15 +405,73 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
 	lex->line_number = 1;
 	lex->input_length = len;
 	lex->input_encoding = encoding;
+	lex->need_escapes = need_escapes;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in shlib code. We defer error handling to time
+		 * of use (json_lex_string()) since we might not need to parse any
+		 * strings anyway.
+		 */
+		lex->strval = jsonapi_makeStringInfo();
 		lex->flags |= JSONLEX_FREE_STRVAL;
 	}
 
 	return lex;
 }
 
+/*
+ * Allocates the internal bookkeeping structures for incremental parsing. In
+ * shlib code, failure is reported in-band, by returning false.
+ */
+#define JS_STACK_CHUNK_SIZE 64
+#define JS_MAX_PROD_LEN 10		/* more than we need */
+#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
+								 * number */
+static bool
+allocate_incremental_state(JsonLexContext *lex)
+{
+	void	   *pstack,
+			   *prediction,
+			   *fnames,
+			   *fnull;
+
+	lex->inc_state = ALLOC0(sizeof(JsonIncrementalState));
+	pstack = ALLOC(sizeof(JsonParserStack));
+	prediction = ALLOC(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
+	fnames = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(char *));
+	fnull = ALLOC(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+	if (!lex->inc_state
+		|| !pstack
+		|| !prediction
+		|| !fnames
+		|| !fnull)
+	{
+		FREE(lex->inc_state);
+		FREE(pstack);
+		FREE(prediction);
+		FREE(fnames);
+		FREE(fnull);
+
+		lex->inc_state = &failed_inc_oom;
+		return false;
+	}
+#endif
+
+	jsonapi_initStringInfo(&(lex->inc_state->partial_token));
+	lex->pstack = pstack;
+	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
+	lex->pstack->prediction = prediction;
+	lex->pstack->pred_index = 0;
+	lex->pstack->fnames = fnames;
+	lex->pstack->fnull = fnull;
+
+	lex->incremental = true;
+	return true;
+}
+
 
 /*
  * makeJsonLexContextIncremental
@@ -357,19 +481,20 @@ makeJsonLexContextCstringLen(JsonLexContext *lex, const char *json,
  * we don't need the input, that will be handed in bit by bit to the
  * parse routine. We also need an accumulator for partial tokens in case
  * the boundary between chunks happens to fall in the middle of a token.
+ *
+ * In shlib code, any out-of-memory failures will be deferred to time of use;
+ * this function is guaranteed to return a valid JsonLexContext.
  */
-#define JS_STACK_CHUNK_SIZE 64
-#define JS_MAX_PROD_LEN 10		/* more than we need */
-#define JSON_TD_MAX_STACK 6400	/* hard coded for now - this is a REALLY high
-								 * number */
-
 JsonLexContext *
 makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 							  bool need_escapes)
 {
 	if (lex == NULL)
 	{
-		lex = palloc0(sizeof(JsonLexContext));
+		lex = ALLOC0(sizeof(JsonLexContext));
+		if (!lex)
+			return &failed_oom;
+
 		lex->flags |= JSONLEX_FREE_STRUCT;
 	}
 	else
@@ -377,42 +502,65 @@ makeJsonLexContextIncremental(JsonLexContext *lex, int encoding,
 
 	lex->line_number = 1;
 	lex->input_encoding = encoding;
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+
+	if (!allocate_incremental_state(lex))
+	{
+		if (lex->flags & JSONLEX_FREE_STRUCT)
+		{
+			FREE(lex);
+			return &failed_oom;
+		}
+
+		/* lex->inc_state records the OOM failure, so we can return lex here. */
+		return lex;
+	}
+
+	lex->need_escapes = need_escapes;
 	if (need_escapes)
 	{
-		lex->strval = makeStringInfo();
+		/*
+		 * This call can fail in shlib code. We defer error handling to time
+		 * of use (json_lex_string()) since we might not need to parse any
+		 * strings anyway.
+		 */
+		lex->strval = jsonapi_makeStringInfo();
 		lex->flags |= JSONLEX_FREE_STRVAL;
 	}
+
 	return lex;
 }
 
-static inline void
+static inline bool
 inc_lex_level(JsonLexContext *lex)
 {
-	lex->lex_level += 1;
-
-	if (lex->incremental && lex->lex_level >= lex->pstack->stack_size)
+	if (lex->incremental && (lex->lex_level + 1) >= lex->pstack->stack_size)
 	{
-		lex->pstack->stack_size += JS_STACK_CHUNK_SIZE;
-		lex->pstack->prediction =
-			repalloc(lex->pstack->prediction,
-					 lex->pstack->stack_size * JS_MAX_PROD_LEN);
-		if (lex->pstack->fnames)
-			lex->pstack->fnames =
-				repalloc(lex->pstack->fnames,
-						 lex->pstack->stack_size * sizeof(char *));
-		if (lex->pstack->fnull)
-			lex->pstack->fnull =
-				repalloc(lex->pstack->fnull, lex->pstack->stack_size * sizeof(bool));
+		size_t		new_stack_size;
+		char	   *new_prediction;
+		char	  **new_fnames;
+		bool	   *new_fnull;
+
+		new_stack_size = lex->pstack->stack_size + JS_STACK_CHUNK_SIZE;
+
+		new_prediction = REALLOC(lex->pstack->prediction,
+								 new_stack_size * JS_MAX_PROD_LEN);
+		new_fnames = REALLOC(lex->pstack->fnames,
+							 new_stack_size * sizeof(char *));
+		new_fnull = REALLOC(lex->pstack->fnull, new_stack_size * sizeof(bool));
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+		if (!new_prediction || !new_fnames || !new_fnull)
+			return false;
+#endif
+
+		lex->pstack->stack_size = new_stack_size;
+		lex->pstack->prediction = new_prediction;
+		lex->pstack->fnames = new_fnames;
+		lex->pstack->fnull = new_fnull;
 	}
+
+	lex->lex_level += 1;
+	return true;
 }
 
 static inline void
@@ -482,24 +630,31 @@ get_fnull(JsonLexContext *lex)
 void
 freeJsonLexContext(JsonLexContext *lex)
 {
+	static const JsonLexContext empty = {0};
+
+	if (!lex || lex == &failed_oom)
+		return;
+
 	if (lex->flags & JSONLEX_FREE_STRVAL)
-		destroyStringInfo(lex->strval);
+		jsonapi_destroyStringInfo(lex->strval);
 
 	if (lex->errormsg)
-		destroyStringInfo(lex->errormsg);
+		jsonapi_destroyStringInfo(lex->errormsg);
 
 	if (lex->incremental)
 	{
-		pfree(lex->inc_state->partial_token.data);
-		pfree(lex->inc_state);
-		pfree(lex->pstack->prediction);
-		pfree(lex->pstack->fnames);
-		pfree(lex->pstack->fnull);
-		pfree(lex->pstack);
+		jsonapi_termStringInfo(&lex->inc_state->partial_token);
+		FREE(lex->inc_state);
+		FREE(lex->pstack->prediction);
+		FREE(lex->pstack->fnames);
+		FREE(lex->pstack->fnull);
+		FREE(lex->pstack);
 	}
 
 	if (lex->flags & JSONLEX_FREE_STRUCT)
-		pfree(lex);
+		FREE(lex);
+	else
+		*lex = empty;
 }
 
 /*
@@ -522,22 +677,13 @@ JsonParseErrorType
 pg_parse_json(JsonLexContext *lex, const JsonSemAction *sem)
 {
 #ifdef FORCE_JSON_PSTACK
-
-	lex->incremental = true;
-	lex->inc_state = palloc0(sizeof(JsonIncrementalState));
-
 	/*
 	 * We don't need partial token processing, there is only one chunk. But we
 	 * still need to init the partial token string so that freeJsonLexContext
-	 * works.
+	 * works, so perform the full incremental initialization.
 	 */
-	initStringInfo(&(lex->inc_state->partial_token));
-	lex->pstack = palloc(sizeof(JsonParserStack));
-	lex->pstack->stack_size = JS_STACK_CHUNK_SIZE;
-	lex->pstack->prediction = palloc(JS_STACK_CHUNK_SIZE * JS_MAX_PROD_LEN);
-	lex->pstack->pred_index = 0;
-	lex->pstack->fnames = palloc(JS_STACK_CHUNK_SIZE * sizeof(char *));
-	lex->pstack->fnull = palloc(JS_STACK_CHUNK_SIZE * sizeof(bool));
+	if (!allocate_incremental_state(lex))
+		return JSON_OUT_OF_MEMORY;
 
 	return pg_parse_json_incremental(lex, sem, lex->input, lex->input_length, true);
 
@@ -546,6 +692,8 @@ pg_parse_json(JsonLexContext *lex, const JsonSemAction *sem)
 	JsonTokenType tok;
 	JsonParseErrorType result;
 
+	if (lex == &failed_oom)
+		return JSON_OUT_OF_MEMORY;
 	if (lex->incremental)
 		return JSON_INVALID_LEXER_TYPE;
 
@@ -591,13 +739,16 @@ json_count_array_elements(JsonLexContext *lex, int *elements)
 	int			count;
 	JsonParseErrorType result;
 
+	if (lex == &failed_oom)
+		return JSON_OUT_OF_MEMORY;
+
 	/*
 	 * It's safe to do this with a shallow copy because the lexical routines
 	 * don't scribble on the input. They do scribble on the other pointers
 	 * etc, so doing this with a copy makes that safe.
 	 */
 	memcpy(&copylex, lex, sizeof(JsonLexContext));
-	copylex.strval = NULL;		/* not interested in values here */
+	copylex.need_escapes = false;	/* not interested in values here */
 	copylex.lex_level++;
 
 	count = 0;
@@ -658,7 +809,8 @@ pg_parse_json_incremental(JsonLexContext *lex,
 	JsonParseContext ctx = JSON_PARSE_VALUE;
 	JsonParserStack *pstack = lex->pstack;
 
-
+	if (lex == &failed_oom || lex->inc_state == &failed_inc_oom)
+		return JSON_OUT_OF_MEMORY;
 	if (!lex->incremental)
 		return JSON_INVALID_LEXER_TYPE;
 
@@ -737,7 +889,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_OEND:
@@ -766,7 +920,9 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							if (result != JSON_SUCCESS)
 								return result;
 						}
-						inc_lex_level(lex);
+
+						if (!inc_lex_level(lex))
+							return JSON_OUT_OF_MEMORY;
 					}
 					break;
 				case JSON_SEM_AEND:
@@ -793,9 +949,11 @@ pg_parse_json_incremental(JsonLexContext *lex,
 						json_ofield_action ostart = sem->object_field_start;
 						json_ofield_action oend = sem->object_field_end;
 
-						if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
+						if ((ostart != NULL || oend != NULL) && lex->need_escapes)
 						{
-							fname = pstrdup(lex->strval->data);
+							fname = STRDUP(lex->strval->data);
+							if (fname == NULL)
+								return JSON_OUT_OF_MEMORY;
 						}
 						set_fname(lex, fname);
 					}
@@ -883,14 +1041,21 @@ pg_parse_json_incremental(JsonLexContext *lex,
 							 */
 							if (tok == JSON_TOKEN_STRING)
 							{
-								if (lex->strval != NULL)
-									pstack->scalar_val = pstrdup(lex->strval->data);
+								if (lex->need_escapes)
+								{
+									pstack->scalar_val = STRDUP(lex->strval->data);
+									if (pstack->scalar_val == NULL)
+										return JSON_OUT_OF_MEMORY;
+								}
 							}
 							else
 							{
 								ptrdiff_t	tlen = (lex->token_terminator - lex->token_start);
 
-								pstack->scalar_val = palloc(tlen + 1);
+								pstack->scalar_val = ALLOC(tlen + 1);
+								if (pstack->scalar_val == NULL)
+									return JSON_OUT_OF_MEMORY;
+
 								memcpy(pstack->scalar_val, lex->token_start, tlen);
 								pstack->scalar_val[tlen] = '\0';
 							}
@@ -1025,14 +1190,21 @@ parse_scalar(JsonLexContext *lex, const JsonSemAction *sem)
 	/* extract the de-escaped string value, or the raw lexeme */
 	if (lex_peek(lex) == JSON_TOKEN_STRING)
 	{
-		if (lex->strval != NULL)
-			val = pstrdup(lex->strval->data);
+		if (lex->need_escapes)
+		{
+			val = STRDUP(lex->strval->data);
+			if (val == NULL)
+				return JSON_OUT_OF_MEMORY;
+		}
 	}
 	else
 	{
 		int			len = (lex->token_terminator - lex->token_start);
 
-		val = palloc(len + 1);
+		val = ALLOC(len + 1);
+		if (val == NULL)
+			return JSON_OUT_OF_MEMORY;
+
 		memcpy(val, lex->token_start, len);
 		val[len] = '\0';
 	}
@@ -1066,8 +1238,12 @@ parse_object_field(JsonLexContext *lex, const JsonSemAction *sem)
 
 	if (lex_peek(lex) != JSON_TOKEN_STRING)
 		return report_parse_error(JSON_PARSE_STRING, lex);
-	if ((ostart != NULL || oend != NULL) && lex->strval != NULL)
-		fname = pstrdup(lex->strval->data);
+	if ((ostart != NULL || oend != NULL) && lex->need_escapes)
+	{
+		fname = STRDUP(lex->strval->data);
+		if (fname == NULL)
+			return JSON_OUT_OF_MEMORY;
+	}
 	result = json_lex(lex);
 	if (result != JSON_SUCCESS)
 		return result;
@@ -1123,6 +1299,11 @@ parse_object(JsonLexContext *lex, const JsonSemAction *sem)
 	JsonParseErrorType result;
 
 #ifndef FRONTEND
+
+	/*
+	 * TODO: clients need some way to put a bound on stack growth. Parse level
+	 * limits maybe?
+	 */
 	check_stack_depth();
 #endif
 
@@ -1312,15 +1493,27 @@ json_lex(JsonLexContext *lex)
 	const char *const end = lex->input + lex->input_length;
 	JsonParseErrorType result;
 
-	if (lex->incremental && lex->inc_state->partial_completed)
+	if (lex == &failed_oom || lex->inc_state == &failed_inc_oom)
+		return JSON_OUT_OF_MEMORY;
+
+	if (lex->incremental)
 	{
-		/*
-		 * We just lexed a completed partial token on the last call, so reset
-		 * everything
-		 */
-		resetStringInfo(&(lex->inc_state->partial_token));
-		lex->token_terminator = lex->input;
-		lex->inc_state->partial_completed = false;
+		if (lex->inc_state->partial_completed)
+		{
+			/*
+			 * We just lexed a completed partial token on the last call, so
+			 * reset everything
+			 */
+			jsonapi_resetStringInfo(&(lex->inc_state->partial_token));
+			lex->token_terminator = lex->input;
+			lex->inc_state->partial_completed = false;
+		}
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+		/* Make sure our partial token buffer is valid before using it below. */
+		if (PQExpBufferDataBroken(lex->inc_state->partial_token))
+			return JSON_OUT_OF_MEMORY;
+#endif
 	}
 
 	s = lex->token_terminator;
@@ -1331,7 +1524,7 @@ json_lex(JsonLexContext *lex)
 		 * We have a partial token. Extend it and if completed lex it by a
 		 * recursive call
 		 */
-		StringInfo	ptok = &(lex->inc_state->partial_token);
+		jsonapi_StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
 		JsonLexContext dummy_lex;
@@ -1358,7 +1551,7 @@ json_lex(JsonLexContext *lex)
 			{
 				char		c = lex->input[i];
 
-				appendStringInfoCharMacro(ptok, c);
+				jsonapi_appendStringInfoCharMacro(ptok, c);
 				added++;
 				if (c == '"' && escapes % 2 == 0)
 				{
@@ -1403,7 +1596,7 @@ json_lex(JsonLexContext *lex)
 						case '8':
 						case '9':
 							{
-								appendStringInfoCharMacro(ptok, cc);
+								jsonapi_appendStringInfoCharMacro(ptok, cc);
 								added++;
 							}
 							break;
@@ -1424,7 +1617,7 @@ json_lex(JsonLexContext *lex)
 
 				if (JSON_ALPHANUMERIC_CHAR(cc))
 				{
-					appendStringInfoCharMacro(ptok, cc);
+					jsonapi_appendStringInfoCharMacro(ptok, cc);
 					added++;
 				}
 				else
@@ -1467,6 +1660,7 @@ json_lex(JsonLexContext *lex)
 		dummy_lex.input_length = ptok->len;
 		dummy_lex.input_encoding = lex->input_encoding;
 		dummy_lex.incremental = false;
+		dummy_lex.need_escapes = lex->need_escapes;
 		dummy_lex.strval = lex->strval;
 
 		partial_result = json_lex(&dummy_lex);
@@ -1622,8 +1816,7 @@ json_lex(JsonLexContext *lex)
 					if (lex->incremental && !lex->inc_state->is_last_chunk &&
 						p == lex->input + lex->input_length)
 					{
-						appendBinaryStringInfo(
-											   &(lex->inc_state->partial_token), s, end - s);
+						jsonapi_appendBinaryStringInfo(&(lex->inc_state->partial_token), s, end - s);
 						return JSON_INCOMPLETE;
 					}
 
@@ -1680,8 +1873,9 @@ json_lex_string(JsonLexContext *lex)
 	do { \
 		if (lex->incremental && !lex->inc_state->is_last_chunk) \
 		{ \
-			appendBinaryStringInfo(&lex->inc_state->partial_token, \
-								   lex->token_start, end - lex->token_start); \
+			jsonapi_appendBinaryStringInfo(&lex->inc_state->partial_token, \
+										   lex->token_start, \
+										   end - lex->token_start); \
 			return JSON_INCOMPLETE; \
 		} \
 		lex->token_terminator = s; \
@@ -1694,8 +1888,15 @@ json_lex_string(JsonLexContext *lex)
 		return code; \
 	} while (0)
 
-	if (lex->strval != NULL)
-		resetStringInfo(lex->strval);
+	if (lex->need_escapes)
+	{
+#ifdef JSONAPI_USE_PQEXPBUFFER
+		/* make sure initialization succeeded */
+		if (lex->strval == NULL)
+			return JSON_OUT_OF_MEMORY;
+#endif
+		jsonapi_resetStringInfo(lex->strval);
+	}
 
 	Assert(lex->input_length > 0);
 	s = lex->token_start;
@@ -1732,7 +1933,7 @@ json_lex_string(JsonLexContext *lex)
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_ESCAPE_FORMAT);
 				}
-				if (lex->strval != NULL)
+				if (lex->need_escapes)
 				{
 					/*
 					 * Combine surrogate pairs.
@@ -1789,19 +1990,19 @@ json_lex_string(JsonLexContext *lex)
 
 						unicode_to_utf8(ch, (unsigned char *) utf8str);
 						utf8len = pg_utf_mblen((unsigned char *) utf8str);
-						appendBinaryStringInfo(lex->strval, utf8str, utf8len);
+						jsonapi_appendBinaryStringInfo(lex->strval, utf8str, utf8len);
 					}
 					else if (ch <= 0x007f)
 					{
 						/* The ASCII range is the same in all encodings */
-						appendStringInfoChar(lex->strval, (char) ch);
+						jsonapi_appendStringInfoChar(lex->strval, (char) ch);
 					}
 					else
 						FAIL_AT_CHAR_END(JSON_UNICODE_HIGH_ESCAPE);
 #endif							/* FRONTEND */
 				}
 			}
-			else if (lex->strval != NULL)
+			else if (lex->need_escapes)
 			{
 				if (hi_surrogate != -1)
 					FAIL_AT_CHAR_END(JSON_UNICODE_LOW_SURROGATE);
@@ -1811,22 +2012,22 @@ json_lex_string(JsonLexContext *lex)
 					case '"':
 					case '\\':
 					case '/':
-						appendStringInfoChar(lex->strval, *s);
+						jsonapi_appendStringInfoChar(lex->strval, *s);
 						break;
 					case 'b':
-						appendStringInfoChar(lex->strval, '\b');
+						jsonapi_appendStringInfoChar(lex->strval, '\b');
 						break;
 					case 'f':
-						appendStringInfoChar(lex->strval, '\f');
+						jsonapi_appendStringInfoChar(lex->strval, '\f');
 						break;
 					case 'n':
-						appendStringInfoChar(lex->strval, '\n');
+						jsonapi_appendStringInfoChar(lex->strval, '\n');
 						break;
 					case 'r':
-						appendStringInfoChar(lex->strval, '\r');
+						jsonapi_appendStringInfoChar(lex->strval, '\r');
 						break;
 					case 't':
-						appendStringInfoChar(lex->strval, '\t');
+						jsonapi_appendStringInfoChar(lex->strval, '\t');
 						break;
 					default:
 
@@ -1861,7 +2062,7 @@ json_lex_string(JsonLexContext *lex)
 
 			/*
 			 * Skip to the first byte that requires special handling, so we
-			 * can batch calls to appendBinaryStringInfo.
+			 * can batch calls to jsonapi_appendBinaryStringInfo.
 			 */
 			while (p < end - sizeof(Vector8) &&
 				   !pg_lfind8('\\', (uint8 *) p, sizeof(Vector8)) &&
@@ -1885,8 +2086,8 @@ json_lex_string(JsonLexContext *lex)
 				}
 			}
 
-			if (lex->strval != NULL)
-				appendBinaryStringInfo(lex->strval, s, p - s);
+			if (lex->need_escapes)
+				jsonapi_appendBinaryStringInfo(lex->strval, s, p - s);
 
 			/*
 			 * s will be incremented at the top of the loop, so set it to just
@@ -1902,6 +2103,11 @@ json_lex_string(JsonLexContext *lex)
 		return JSON_UNICODE_LOW_SURROGATE;
 	}
 
+#ifdef JSONAPI_USE_PQEXPBUFFER
+	if (lex->need_escapes && PQExpBufferBroken(lex->strval))
+		return JSON_OUT_OF_MEMORY;
+#endif
+
 	/* Hooray, we found the end of the string! */
 	lex->prev_token_terminator = lex->token_terminator;
 	lex->token_terminator = s + 1;
@@ -2019,8 +2225,8 @@ json_lex_number(JsonLexContext *lex, const char *s,
 	if (lex->incremental && !lex->inc_state->is_last_chunk &&
 		len >= lex->input_length)
 	{
-		appendBinaryStringInfo(&lex->inc_state->partial_token,
-							   lex->token_start, s - lex->token_start);
+		jsonapi_appendBinaryStringInfo(&lex->inc_state->partial_token,
+									   lex->token_start, s - lex->token_start);
 		if (num_err != NULL)
 			*num_err = error;
 
@@ -2096,19 +2302,25 @@ report_parse_error(JsonParseContext ctx, JsonLexContext *lex)
 char *
 json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 {
+	if (error == JSON_OUT_OF_MEMORY || lex == &failed_oom)
+	{
+		/* Short circuit. Allocating anything for this case is unhelpful. */
+		return _("out of memory");
+	}
+
 	if (lex->errormsg)
-		resetStringInfo(lex->errormsg);
+		jsonapi_resetStringInfo(lex->errormsg);
 	else
-		lex->errormsg = makeStringInfo();
+		lex->errormsg = jsonapi_makeStringInfo();
 
 	/*
 	 * A helper for error messages that should print the current token. The
 	 * format must contain exactly one %.*s specifier.
 	 */
 #define json_token_error(lex, format) \
-	appendStringInfo((lex)->errormsg, _(format), \
-					 (int) ((lex)->token_terminator - (lex)->token_start), \
-					 (lex)->token_start);
+	jsonapi_appendStringInfo((lex)->errormsg, _(format), \
+							 (int) ((lex)->token_terminator - (lex)->token_start), \
+							 (lex)->token_start);
 
 	switch (error)
 	{
@@ -2127,9 +2339,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 			json_token_error(lex, "Escape sequence \"\\%.*s\" is invalid.");
 			break;
 		case JSON_ESCAPING_REQUIRED:
-			appendStringInfo(lex->errormsg,
-							 _("Character with value 0x%02x must be escaped."),
-							 (unsigned char) *(lex->token_terminator));
+			jsonapi_appendStringInfo(lex->errormsg,
+									 _("Character with value 0x%02x must be escaped."),
+									 (unsigned char) *(lex->token_terminator));
 			break;
 		case JSON_EXPECTED_END:
 			json_token_error(lex, "Expected end of input, but found \"%.*s\".");
@@ -2160,6 +2372,9 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 		case JSON_INVALID_TOKEN:
 			json_token_error(lex, "Token \"%.*s\" is invalid.");
 			break;
+		case JSON_OUT_OF_MEMORY:
+			/* should have been handled above; use the error path */
+			break;
 		case JSON_UNICODE_CODE_POINT_ZERO:
 			return _("\\u0000 cannot be converted to text.");
 		case JSON_UNICODE_ESCAPE_FORMAT:
@@ -2191,15 +2406,23 @@ json_errdetail(JsonParseErrorType error, JsonLexContext *lex)
 	}
 #undef json_token_error
 
-	/*
-	 * We don't use a default: case, so that the compiler will warn about
-	 * unhandled enum values.  But this needs to be here anyway to cover the
-	 * possibility of an incorrect input.
-	 */
-	if (lex->errormsg->len == 0)
-		appendStringInfo(lex->errormsg,
-						 "unexpected json parse error type: %d",
-						 (int) error);
+	/* Note that lex->errormsg can be NULL in shlib code. */
+	if (lex->errormsg && lex->errormsg->len == 0)
+	{
+		/*
+		 * We don't use a default: case, so that the compiler will warn about
+		 * unhandled enum values.  But this needs to be here anyway to cover
+		 * the possibility of an incorrect input.
+		 */
+		jsonapi_appendStringInfo(lex->errormsg,
+								 "unexpected json parse error type: %d",
+								 (int) error);
+	}
+
+#ifdef JSONAPI_USE_PQEXPBUFFER
+	if (PQExpBufferBroken(lex->errormsg))
+		return _("out of memory while constructing error description");
+#endif
 
 	return lex->errormsg->data;
 }
diff --git a/src/common/meson.build b/src/common/meson.build
index 1a564e1dce..5dd4ad8d89 100644
--- a/src/common/meson.build
+++ b/src/common/meson.build
@@ -103,6 +103,10 @@ common_sources_cflags = {
 # a matter of policy, because it is not appropriate for general purpose
 # libraries such as libpq to report errors directly.  fe_memutils.c is
 # excluded because libpq must not exit() on allocation failure.
+#
+# The excluded files for _shlib builds are pulled into their own static
+# library, for the benefit of test programs that need not follow the
+# shlib rules.
 
 common_sources_frontend_shlib = common_sources
 common_sources_frontend_shlib += files(
@@ -110,12 +114,16 @@ common_sources_frontend_shlib += files(
   'sprompt.c',
 )
 
-common_sources_frontend_static = common_sources_frontend_shlib
-common_sources_frontend_static += files(
+common_sources_excluded_shlib = files(
   'fe_memutils.c',
   'logging.c',
 )
 
+common_sources_frontend_static = [
+  common_sources_frontend_shlib,
+  common_sources_excluded_shlib,
+]
+
 # Build pgcommon once for backend, once for use in frontend binaries, and
 # once for use in shared libraries
 #
@@ -143,6 +151,10 @@ pgcommon_variants = {
     'pic': true,
     'sources': common_sources_frontend_shlib,
     'dependencies': [frontend_common_code],
+    # The JSON API normally exits on out-of-memory; disable that behavior for
+    # shared library builds. This requires libpq's pqexpbuffer.h.
+    'c_args': ['-DJSONAPI_USE_PQEXPBUFFER'],
+    'include_directories': include_directories('../interfaces/libpq'),
   },
 }
 
@@ -158,8 +170,11 @@ foreach name, opts : pgcommon_variants
     c_args = opts.get('c_args', []) + common_cflags[cflagname]
     cflag_libs += static_library('libpgcommon@0@_@1@'.format(name, cflagname),
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
+        'include_directories': [
+          include_directories('.'),
+          opts.get('include_directories', []),
+        ],
         'sources': sources,
         'c_args': c_args,
         'build_by_default': false,
@@ -171,8 +186,11 @@ foreach name, opts : pgcommon_variants
   lib = static_library('libpgcommon@0@'.format(name),
       link_with: cflag_libs,
       c_pch: pch_c_h,
-      include_directories: include_directories('.'),
       kwargs: opts + {
+        'include_directories': [
+          include_directories('.'),
+          opts.get('include_directories', []),
+        ],
         'dependencies': opts['dependencies'] + [ssl],
       }
     )
@@ -183,4 +201,13 @@ common_srv = pgcommon['_srv']
 common_shlib = pgcommon['_shlib']
 common_static = pgcommon['']
 
+common_excluded_shlib = static_library('libpgcommon_excluded_shlib',
+  sources: common_sources_excluded_shlib,
+  dependencies: [frontend_common_code],
+  build_by_default: false,
+  kwargs: default_lib_args + {
+    'install': false,
+  },
+)
+
 subdir('unicode')
diff --git a/src/include/common/jsonapi.h b/src/include/common/jsonapi.h
index a995fdbe08..c524ff5be8 100644
--- a/src/include/common/jsonapi.h
+++ b/src/include/common/jsonapi.h
@@ -14,8 +14,6 @@
 #ifndef JSONAPI_H
 #define JSONAPI_H
 
-#include "lib/stringinfo.h"
-
 typedef enum JsonTokenType
 {
 	JSON_TOKEN_INVALID,
@@ -51,6 +49,7 @@ typedef enum JsonParseErrorType
 	JSON_EXPECTED_OBJECT_NEXT,
 	JSON_EXPECTED_STRING,
 	JSON_INVALID_TOKEN,
+	JSON_OUT_OF_MEMORY,
 	JSON_UNICODE_CODE_POINT_ZERO,
 	JSON_UNICODE_ESCAPE_FORMAT,
 	JSON_UNICODE_HIGH_ESCAPE,
@@ -64,6 +63,16 @@ typedef enum JsonParseErrorType
 typedef struct JsonParserStack JsonParserStack;
 typedef struct JsonIncrementalState JsonIncrementalState;
 
+/*
+ * Don't depend on the internal type header for strval; if callers need access
+ * then they can include the appropriate header themselves.
+ */
+#ifdef JSONAPI_USE_PQEXPBUFFER
+#define jsonapi_StrValType PQExpBufferData
+#else
+#define jsonapi_StrValType StringInfoData
+#endif
+
 /*
  * All the fields in this structure should be treated as read-only.
  *
@@ -102,8 +111,9 @@ typedef struct JsonLexContext
 	const char *line_start;		/* where that line starts within input */
 	JsonParserStack *pstack;
 	JsonIncrementalState *inc_state;
-	StringInfo	strval;
-	StringInfo	errormsg;
+	bool		need_escapes;
+	struct jsonapi_StrValType *strval;	/* only used if need_escapes == true */
+	struct jsonapi_StrValType *errormsg;
 } JsonLexContext;
 
 typedef JsonParseErrorType (*json_struct_action) (void *state);
diff --git a/src/test/modules/test_json_parser/Makefile b/src/test/modules/test_json_parser/Makefile
index 2dc7175b7c..472e38d068 100644
--- a/src/test/modules/test_json_parser/Makefile
+++ b/src/test/modules/test_json_parser/Makefile
@@ -6,7 +6,7 @@ TAP_TESTS = 1
 
 OBJS = test_json_parser_incremental.o test_json_parser_perf.o $(WIN32RES)
 
-EXTRA_CLEAN = test_json_parser_incremental$(X) test_json_parser_perf$(X)
+EXTRA_CLEAN = test_json_parser_incremental$(X) test_json_parser_incremental_shlib$(X) test_json_parser_perf$(X)
 
 ifdef USE_PGXS
 PG_CONFIG = pg_config
@@ -19,13 +19,16 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 endif
 
-all: test_json_parser_incremental$(X) test_json_parser_perf$(X)
+all: test_json_parser_incremental$(X) test_json_parser_incremental_shlib$(X) test_json_parser_perf$(X)
 
 %.o: $(top_srcdir)/$(subdir)/%.c
 
 test_json_parser_incremental$(X): test_json_parser_incremental.o $(WIN32RES)
 	$(CC) $(CFLAGS) $^ $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@
 
+test_json_parser_incremental_shlib$(X): test_json_parser_incremental.o $(WIN32RES)
+	$(CC) $(CFLAGS) $^ $(LDFLAGS) -lpgcommon_excluded_shlib $(libpq_pgport_shlib) -o $@
+
 test_json_parser_perf$(X): test_json_parser_perf.o $(WIN32RES)
 	$(CC) $(CFLAGS) $^ $(PG_LIBS_INTERNAL) $(LDFLAGS) $(LDFLAGS_EX) $(PG_LIBS) $(LIBS) -o $@
 
diff --git a/src/test/modules/test_json_parser/meson.build b/src/test/modules/test_json_parser/meson.build
index b224f3e07e..059a8b71bd 100644
--- a/src/test/modules/test_json_parser/meson.build
+++ b/src/test/modules/test_json_parser/meson.build
@@ -19,6 +19,18 @@ test_json_parser_incremental = executable('test_json_parser_incremental',
   },
 )
 
+# A second version of test_json_parser_incremental, this time compiled against
+# the shared-library flavor of jsonapi.
+test_json_parser_incremental_shlib = executable('test_json_parser_incremental_shlib',
+  test_json_parser_incremental_sources,
+  dependencies: [frontend_shlib_code, libpq],
+  c_args: ['-DJSONAPI_SHLIB_ALLOC'],
+  link_with: [common_excluded_shlib],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+
 test_json_parser_perf_sources = files(
   'test_json_parser_perf.c',
 )
diff --git a/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl b/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl
index abf0d7a237..8cc42e8e29 100644
--- a/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl
+++ b/src/test/modules/test_json_parser/t/001_test_json_parser_incremental.pl
@@ -13,20 +13,25 @@ use FindBin;
 
 my $test_file = "$FindBin::RealBin/../tiny.json";
 
-my $exe = "test_json_parser_incremental";
+my @exes =
+  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
 
-# Test the  usage error
-my ($stdout, $stderr) = run_command([ $exe, "-c", 10 ]);
-like($stderr, qr/Usage:/, 'error message if not enough arguments');
+foreach my $exe (@exes)
+{
+	note "testing executable $exe";
 
-# Test that we get success for small chunk sizes from 64 down to 1.
+	# Test the usage error
+	my ($stdout, $stderr) = run_command([ $exe, "-c", 10 ]);
+	like($stderr, qr/Usage:/, 'error message if not enough arguments');
 
-for (my $size = 64; $size > 0; $size--)
-{
-	($stdout, $stderr) = run_command([ $exe, "-c", $size, $test_file ]);
+	# Test that we get success for small chunk sizes from 64 down to 1.
+	for (my $size = 64; $size > 0; $size--)
+	{
+		($stdout, $stderr) = run_command([ $exe, "-c", $size, $test_file ]);
 
-	like($stdout, qr/SUCCESS/, "chunk size $size: test succeeds");
-	is($stderr, "", "chunk size $size: no error output");
+		like($stdout, qr/SUCCESS/, "chunk size $size: test succeeds");
+		is($stderr, "", "chunk size $size: no error output");
+	}
 }
 
 done_testing();
diff --git a/src/test/modules/test_json_parser/t/002_inline.pl b/src/test/modules/test_json_parser/t/002_inline.pl
index 8d62eb44c8..5b6c6dc4ae 100644
--- a/src/test/modules/test_json_parser/t/002_inline.pl
+++ b/src/test/modules/test_json_parser/t/002_inline.pl
@@ -13,13 +13,13 @@ use Test::More;
 use File::Temp qw(tempfile);
 
 my $dir = PostgreSQL::Test::Utils::tempdir;
+my $exe;
 
 sub test
 {
 	local $Test::Builder::Level = $Test::Builder::Level + 1;
 
 	my ($name, $json, %params) = @_;
-	my $exe = "test_json_parser_incremental";
 	my $chunk = length($json);
 
 	# Test the input with chunk sizes from max(input_size, 64) down to 1
@@ -53,86 +53,99 @@ sub test
 	}
 }
 
-test("number", "12345");
-test("string", '"hello"');
-test("false", "false");
-test("true", "true");
-test("null", "null");
-test("empty object", "{}");
-test("empty array", "[]");
-test("array with number", "[12345]");
-test("array with numbers", "[12345,67890]");
-test("array with null", "[null]");
-test("array with string", '["hello"]');
-test("array with boolean", '[false]');
-test("single pair", '{"key": "value"}');
-test("heavily nested array", "[" x 3200 . "]" x 3200);
-test("serial escapes", '"\\\\\\\\\\\\\\\\"');
-test("interrupted escapes", '"\\\\\\"\\\\\\\\\\"\\\\"');
-test("whitespace", '     ""     ');
-
-test("unclosed empty object",
-	"{", error => qr/input string ended unexpectedly/);
-test("bad key", "{{", error => qr/Expected string or "}", but found "\{"/);
-test("bad key", "{{}", error => qr/Expected string or "}", but found "\{"/);
-test("numeric key", "{1234: 2}",
-	error => qr/Expected string or "}", but found "1234"/);
-test(
-	"second numeric key",
-	'{"a": "a", 1234: 2}',
-	error => qr/Expected string, but found "1234"/);
-test(
-	"unclosed object with pair",
-	'{"key": "value"',
-	error => qr/input string ended unexpectedly/);
-test("missing key value",
-	'{"key": }', error => qr/Expected JSON value, but found "}"/);
-test(
-	"missing colon",
-	'{"key" 12345}',
-	error => qr/Expected ":", but found "12345"/);
-test(
-	"missing comma",
-	'{"key": 12345 12345}',
-	error => qr/Expected "," or "}", but found "12345"/);
-test("overnested array",
-	"[" x 6401, error => qr/maximum permitted depth is 6400/);
-test("overclosed array",
-	"[]]", error => qr/Expected end of input, but found "]"/);
-test("unexpected token in array",
-	"[ }}} ]", error => qr/Expected array element or "]", but found "}"/);
-test("junk punctuation", "[ ||| ]", error => qr/Token "|" is invalid/);
-test("missing comma in array",
-	"[123 123]", error => qr/Expected "," or "]", but found "123"/);
-test("misspelled boolean", "tru", error => qr/Token "tru" is invalid/);
-test(
-	"misspelled boolean in array",
-	"[tru]",
-	error => qr/Token "tru" is invalid/);
-test("smashed top-level scalar", "12zz",
-	error => qr/Token "12zz" is invalid/);
-test(
-	"smashed scalar in array",
-	"[12zz]",
-	error => qr/Token "12zz" is invalid/);
-test(
-	"unknown escape sequence",
-	'"hello\vworld"',
-	error => qr/Escape sequence "\\v" is invalid/);
-test("unescaped control",
-	"\"hello\tworld\"",
-	error => qr/Character with value 0x09 must be escaped/);
-test(
-	"incorrect escape count",
-	'"\\\\\\\\\\\\\\"',
-	error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
-
-# Case with three bytes: double-quote, backslash and <f5>.
-# Both invalid-token and invalid-escape are possible errors, because for
-# smaller chunk sizes the incremental parser skips the string parsing when
-# it cannot find an ending quote.
-test("incomplete UTF-8 sequence",
-	"\"\\\x{F5}",
-	error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
+my @exes =
+  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
+
+foreach (@exes)
+{
+	$exe = $_;
+	note "testing executable $exe";
+
+	test("number", "12345");
+	test("string", '"hello"');
+	test("false", "false");
+	test("true", "true");
+	test("null", "null");
+	test("empty object", "{}");
+	test("empty array", "[]");
+	test("array with number", "[12345]");
+	test("array with numbers", "[12345,67890]");
+	test("array with null", "[null]");
+	test("array with string", '["hello"]');
+	test("array with boolean", '[false]');
+	test("single pair", '{"key": "value"}');
+	test("heavily nested array", "[" x 3200 . "]" x 3200);
+	test("serial escapes", '"\\\\\\\\\\\\\\\\"');
+	test("interrupted escapes", '"\\\\\\"\\\\\\\\\\"\\\\"');
+	test("whitespace", '     ""     ');
+
+	test("unclosed empty object",
+		"{", error => qr/input string ended unexpectedly/);
+	test("bad key", "{{",
+		error => qr/Expected string or "}", but found "\{"/);
+	test("bad key", "{{}",
+		error => qr/Expected string or "}", but found "\{"/);
+	test("numeric key", "{1234: 2}",
+		error => qr/Expected string or "}", but found "1234"/);
+	test(
+		"second numeric key",
+		'{"a": "a", 1234: 2}',
+		error => qr/Expected string, but found "1234"/);
+	test(
+		"unclosed object with pair",
+		'{"key": "value"',
+		error => qr/input string ended unexpectedly/);
+	test("missing key value",
+		'{"key": }', error => qr/Expected JSON value, but found "}"/);
+	test(
+		"missing colon",
+		'{"key" 12345}',
+		error => qr/Expected ":", but found "12345"/);
+	test(
+		"missing comma",
+		'{"key": 12345 12345}',
+		error => qr/Expected "," or "}", but found "12345"/);
+	test("overnested array",
+		"[" x 6401, error => qr/maximum permitted depth is 6400/);
+	test("overclosed array",
+		"[]]", error => qr/Expected end of input, but found "]"/);
+	test("unexpected token in array",
+		"[ }}} ]", error => qr/Expected array element or "]", but found "}"/);
+	test("junk punctuation", "[ ||| ]", error => qr/Token "|" is invalid/);
+	test("missing comma in array",
+		"[123 123]", error => qr/Expected "," or "]", but found "123"/);
+	test("misspelled boolean", "tru", error => qr/Token "tru" is invalid/);
+	test(
+		"misspelled boolean in array",
+		"[tru]",
+		error => qr/Token "tru" is invalid/);
+	test(
+		"smashed top-level scalar",
+		"12zz",
+		error => qr/Token "12zz" is invalid/);
+	test(
+		"smashed scalar in array",
+		"[12zz]",
+		error => qr/Token "12zz" is invalid/);
+	test(
+		"unknown escape sequence",
+		'"hello\vworld"',
+		error => qr/Escape sequence "\\v" is invalid/);
+	test("unescaped control",
+		"\"hello\tworld\"",
+		error => qr/Character with value 0x09 must be escaped/);
+	test(
+		"incorrect escape count",
+		'"\\\\\\\\\\\\\\"',
+		error => qr/Token ""\\\\\\\\\\\\\\"" is invalid/);
+
+	# Case with three bytes: double-quote, backslash and <f5>.
+	# Both invalid-token and invalid-escape are possible errors, because for
+	# smaller chunk sizes the incremental parser skips the string parsing when
+	# it cannot find an ending quote.
+	test("incomplete UTF-8 sequence",
+		"\"\\\x{F5}",
+		error => qr/(Token|Escape sequence) ""?\\\x{F5}" is invalid/);
+}
 
 done_testing();
diff --git a/src/test/modules/test_json_parser/t/003_test_semantic.pl b/src/test/modules/test_json_parser/t/003_test_semantic.pl
index b6553bbcdd..c11480172d 100644
--- a/src/test/modules/test_json_parser/t/003_test_semantic.pl
+++ b/src/test/modules/test_json_parser/t/003_test_semantic.pl
@@ -16,24 +16,31 @@ use File::Temp qw(tempfile);
 my $test_file = "$FindBin::RealBin/../tiny.json";
 my $test_out = "$FindBin::RealBin/../tiny.out";
 
-my $exe = "test_json_parser_incremental";
+my @exes =
+  ("test_json_parser_incremental", "test_json_parser_incremental_shlib");
 
-my ($stdout, $stderr) = run_command([ $exe, "-s", $test_file ]);
+foreach my $exe (@exes)
+{
+	note "testing executable $exe";
 
-is($stderr, "", "no error output");
+	my ($stdout, $stderr) = run_command([ $exe, "-s", $test_file ]);
 
-my $dir = PostgreSQL::Test::Utils::tempdir;
-my ($fh, $fname) = tempfile(DIR => $dir);
+	is($stderr, "", "no error output");
 
-print $fh $stdout, "\n";
+	my $dir = PostgreSQL::Test::Utils::tempdir;
+	my ($fh, $fname) = tempfile(DIR => $dir);
 
-close($fh);
+	print $fh $stdout, "\n";
 
-my @diffopts = ("-u");
-push(@diffopts, "--strip-trailing-cr") if $windows_os;
-($stdout, $stderr) = run_command([ "diff", @diffopts, $fname, $test_out ]);
+	close($fh);
 
-is($stdout, "", "no output diff");
-is($stderr, "", "no diff error");
+	my @diffopts = ("-u");
+	push(@diffopts, "--strip-trailing-cr") if $windows_os;
+	($stdout, $stderr) =
+	  run_command([ "diff", @diffopts, $fname, $test_out ]);
+
+	is($stdout, "", "no output diff");
+	is($stderr, "", "no diff error");
+}
 
 done_testing();
-- 
2.34.1

Attachment: v28-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 6be4888464c09af9abba4437937286003f51f60f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 13 Apr 2021 10:27:27 -0700
Subject: [PATCH v28 2/5] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client
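
For illustration, the debug mode could be enabled for a single psql run like
this (the connection string is the placeholder example from earlier in this
commit message):

```shell
# DO NOT use in production: permits plaintext HTTP and logs secrets to stderr.
PGOAUTHDEBUG=UNSAFE psql 'host=example.org oauth_client_id=f02c6361-0635-...'
```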

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2222 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 25 files changed, 3577 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 537366945c..8c4d9736c3 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8493,6 +8496,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13024,6 +13073,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14049,6 +14182,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 4e279c4bd6..9c6dbe5ccd 100644
--- a/configure.ac
+++ b/configure.ac
@@ -920,6 +920,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1402,6 +1422,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1592,6 +1617,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index f916fce414..f2a761e0a3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2335,6 +2335,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9940,6 +9977,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index ea07126f78..a2e2479821 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3080,6 +3109,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3752,6 +3782,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 979925cc2e..196c96fbb8 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -244,6 +244,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -727,6 +730,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 27f8499d8a..7d593778ec 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..435abee56a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2222 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* number of running libcurl easy handles */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	const char *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison of the prefix only, rather than
+	 * comparing the whole string: media type parameters may follow.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			/* HTTP optional whitespace allows only spaces and htabs. */
+			case ' ':
+			case '\t':
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently. We
+		 * accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available for this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "could not set timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not set kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in
+	 * the (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1; /* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char * const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN: /* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT: /* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * keep the options in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of CURLoption.
+	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement
+	 * didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char * const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE which by default is 16kb (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can immediately
+	 * call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (It
+		 * doesn't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the foot
+		 * in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, work_buffer, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		if (err->error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds (RFC 8628, Sec. 3.5).
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		fprintf(stderr, "Visit %s and enter the code: %s\n",
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+				/* FALLTHROUGH */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on start_request()
+		 * to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break; /* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+				/*
+				 * No Curl requests are running, so we can simplify by
+				 * having the client wait directly on the timerfd rather
+				 * than the multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f943a31cc0
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
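The retry policy at the end of handle_oauth_sasl_error() boils down to a small predicate. Here it is as a standalone sketch (the helper name is mine, and it assumes the "status" field and discovery URI have already been extracted from the server's JSON error body):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Decide whether a failed OAUTHBEARER exchange is worth retrying.
 * invalid_token is the only retryable error code (RFC 7628), and only if we
 * know where to go to fetch a fresh token.
 */
static bool
want_oauth_retry(const char *status, const char *discovery_uri)
{
	return status != NULL
		&& strcmp(status, "invalid_token") == 0
		&& discovery_uri != NULL;
}
```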
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
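derive_discovery_uri() is plain string concatenation; a standalone sketch of the same derivation (hypothetical helper name, returning a malloc'd string or NULL on OOM):

```c
#include <stdlib.h>
#include <string.h>

/* Build the OIDC discovery URI for an explicitly-configured issuer. */
static char *
well_known_discovery_uri(const char *issuer)
{
	const char *suffix = "/.well-known/openid-configuration";
	size_t		len = strlen(issuer) + strlen(suffix) + 1;
	char	   *uri = malloc(len);

	if (uri == NULL)
		return NULL;

	strcpy(uri, issuer);
	strcat(uri, suffix);
	return uri;
}
```

Note that an issuer ending in '/' would produce a double slash; the patch appends the suffix verbatim as well, so presumably issuers are expected without a trailing slash.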
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
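client_initial_response() is referenced above but its body falls outside this hunk. Per RFC 7628 (sec. 3.1), the initial client message is a GS2 header followed by "auth=Bearer <token>" framed by 0x01 key/value separators. A standalone sketch of that framing, with no channel binding and no authzid (helper name is mine):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Frame an OAUTHBEARER initial response (RFC 7628, sec. 3.1):
 * "n,," GS2 header, then kvsep, "auth=<value>", kvsep, kvsep.
 * (Octal \001 is used deliberately; "\x01a..." would misparse as one
 * longer hex escape.)
 */
static char *
oauthbearer_initial_response(const char *auth_value)
{
	size_t		len = strlen("n,,\001auth=\001\001") + strlen(auth_value) + 1;
	char	   *resp = malloc(len);

	if (resp == NULL)
		return NULL;

	snprintf(resp, len, "n,,\001auth=%s\001\001", auth_value);
	return resp;
}
```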
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 5c8f404463..922cbc0054 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -39,6 +39,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -429,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -447,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -534,6 +535,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -577,26 +587,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -641,7 +673,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -671,11 +703,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -983,12 +1025,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1146,7 +1194,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1163,7 +1211,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1479,3 +1528,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index ab308a0580..c9a0cb79f0 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -364,6 +364,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -627,6 +644,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2644,6 +2662,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3711,6 +3730,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3866,6 +3886,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3899,7 +3929,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3936,6 +3976,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4617,6 +4692,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4734,6 +4810,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7216,6 +7297,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index ca3e028a51..2c68ca041e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -56,6 +56,10 @@ extern "C"
 /* Indicates presence of PQsocketPoll, PQgetCurrentTimeUSec */
 #define LIBPQ_HAS_SOCKET_POLL 1
 
+/* Features added in PostgreSQL v18: */
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
+
 /*
  * Option flags for PQcopyResult
  */
@@ -99,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -180,6 +186,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -712,10 +725,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 8ed1b28fcc..e617f39bef 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 7623aeadab..cf1da9c1a7 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9e951a9e6f..b0a44e06a5 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -368,6 +369,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1715,6 +1718,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1779,6 +1783,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1939,11 +1944,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3451,6 +3459,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v28-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 1eb3d4798c7358e403e951fc20853c5c99c8cd21 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v28 3/5] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, make an authorization decision based on
      the bearer token, in combination with the HBA option
      trust_validator_authz=1 (see below).

      The hard part is determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise for the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)
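
To make the division of labor concrete, here is a minimal sketch of a
validator module against the API as it appears in this patch. The
callback names and result members (validate_cb, authorized, authn_id,
_PG_oauth_validator_module_init) come from the patch itself; the two
helper functions are hypothetical placeholders for the issuer-specific
logic described above:

```c
#include "postgres.h"
#include "fmgr.h"
#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static ValidatorModuleResult *
validate_token(ValidatorModuleState *state, const char *token,
               const char *role)
{
	ValidatorModuleResult *res = palloc0(sizeof(ValidatorModuleResult));

	/*
	 * Issuer-specific work goes here: verify the token's signature, or
	 * present it to the issuer for introspection. Both helpers below
	 * are hypothetical placeholders.
	 */
	res->authorized = token_is_valid_for_role(token, role);
	res->authn_id = extract_end_user_identity(token);

	return res;
}

static const OAuthValidatorCallbacks callbacks = {
	.validate_cb = validate_token,
	/* startup_cb and shutdown_cb are optional and may be left NULL. */
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &callbacks;
}
```

The library implementing this would then be named in the
oauth_validator_library GUC so the server can load it.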

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the validator module, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |   9 +
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   | 187 +++++
 .../modules/oauth_validator/t/oauth_server.py | 270 +++++++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 31 files changed, 1554 insertions(+), 46 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 1ce6c443a8..94187cea06 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..3c7884baf9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,9 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+
+ <para>
+  TODO
+ </para>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ec9f90e283..bfb73991e7 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -263,6 +263,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..2a0d74a079
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 2b607c5270..0a5a8640fc 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 75d588e36a..2245ae24a8 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1743,6 +1744,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2062,8 +2065,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2446,6 +2450,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
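
For reference while reading the hba.c hunks above, the new options combine into pg_hba.conf lines like the following (the option names match the parser above; the issuer URL and map name are hypothetical):

```
# Authentication via usermap: the validator supplies an authn_id, which
# is then matched against the "oauthmap" ident map.
local   all   all   oauth   issuer="https://issuer.example.com" scope="openid" map=oauthmap

# Authorization-only: trust the validator's yes/no decision and skip the
# usermap entirely, allowing pseudonymous connections.
local   all   all   oauth   issuer="https://issuer.example.com" scope="openid" trust_validator_authz=1
```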
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 880d76aae0..e35ee36617 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 435abee56a..d9c9fc6cf9 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -143,7 +143,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1864,6 +1864,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1892,13 +1895,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * acceptable errors; anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..16ee8acd8f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,187 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->stop;
+
+done_testing();
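
The connstr() helper above smuggles test parameters to the mock server through the oauth_client_id field; the encoding is just Base64 over JSON. A Python sketch of the same round trip (the function names here are illustrative; they mirror the Perl encode side and the server's do_POST() decode side):

```python
import base64
import json


def encode_test_params(**params) -> str:
    """Pack test parameters the way connstr() does: JSON, then Base64."""
    js = json.dumps(params)
    return base64.b64encode(js.encode("ascii")).decode("ascii")


def decode_test_params(client_id: str) -> dict:
    """Unpack them the way the mock server's do_POST() does."""
    return json.loads(base64.b64decode(client_id))


client_id = encode_test_params(stage="token", retries=2)
assert decode_test_params(client_id) == {"stage": "token", "retries": 2}
```

Note that JSON::PP::null on the Perl side arrives as None after decoding, which is how the "default interval" test asks the server to omit the interval field.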
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..b17198302b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,270 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "content_type" in self._test_params:
+            return self._test_params["content_type"]
+
+        return "application/json"
+
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        if self._should_modify() and "interval" in self._test_params:
+            return self._test_params["interval"]
+
+        return 0
+
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "retry_code" in self._test_params:
+            return self._test_params["retry_code"]
+
+        return "authorization_pending"
+
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "uri_spelling" in self._test_params:
+            return self._test_params["uri_spelling"]
+
+        return "verification_uri"
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type())
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling(): uri,
+            "expires_in": 5,
+        }
+
+        interval = self._interval()
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code()}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..7b4dc9c494
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index fe6ebf10f7..d6f9c4cd8b 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2397,6 +2397,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, it is matched against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2440,7 +2445,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..abdff5a3c3
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill('TERM', $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b0a44e06a5..d9d988a03c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1719,6 +1719,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3065,6 +3066,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3660,6 +3663,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

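[Editorial aside for reviewers: the mock authorization server above checks that the client honors the advertised polling `interval` and retries on the "pending" error until the token is released. The client-side half of that exchange, the RFC 8628 device-flow polling loop, can be sketched as follows. This is illustrative only and not part of the patch; the `request_token` callable and the `slow_down` back-off are assumptions drawn from the RFC, not from the libpq implementation.]

```python
import time


def poll_for_token(request_token, interval=5, timeout=60):
    """Poll an OAuth token endpoint until the device flow completes.

    request_token is a callable that performs one token request and returns
    the parsed JSON body. The loop honors the server-advertised polling
    interval and backs off on "slow_down", per RFC 8628 Section 3.5.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = request_token()
        if "access_token" in resp:
            return resp

        error = resp.get("error")
        if error == "authorization_pending":
            pass  # the user hasn't approved yet; wait one interval and retry
        elif error == "slow_down":
            interval += 5  # RFC 8628: increase the polling interval by 5s
        else:
            raise RuntimeError(f"token endpoint returned error: {error}")

        time.sleep(interval)

    raise TimeoutError("device authorization flow timed out")
```

The mock server's `retries` test parameter effectively counts how many times this loop takes the `authorization_pending` branch before the token is handed out.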
v28-0004-Review-comments.patchapplication/octet-stream; name=v28-0004-Review-comments.patchDownload
From de9b6ab514459af52ed937ee049f69209110fc23 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v28 4/5] Review comments

Fixes and tidy-ups following a review of v21. A few notable items
(listed in no specific order):

* Implement a version check for libcurl in autoconf, the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/backend/libpq/auth-oauth.c            | 22 ++++----
 src/interfaces/libpq/fe-auth-oauth-curl.c | 66 +++++++++++++++--------
 2 files changed, 57 insertions(+), 31 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 2a0d74a079..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index d9c9fc6cf9..0e52218422 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -1334,7 +1336,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1356,9 +1363,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* If the response would exceed our size cap, abort the transfer. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error to abort the transfer if we ran out of memory while
+	 * accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1675,7 +1692,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1817,32 +1839,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we may have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
 /*
-- 
2.34.1

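[Editorial aside for reviewers: the restructured finish_token_request() above reduces to a small decision table over the HTTP status code. A sketch of that table, mirroring RFC 6749 Section 5 — the function name and return shape here are hypothetical, chosen only to make the control flow easy to eyeball:]

```python
def classify_token_response(status_code, body):
    """Classify a token-endpoint response the way the patched
    finish_token_request() does; body is the parsed JSON response.

    Per RFC 6749 Section 5: success is 200 OK, and errors use either
    400 Bad Request or 401 Unauthorized. Anything else (including the
    403s reportedly seen in the wild) is rejected as out of spec.
    """
    if status_code == 200:
        if "access_token" not in body:
            raise ValueError("failed to parse access token response")
        return ("token", body["access_token"])

    if status_code in (400, 401):
        # Error responses carry a machine-readable "error" code.
        return ("error", body.get("error", "unknown"))

    raise ValueError(f"unexpected response code {status_code}")
```

Handling the 200 branch first, then the 400/401 branch, then failing on everything else matches the order of the rewritten C function.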
v28-0005-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchapplication/octet-stream; name=v28-0005-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchDownload
From 3f197230180bc5429efdf61501b735dbe62d702f Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v28 5/5] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1864 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1074 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5577 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 94187cea06..a127042b4b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -374,6 +375,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -384,7 +387,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index a2e2479821..6dbc383022 100644
--- a/meson.build
+++ b/meson.build
@@ -3423,6 +3423,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3584,6 +3587,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index c3d0dfedf1..f401ec179e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -7,6 +7,7 @@ subdir('authentication')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
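As a sanity check on the helpers above: `Hi(str, salt, i)` from RFC 5802 Section 2.2 is exactly PBKDF2-HMAC-SHA256 with a one-block (32-byte) output, so a standalone reimplementation can be cross-checked against the stdlib. This sketch uses only `hashlib`/`hmac` (the test file itself uses the `cryptography` package):

```python
# Cross-check: the hand-rolled Hi() equals hashlib.pbkdf2_hmac for
# SHA-256 with the default 32-byte output.
import hashlib
import hmac

def hmac_256(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def h_i(data, salt, i):
    # U1 = HMAC(key, salt || INT(1)); Ui = HMAC(key, U_{i-1}); result = XOR of all Ui
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc

assert h_i(b"secret", b"12345", 4096) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 4096
)
```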
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..17bd2d3d88
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1864 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # maxsplit=1: the value may itself contain "="
+    assert key == b"auth"
+
+    return value
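For reference, the wire format that get_auth_value() unpacks is the OAUTHBEARER initial client response from RFC 7628: a GS2 header, then ^A-separated key/value pairs, terminated by a double ^A. A small sketch (the token value is made up):

```python
# RFC 7628 initial-response layout; "some-opaque-token" is illustrative.
msg = b"n,,\x01auth=Bearer some-opaque-token\x01\x01"

kvpairs = msg.split(b"\x01")
assert kvpairs[0] == b"n,,"       # GS2 header: no channel binding, no authzid
assert kvpairs[2:] == [b"", b""]  # double ^A terminator

key, value = kvpairs[1].split(b"=", 1)
assert key == b"auth"
assert value == b"Bearer some-opaque-token"
```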
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OpenID provider thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
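The two-step definition above (declare the empty Structure, then assign `_fields_`) is the standard ctypes idiom for a struct whose function-pointer fields take a pointer to the struct's own type. A minimal, hypothetical sketch of the same pattern:

```python
# Hypothetical self-referential struct: the CFUNCTYPE needs
# POINTER(Node), so Node must exist (even if still empty) before
# _fields_ can mention the callback type.
import ctypes

class Node(ctypes.Structure):
    pass

CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.POINTER(Node))
Node._fields_ = [("value", ctypes.c_int), ("cb", CALLBACK)]

@CALLBACK
def get_value(p_node):
    return p_node.contents.value

n = Node(value=7, cb=get_value)
assert n.cb(ctypes.byref(n)) == 7
```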
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. It's not very
+    efficient, but it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
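As a quick illustration of the pattern this helper produces (a standalone sketch equivalent to the loop above, not part of the patch):

```python
import re

def alt_patterns(*patterns):
    # Equivalent to the loop-based version: parenthesize each alternative
    # and join them with "|".
    return "|".join(f"({p})" for p in patterns)

combined = alt_patterns(r"foo\d+", r"bar")
assert combined == r"(foo\d+)|(bar)"
assert re.search(combined, "foo42") is not None
assert re.search(combined, "bar") is not None
assert re.search(combined, "baz") is None
```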
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
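The retry behavior exercised here follows RFC 8628 token polling: on `authorization_pending` the client waits the advertised interval and retries, while any other error is terminal. A hedged sketch of that loop, where `post_token_request` is a hypothetical stand-in for the real HTTP POST (the patch's actual implementation lives in libpq, not here):

```python
def poll_for_token(post_token_request, max_polls=10):
    """Poll the token endpoint until success or a terminal error.

    post_token_request() stands in for the real HTTP POST; it returns a
    (status_code, json_body) tuple.
    """
    for _ in range(max_polls):
        status, body = post_token_request()
        if status == 200:
            return body["access_token"]
        error = body.get("error")
        if error == "authorization_pending":
            # RFC 8628: the user hasn't finished yet; wait the advertised
            # interval and retry. (The sleep is elided so the sketch runs
            # instantly.)
            continue
        # Any other error ends the flow, as the test above asserts.
        raise RuntimeError(f"token request failed: {error}")
    raise RuntimeError("gave up polling")

# Simulate a server that reports pending once, then succeeds.
responses = iter([
    (400, {"error": "authorization_pending"}),
    (200, {"access_token": "tok", "token_type": "bearer"}),
])
assert poll_for_token(lambda: next(responses)) == "tok"
```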
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # We will fail the first SASL exchange. First return a link to the
+            # discovery document, pointing to the test provider server.
+            resp = dict(base_response)
+
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp["openid-configuration"] = discovery_uri
+
+            if scope:
+                resp["scope"] = scope
+
+            resp = json.dumps(resp)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
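The error "challenge" built in the first leg of this test is the RFC 7628 status document. A minimal standalone sketch of constructing and re-parsing one (the issuer URL here is illustrative, not taken from the patch):

```python
import json

# The failure response the server returns in the first SASL exchange: a JSON
# object with a "status" member, optionally carrying the discovery URI and the
# requested scope (RFC 7628, Sec. 3.2.2).
issuer = "https://issuer.example.com"  # illustrative
resp = {
    "status": "invalid_token",
    "openid-configuration": f"{issuer}/.well-known/openid-configuration",
    "scope": "openid email",
}
body = json.dumps(resp).encode("ascii")

# A client parses it back and picks out the discovery document location.
parsed = json.loads(body)
assert parsed["status"] == "invalid_token"
assert parsed["openid-configuration"].endswith("/.well-known/openid-configuration")
```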
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
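For reference, the message shapes these exchanges assert on can be sketched directly from RFC 7628 (the token value here is illustrative):

```python
# OAUTHBEARER initial client response (RFC 7628, Sec. 3.1): a GS2 header,
# then "key=value" pairs delimited by ^A (0x01), terminated by a double ^A.
token = "some-opaque-token"  # illustrative; a real token comes from the issuer
initial = b"n,," + b"\x01" + f"auth=Bearer {token}".encode("ascii") + b"\x01\x01"

# Pull the auth value back out, the way a server-side parser would.
fields = initial.split(b"\x01")
assert fields[0] == b"n,,"
assert fields[1] == f"auth=Bearer {token}".encode("ascii")

# After the server's error challenge, the client's only valid reply is a
# single ^A byte (RFC 7628, Sec. 3.2.3) -- the "dummy response" these tests
# check for before the exchange is failed.
dummy = b"\x01"
assert dummy == b"\x01"
```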
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    Plain-HTTP issuers must be refused unless PGOAUTHDEBUG is set.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG", raising=False)
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            discovery_uri = f"{openid_provider.issuer}/.well-known/openid-configuration"
+            resp = json.dumps(
+                {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+            )
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=resp.encode("ascii"),
+            )
+
+            # Per RFC 7628, the client must send a dummy ^A (kvsep) response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange.
+            pq3.send(
+                conn,
+                pq3.types.ErrorResponse,
+                fields=[
+                    b"SFATAL",
+                    b"C28000",
+                    b"Mdoesn't matter",
+                    b"",
+                ],
+            )
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per the pytest documentation, this must live in the top-level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
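(Aside for reviewers: the bit packing above is easy to sanity-check standalone. The second assertion below is the well-known SSLRequest magic, which `tls_handshake()` later sends as `protocol(1234, 5679)`.)

```python
# Standalone sketch of pq3.protocol(): the major version occupies the
# high 16 bits of the int32, the minor version the low 16 bits.
def protocol(major, minor):
    return (major << 16) | minor

assert protocol(3, 0) == 0x00030000        # ordinary v3 startup
assert protocol(1234, 5679) == 0x04D2162F  # SSLRequest magic number
```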
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        out = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            out.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            out.append(v)
+
+        out.append(b"")
+        return out
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
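(For reviewers unfamiliar with the v3 startup framing: here's a dependency-free sketch of the bytes the Startup struct produces. The `build_startup` helper is illustrative only, not part of the patch.)

```python
import struct

def build_startup(params, proto=(3 << 16)):
    """Sketch of the startup wire format: int32 total length (self-inclusive),
    int32 protocol version, then NUL-terminated key/value strings, closed by
    one extra NUL byte."""
    payload = b""
    for k, v in params.items():
        payload += k.encode() + b"\x00" + v.encode() + b"\x00"
    payload += b"\x00"
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice"})
assert pkt[:8] == struct.pack("!ii", len(pkt), 0x00030000)
assert pkt[8:] == b"user\x00alice\x00\x00"
```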
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
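(Aside: the layout this Construct parses is the mechanism name, NUL-terminated, followed by a signed int32 data length, where -1 means "no initial response", then the data itself. A standalone sketch; the OAUTHBEARER payload shown is a made-up illustrative token, not a real exchange:)

```python
import struct

def sasl_initial_response(mechanism, data=None):
    """Sketch of the SASLInitialResponse body: NUL-terminated mechanism name,
    signed int32 length (-1 when no data follows), then the data bytes."""
    if data is None:
        return mechanism + b"\x00" + struct.pack("!i", -1)
    return mechanism + b"\x00" + struct.pack("!i", len(data)) + data

# OAUTHBEARER initial response per RFC 7628: gs2 header, kvsep-delimited
# key/value pairs, double kvsep terminator. "tok" is a placeholder.
body = b"n,,\x01auth=Bearer tok\x01\x01"
msg = sasl_initial_response(b"OAUTHBEARER", body)
assert msg.startswith(b"OAUTHBEARER\x00")
assert struct.unpack("!i", msg[12:16])[0] == len(body)
```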
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
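(Aside: the regular-packet framing the Pq3 struct captures is one type byte, then an int32 length that covers itself plus the payload, but not the type byte. A minimal sketch, with a Query packet as the example:)

```python
import struct

def pq3_frame(type_byte, payload):
    """Sketch of v3 regular-packet framing: 1-byte type, int32 length
    (self-inclusive, excludes the type byte), then the payload."""
    return type_byte + struct.pack("!I", len(payload) + 4) + payload

pkt = pq3_frame(b"Q", b"SELECT 1;\x00")
assert pkt == b"Q\x00\x00\x00\x0eSELECT 1;\x00"
```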
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    unprintable = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            unprintable += bytes([i])
+
+    unprintable += bytes(range(128, 256))
+
+    return bytes.maketrans(unprintable, b"." * len(unprintable))
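(Aside: the `bytes.maketrans` trick above is compact but easy to misread; an inlined standalone copy showing its behavior, printable ASCII passes through and everything else becomes '.':)

```python
def hexdump_translation_map():
    """Standalone copy of the helper above: build a translation table that
    maps unprintable and non-ASCII bytes to b"." for hexdump display."""
    unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(unprintable, b"." * len(unprintable))

table = hexdump_translation_map()
assert b"SELECT 1;".translate(table) == b"SELECT 1;"  # printable: unchanged
assert b"\x00\x01\xfe".translate(table) == b"..."     # control/high bytes
```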
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member to assign to the packet. If payload_data is given, it is used as the
+    packet payload; otherwise the key/value pairs in payloadkw become the
+    payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = {"base", "PG_VERSION", "postgresql.conf"}
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough OAuth validator module for testing',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..dbb8b8823c
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * test_validate(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
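For readers skimming the patch, the C validator's decision table above can be restated as a short Python sketch. This is illustrative only, not part of the module; the keyword arguments mirror the oauthtest.* GUCs:

```python
def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    """Mirror of test_validate()'s branches in oauthtest.c (sketch only)."""
    result = {"authorized": False, "authn_id": None}

    if reflect_role:
        # Ignore the token entirely; echo the requested role back.
        result["authorized"] = True
        result["authn_id"] = role
    else:
        if expected_bearer and token == expected_bearer:
            result["authorized"] = True
        if set_authn_id:
            result["authn_id"] = authn_id

    return result

# A matching token authorizes without an authn_id unless one is configured.
assert validate("tok", "alice", expected_bearer="tok") == {
    "authorized": True, "authn_id": None,
}
# reflect_role trusts any token and reports the role as the identity.
assert validate("anything", "alice", reflect_role=True) == {
    "authorized": True, "authn_id": "alice",
}
```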
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..427ab063e6
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1074 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
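The contract of prepend_file (new lines first, original content preserved, backup restored on exit) can be exercised standalone; the sketch below duplicates the helper so it runs without the test harness:

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def prepend_file(path, lines, *, suffix=".bak"):
    # Back up the original, write the new lines plus the old content, and
    # restore the backup on exit (even on error).
    bak = path + suffix
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "pg_hba.conf")
    with open(path, "w") as f:
        f.write("host all all samehost trust\n")

    with prepend_file(path, ["local all all trust\n"]):
        with open(path) as f:
            contents = f.read()
        # The prepended line comes first; the original content follows.
        assert contents == "local all all trust\nhost all all samehost trust\n"

    # On exit the original file is restored from the backup.
    with open(path) as f:
        assert f.read() == "host all all samehost trust\n"
```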
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The token length
+    in characters may be specified; if unset, a small 16-character token will
+    be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
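The size arithmetic above relies on base64url encoding three bytes into four characters: requesting `size // 4 * 3` random bytes yields exactly `size` unpadded characters, which is why `size` must be a multiple of four. For example:

```python
import secrets

# token_urlsafe(nbytes) returns ceil(4 * nbytes / 3) base64url characters
# (no padding), so nbytes = size * 3 / 4 gives a token of exactly `size`
# characters whenever size is a multiple of four.
for size in (16, 1024, 4096):
    nbytes = size // 4 * 3
    token = secrets.token_urlsafe(nbytes)
    assert len(token) == size
```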
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
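The bytes assembled by send_initial_response() follow the OAUTHBEARER initial client response from RFC 7628 sec. 3.1: a GS2 header ("n,," here, meaning no channel binding and no authzid), then \x01-separated key=value pairs, closed by a double \x01. A pure-bytes sketch (the pq3 SASLInitialResponse framing is omitted):

```python
KVSEP = b"\x01"

def oauthbearer_client_first(token: bytes) -> bytes:
    """Build the OAUTHBEARER initial response body for a bearer token."""
    gs2_header = b"n,,"  # no channel binding, no authorization identity
    auth_kv = b"auth=Bearer " + token
    return gs2_header + KVSEP + auth_kv + KVSEP + KVSEP

# Matches the layout built inline above: b"n,,\x01auth=" + auth + b"\x01\x01"
assert oauthbearer_client_first(b"abcd") == b"n,,\x01auth=Bearer abcd\x01\x01"
```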
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value, but only the first time, in case a
+                # test sets the same GUC more than once.
+                if guc not in prev:
+                    c.execute(
+                        sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc))
+                    )
+                    prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test supplied a prebuilt packet. (The only reason to do this
+        # is that the packet is corrupt or otherwise unbuildable/unparsable,
+        # so the standard pq3.send() can't be used.)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    setup_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
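
For reference, the malformed-message cases above all violate RFC 7628's framing for the OAUTHBEARER client response: a gs2 header, then \x01-separated key=value pairs, each terminated by \x01, with one extra \x01 closing the list. A minimal standalone sketch of that framing (hypothetical helper names, independent of pq3; error messages only loosely mirror the server's) looks like:

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator

def build_initial_response(token, gs2_header=b"n,,"):
    # gs2 header, one auth=Bearer pair, then the final (empty) terminator.
    return gs2_header + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP

def parse_initial_response(msg):
    # Split off the gs2 header and validate the key/value list framing,
    # raising errors analogous to the server messages exercised above.
    gs2, sep, rest = msg.partition(KVSEP)
    if not sep or not rest.endswith(KVSEP):
        raise ValueError("did not contain a final terminator")
    body = rest[:-1]
    if body and not body.endswith(KVSEP):
        raise ValueError("unterminated key/value pair")
    pairs = {}
    for kv in body.split(KVSEP)[:-1]:
        key, eq, value = kv.partition(b"=")
        if not eq:
            raise ValueError("key without a value")
        if not key:
            raise ValueError("empty key name")
        if key in pairs:
            raise ValueError("contains multiple values for key")
        pairs[key] = value
    return gs2, pairs
```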
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
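
The empty-query round trip above is easy to follow on the wire: Query is 'Q', an int32 length (covering the length field and body), and a NUL-terminated query string; the server answers EmptyQueryResponse ('I', length 4) and then ReadyForQuery. A standalone sketch of the client-side framing (hypothetical helper, not part of pq3):

```python
import struct

def query_message(sql):
    # Frame a simple-protocol Query ('Q') message: type byte, then an
    # int32 length that counts itself plus the NUL-terminated query.
    body = sql + b"\x00"
    return b"Q" + struct.pack("!i", len(body) + 4) + body
```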
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
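
The expected strings in these tests use a conventional hex-dump layout: a direction prefix, a four-digit offset, up to sixteen hex bytes padded to a fixed 47-character column, and a printable-ASCII gutter. A standalone function producing the same row format (hypothetical, independent of _DebugStream) might look like:

```python
def hexdump(data, prefix="< "):
    # 16 bytes per row: offset, hex column padded to 47 chars, ASCII gutter.
    rows = []
    for off in range(0, len(data), 16):
        chunk = data[off : off + 16]
        hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(47)
        text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
        rows.append(f"{prefix}{off:04x}:\t{hexpart}\t{text}\n")
    return "".join(rows)
```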
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
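
The version assertions above hinge on how pq3.protocol() packs the two halves: major in the high 16 bits, minor in the low 16, so 3.0 is 0x00030000 and the 1234.5679 SSLRequest magic is 0x04D2162F. Sketched standalone (same encoding, hedged as an illustration rather than the pq3 implementation):

```python
import struct

def protocol(major, minor):
    # PostgreSQL packs the protocol version as major << 16 | minor.
    return (major << 16) | minor

# Network byte order, as the version word is sent in the startup packet:
startup_word = struct.pack("!i", protocol(3, 0))
ssl_request = struct.pack("!i", protocol(1234, 5679))
```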
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
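
For readers without the construct library: the `Prefixed`/`GreedyRange` pairing in `_Vector` above is just TLS's length-prefixed vector encoding (RFC 8446, Section 3.4). A stdlib-only sketch of the same framing, with a hypothetical helper name:

```python
import struct


def parse_vector(data: bytes, size_fmt: str, elem_size: int):
    """Parse a TLS-style vector: a big-endian byte-length prefix followed
    by packed fixed-size elements. size_fmt is a struct format for the
    prefix, e.g. ">H" for Int16ub or ">B" for Byte."""
    prefix_len = struct.calcsize(size_fmt)
    (nbytes,) = struct.unpack_from(size_fmt, data, 0)
    body = data[prefix_len:prefix_len + nbytes]
    if len(body) != nbytes:
        raise ValueError("truncated vector")
    # Split the body into fixed-size elements (e.g. 2-byte cipher suites).
    return [body[i:i + elem_size] for i in range(0, nbytes, elem_size)]
```

For example, the cipher_suites field of a ClientHello offering TLS_AES_128_GCM_SHA256 and TLS_AES_256_GCM_SHA384 is the bytes `00 04 13 01 13 02`: a 4-byte body holding two 2-byte suites.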
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#121Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#120)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 03.09.24 22:56, Jacob Champion wrote:
>> The parse_strval field could use a better explanation.
>>
>> I actually don't understand the need for this field. AFAICT, this is
>> just used to record whether strval is valid.
>
> No, it's meant to track the value of the need_escapes argument to the
> constructor. I've renamed it and moved the assignment to hopefully
> make that a little more obvious. WDYT?

Yes, this is clearer.

This patch (v28-0001) looks good to me now.
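
For context, the field under discussion records the constructor's need_escapes argument, so the lexer knows whether it should spend the effort de-escaping string values for later use. A loose Python sketch of the idea (all names here are illustrative, not the actual PostgreSQL JSON parser API):

```python
import json


class JsonLexContext:
    """Sketch: only de-escape (and retain) string tokens when the caller
    asked for them via need_escapes at construction time."""

    def __init__(self, need_escapes: bool):
        # Recorded once, at construction -- this is the value the
        # (renamed) field tracks.
        self.need_escapes = need_escapes
        self.strval = None

    def lex_string(self, raw: str):
        # `raw` is the string token as it appeared in the input,
        # surrounding quotes stripped, escapes still present.
        if self.need_escapes:
            # De-escape and keep the result around for semantic callbacks.
            self.strval = json.loads('"' + raw + '"')
        # Otherwise skip the work entirely; strval stays unset.
        return self.strval
```

With need_escapes set, `lex_string("a\\nb")` yields a real newline; without it, the lexer never materializes the value.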

#122Peter Eisentraut
peter@eisentraut.org
In reply to: Peter Eisentraut (#121)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 04.09.24 11:28, Peter Eisentraut wrote:
> On 03.09.24 22:56, Jacob Champion wrote:
>>> The parse_strval field could use a better explanation.
>>>
>>> I actually don't understand the need for this field.  AFAICT, this is
>>> just used to record whether strval is valid.
>>
>> No, it's meant to track the value of the need_escapes argument to the
>> constructor. I've renamed it and moved the assignment to hopefully
>> make that a little more obvious. WDYT?
>
> Yes, this is clearer.
>
> This patch (v28-0001) looks good to me now.

This has been committed.

About the subsequent patches:

Is there any sense in dealing with the libpq and backend patches
separately in sequence, or is this split just for ease of handling?

(I suppose the 0004 "review comments" patch should be folded into the
respective other patches?)

What could be the next steps to keep this moving along, other than stare
at the remaining patches until we're content with them? ;-)

#123Daniel Gustafsson
daniel@yesql.se
In reply to: Peter Eisentraut (#122)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

> On 11 Sep 2024, at 09:37, Peter Eisentraut <peter@eisentraut.org> wrote:
>
> Is there any sense in dealing with the libpq and backend patches separately in sequence, or is this split just for ease of handling?

I think it's just to make reviewing a bit easier. At this point I think they
can be merged together; the split is mostly for historic reasons IIUC, since
the patchset earlier on supported more than one library.

> (I suppose the 0004 "review comments" patch should be folded into the respective other patches?)

Yes (0003 now), along with the 0004 in the attached version (I bumped to v29 as
one commit is now committed, but the attached doesn't change Jacob's commits but
rather adds to them) which contains more review comments. More on that below:

I added a warning to autoconf in case --with-oauth is used without
--with-python, since this combination will error out when running the tests.
It might be superfluous, but I had an embarrassingly long headscratcher myself
as to why the tests kept failing =)

CURL_IGNORE_DEPRECATION(x;) broke pgindent; it needs to keep the semicolon on
the outside, like CURL_IGNORE_DEPRECATION(x);. This doesn't really work well
with how the macro is defined, and I'm not sure how we should best handle that
(the attached makes the style what pgindent wants, with the semicolon
returned).

The oauth_validator test module needs to load Makefile.global before exporting
the symbols from there. I also removed the placeholder regress test, which did
nothing, and turned diag() calls into note() calls to keep them from
cluttering the output.

There is a first stab at documenting the validator module API, with more to
come (it doesn't compile right now).

It contains a pgindent and pgperltidy run to keep things as close to final
form as we can, in order to catch things like the curl deprecation macro
mentioned above early.

> What could be the next steps to keep this moving along, other than stare at the remaining patches until we're content with them? ;-)

I'm in the "stare at things" stage now to try and get this into the tree =)

To further pick away at this huge patch, I propose merging the SASL message
length hunk, which can be extracted separately. The attached .txt (to keep the
CFBot from poking at it) contains a diff which can be committed ahead of the
rest of this patch, to make it a tad smaller and to keep the history of that
change a bit clearer.

--
Daniel Gustafsson

Attachments:

v29-0004-Review-comments-2024-09-11.patch (application/octet-stream)
From 37259ae39f6ee4ddad2373bc729ee3560db42a85 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Wed, 11 Sep 2024 15:21:40 +0200
Subject: [PATCH v29 4/4] Review comments 2024-09-11

---
 configure                                     |  5 +
 configure.ac                                  |  4 +
 doc/src/sgml/oauth-validators.sgml            | 91 ++++++++++++++++++-
 src/backend/libpq/auth-oauth.c                |  3 +-
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 81 +++++++++--------
 src/test/modules/oauth_validator/Makefile     | 18 +++-
 .../oauth_validator/expected/validator.out    |  6 --
 .../modules/oauth_validator/sql/validator.sql |  1 -
 src/test/modules/oauth_validator/validator.c  |  6 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  | 10 +-
 10 files changed, 163 insertions(+), 62 deletions(-)
 delete mode 100644 src/test/modules/oauth_validator/expected/validator.out
 delete mode 100644 src/test/modules/oauth_validator/sql/validator.sql

diff --git a/configure b/configure
index bf95091074..8bfc3c2215 100755
--- a/configure
+++ b/configure
@@ -8533,6 +8533,11 @@ $as_echo "#define USE_OAUTH 1" >>confdefs.h
 
 $as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
 
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
 elif test x"$with_oauth" != x"no"; then
   as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
 fi
diff --git a/configure.ac b/configure.ac
index 0faa566579..7e3a74527a 100644
--- a/configure.ac
+++ b/configure.ac
@@ -932,6 +932,10 @@ fi
 if test x"$with_oauth" = x"curl"; then
   AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
   AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
 elif test x"$with_oauth" != x"no"; then
   AC_MSG_ERROR([--with-oauth must specify curl])
 fi
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index 3c7884baf9..75cd9fc557 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -2,8 +2,95 @@
 
 <chapter id="oauth-validators">
  <title>Implementing OAuth Validator Modules</title>
-
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
  <para>
-  TODO
+  OAuth validation modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
  </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared library
+   with the <xref linkend="guc-oauth-validator_library"/>'s name as the library
+   base name. The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the user's
+   authentication request.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
 </chapter>
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index ec1418c3fc..f2f8fe81e2 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -442,7 +442,8 @@ parse_kvpairs_for_auth(char **input)
 			 errmsg("malformed OAUTHBEARER message"),
 			 errdetail("Message did not contain a final terminator.")));
 
-	return NULL;				/* unreachable */
+	pg_unreachable();
+	return NULL;
 }
 
 static void
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 0e52218422..c79329e98a 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -622,8 +622,8 @@ check_content_type(struct async_ctx *actx, const char *type)
 	}
 
 	/*
-	 * We need to perform a length limited comparison and not compare the whole
-	 * string.
+	 * We need to perform a length limited comparison and not compare the
+	 * whole string.
 	 */
 	if (pg_strncasecmp(content_type, type, type_len) != 0)
 		goto fail;
@@ -645,7 +645,7 @@ check_content_type(struct async_ctx *actx, const char *type)
 			case ';':
 				return true;	/* success! */
 
-			/* HTTP optional whitespace allows only spaces and htabs. */
+				/* HTTP optional whitespace allows only spaces and htabs. */
 			case ' ':
 			case '\t':
 				break;
@@ -817,8 +817,8 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
 
 		/*
-		 * Some services (Google, Azure) spell verification_uri differently. We
-		 * accept either.
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
 		 */
 		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
 
@@ -1165,11 +1165,11 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 	struct async_ctx *actx = ctx;
 
 	/*
-	 * TODO: maybe just signal drive_request() to immediately call back in
-	 * the (timeout == 0) case?
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
 	 */
 	if (!set_timer(actx, timeout))
-		return -1; /* actx_error already called */
+		return -1;				/* actx_error already called */
 
 	return 0;
 }
@@ -1184,7 +1184,7 @@ static int
 debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
 			   void *clientp)
 {
-	const char * const end = data + size;
+	const char *const end = data + size;
 	const char *prefix;
 
 	/* Prefixes are modeled off of the default libcurl debug output. */
@@ -1194,12 +1194,12 @@ debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
 			prefix = "*";
 			break;
 
-		case CURLINFO_HEADER_IN: /* fall through */
+		case CURLINFO_HEADER_IN:	/* fall through */
 		case CURLINFO_DATA_IN:
 			prefix = "<";
 			break;
 
-		case CURLINFO_HEADER_OUT: /* fall through */
+		case CURLINFO_HEADER_OUT:	/* fall through */
 		case CURLINFO_DATA_OUT:
 			prefix = ">";
 			break;
@@ -1296,8 +1296,8 @@ setup_curl_handles(struct async_ctx *actx)
 	{
 		/*
 		 * Set a callback for retrieving error information from libcurl, the
-		 * function only takes effect when CURLOPT_VERBOSE has been set so make
-		 * sure the order is kept.
+		 * function only takes effect when CURLOPT_VERBOSE has been set so
+		 * make sure the order is kept.
 		 */
 		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
 		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
@@ -1309,17 +1309,17 @@ setup_curl_handles(struct async_ctx *actx)
 	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
 	 * intended for testing only.)
 	 *
-	 * There's a bit of unfortunate complexity around the choice of CURLoption.
-	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement
-	 * didn't show up until relatively recently.
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
 	 */
 	{
 #if CURL_AT_LEAST_VERSION(7, 85, 0)
-		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const		CURLoption popt = CURLOPT_PROTOCOLS_STR;
 		const char *protos = "https";
-		const char * const unsafe = "https,http";
+		const char *const unsafe = "https,http";
 #else
-		const CURLoption popt = CURLOPT_PROTOCOLS;
+		const		CURLoption popt = CURLOPT_PROTOCOLS;
 		long		protos = CURLPROTO_HTTPS;
 		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
 #endif
@@ -1408,8 +1408,8 @@ start_request(struct async_ctx *actx)
 	}
 
 	/*
-	 * actx->running tracks the number of running handles, so we can immediately
-	 * call back if no waiting is needed.
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
 	 *
 	 * Even though this is nominally an asynchronous process, there are some
 	 * operations that can synchronously fail by this point (e.g. connections
@@ -1452,23 +1452,23 @@ drive_request(struct async_ctx *actx)
 		/*
 		 * There's an async request in progress. Pump the multi handle.
 		 *
-		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
-		 * inefficient and pointless if your event loop has already handed you
-		 * the exact sockets that are ready. But that's not our use case --
-		 * our client has no way to tell us which sockets are ready. (They don't
-		 * even know there are sockets to begin with.)
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
 		 *
 		 * We can grab the list of triggered events from the multiplexer
 		 * ourselves, but that's effectively what curl_multi_socket_all() is
 		 * going to do... so it appears to be exactly the API we need.
 		 *
 		 * Ignore the deprecation for now. This needs a followup on
-		 * curl-library@, to make sure we're not shooting ourselves in the foot
-		 * in some other way.
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
 		 */
 		CURL_IGNORE_DEPRECATION(
-			err = curl_multi_socket_all(actx->curlm, &actx->running);
-		)
+								err = curl_multi_socket_all(actx->curlm, &actx->running);
+			)
 
 		if (err)
 		{
@@ -1915,8 +1915,8 @@ handle_token_response(struct async_ctx *actx, char **token)
 	}
 
 	/*
-	 * authorization_pending and slow_down are the only
-	 * acceptable errors; anything else and we bail.
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
 	 */
 	err = &tok.err;
 	if (strcmp(err->error, "authorization_pending") != 0 &&
@@ -1930,8 +1930,8 @@ handle_token_response(struct async_ctx *actx, char **token)
 	}
 
 	/*
-	 * A slow_down error requires us to permanently increase
-	 * our retry interval by five seconds. RFC 8628, Sec. 3.5.
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
 	 */
 	if (strcmp(err->error, "slow_down") == 0)
 	{
@@ -2102,8 +2102,8 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 
 		/*
 		 * Each case here must ensure that actx->running is set while we're
-		 * waiting on some asynchronous work. Most cases rely on start_request()
-		 * to do that for them.
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
 		 */
 		switch (actx->step)
 		{
@@ -2160,7 +2160,7 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 				}
 
 				if (state->token)
-					break; /* done! */
+					break;		/* done! */
 
 				/*
 				 * Wait for the required interval before issuing the next
@@ -2170,10 +2170,11 @@ pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
 					goto error_return;
 
 #ifdef HAVE_SYS_EPOLL_H
+
 				/*
-				 * No Curl requests are running, so we can simplify by
-				 * having the client wait directly on the timerfd rather
-				 * than the multiplexer. (This isn't possible for kqueue.)
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
 				 */
 				*altsock = actx->timerfd;
 #endif
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 655ce75796..f5028f2e52 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -1,5 +1,13 @@
-export PYTHON
-export with_oauth
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
 
 MODULES = validator
 PGFILEDESC = "validator - test OAuth validator module"
@@ -8,8 +16,6 @@ NO_INSTALLCHECK = 1
 
 TAP_TESTS = 1
 
-REGRESS = validator
-
 ifdef USE_PGXS
 PG_CONFIG = pg_config
 PGXS := $(shell $(PG_CONFIG) --pgxs)
@@ -19,4 +25,8 @@ subdir = src/test/modules/oauth_validator
 top_builddir = ../../../..
 include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+
 endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
deleted file mode 100644
index 360caa2cb3..0000000000
--- a/src/test/modules/oauth_validator/expected/validator.out
+++ /dev/null
@@ -1,6 +0,0 @@
-SELECT 1;
- ?column? 
-----------
-        1
-(1 row)
-
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
deleted file mode 100644
index e0ac49d1ec..0000000000
--- a/src/test/modules/oauth_validator/sql/validator.sql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT 1;
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index 7b4dc9c494..c41dd07132 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -22,9 +22,9 @@ PG_MODULE_MAGIC;
 
 static void validator_startup(ValidatorModuleState *state);
 static void validator_shutdown(ValidatorModuleState *state);
-static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
-											  const char *token,
-											  const char *role);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
 
 static const OAuthValidatorCallbacks validator_callbacks = {
 	.startup_cb = validator_startup,
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
index abdff5a3c3..7bf4e4a03c 100644
--- a/src/test/perl/PostgreSQL/Test/OAuthServer.pm
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -34,25 +34,25 @@ sub run
 	my $port;
 
 	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
-		or die "failed to start OAuth server: $!";
+	  or die "failed to start OAuth server: $!";
 
-	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	read($read_fh, $port, 7) or die "failed to read port number: $!";
 	chomp $port;
 	die "server did not advertise a valid port"
-		unless Scalar::Util::looks_like_number($port);
+	  unless Scalar::Util::looks_like_number($port);
 
 	$self->{'pid'} = $pid;
 	$self->{'port'} = $port;
 	$self->{'child'} = $read_fh;
 
-	diag("OAuth provider (PID $pid) is listening on port $port\n");
+	note("OAuth provider (PID $pid) is listening on port $port\n");
 }
 
 sub stop
 {
 	my $self = shift;
 
-	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
 
 	kill(15, $self->{'pid'});
 	$self->{'pid'} = undef;
-- 
2.39.3 (Apple Git-146)

v29-0003-Review-comments.patch (application/octet-stream)
From ed925582cbbe42f3c65e076df57486ea7f2180b5 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v29 3/4] Review comments

Fixes and tidy-ups following a review of v21. A few of the items
(listed in no specific order):

* Implement a version check for libcurl in autoconf, the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/backend/libpq/auth-oauth.c            | 22 ++++----
 src/interfaces/libpq/fe-auth-oauth-curl.c | 66 +++++++++++++++--------
 2 files changed, 57 insertions(+), 31 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 2a0d74a079..ec1418c3fc 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -533,7 +533,9 @@ validate_token_format(const char *header)
 	if (!header || strlen(header) <= 7)
 	{
 		ereport(COMMERROR,
-				(errmsg("malformed OAuth bearer token 1")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is less than 8 bytes."));
 		return NULL;
 	}
 
@@ -551,9 +553,9 @@ validate_token_format(const char *header)
 	if (!*token)
 	{
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 2"),
-				 errdetail("Bearer token is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
 		return NULL;
 	}
 
@@ -573,9 +575,9 @@ validate_token_format(const char *header)
 		 * of someone's password into the logs.
 		 */
 		ereport(COMMERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAuth bearer token 3"),
-				 errdetail("Bearer token is not in the correct format.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
 		return NULL;
 	}
 
@@ -617,10 +619,10 @@ validate(Port *port, const char *auth)
 	/* Make sure the validator authenticated the user. */
 	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
 	{
-		/* TODO: use logdetail; reduce message duplication */
 		ereport(LOG,
-				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
-						port->user_name)));
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
 		return false;
 	}
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index d9c9fc6cf9..0e52218422 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -31,6 +31,8 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
 /*
  * Parsed JSON Representations
  *
@@ -1334,7 +1336,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * pretty strict when it comes to provider behavior, so we have to check
 	 * what comes back anyway.)
 	 */
-	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
 
 	return true;
@@ -1356,9 +1363,19 @@ append_data(char *buf, size_t size, size_t nmemb, void *userdata)
 	PQExpBuffer resp = userdata;
 	size_t		len = size * nmemb;
 
-	/* TODO: cap the maximum size */
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+		return 0;
+
+	/* The data passed from libcurl is not null-terminated */
 	appendBinaryPQExpBuffer(resp, buf, len);
-	/* TODO: check for broken buffer */
+
+	/*
+	 * Signal an error to abort the transfer if we ran out of memory while
+	 * accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+		return 0;
 
 	return len;
 }
@@ -1675,7 +1692,12 @@ start_device_authz(struct async_ctx *actx, PGconn *conn)
 	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
 	if (conn->oauth_scope)
 		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
-	/* TODO check for broken buffer */
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
 
 	/* Make our request. */
 	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
@@ -1817,32 +1839,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild returning 403, which
+	 * would violate the specification. For now we stick to the specification,
+	 * but we may have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
 /*
-- 
2.39.3 (Apple Git-146)
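
As an aside for reviewers: the bearer-token format being enforced here is the b64token ABNF from RFC 6750, Sec. 2.1. A self-contained sketch of the same check, outside Postgres (the function name is invented for illustration):

```c
#include <string.h>
#include <strings.h>			/* strncasecmp (POSIX) */

/*
 * Return a pointer to the b64token inside RFC 6750 "Bearer" credentials,
 * or NULL if the format is invalid:
 *
 *     b64token    = 1*( ALPHA / DIGIT /
 *                       "-" / "." / "_" / "~" / "+" / "/" ) *"="
 *     credentials = "Bearer" 1*SP b64token
 *
 * The scheme is matched case-insensitively, per RFC 7235 Sec. 2.1.
 */
const char *
bearer_token(const char *auth)
{
	static const char allowed[] =
		"abcdefghijklmnopqrstuvwxyz"
		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
		"0123456789-._~+/";
	const char *token;
	size_t		span;

	if (auth == NULL || strncasecmp(auth, "Bearer ", 7) != 0)
		return NULL;

	token = auth + 7;
	while (*token == ' ')		/* swallow additional spaces */
		token++;
	if (*token == '\0')			/* tokens must not be empty */
		return NULL;

	/* Only allowed characters, with optional trailing '=' padding. */
	span = strspn(token, allowed);
	while (token[span] == '=')
		span++;

	return (token[span] == '\0') ? token : NULL;
}
```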

v29-0002-backend-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v29-0002-backend-add-OAUTHBEARER-SASL-mechanism.patch; x-unix-mode=0644Download
From 3c90b1c31922fc66d46a6b38e8cb62896217febc Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v29 2/4] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
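
For illustration, a pg_hba.conf line combining the issuer, scope, and map options might look like this (the issuer URL and map name are invented):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://example.com" scope="openid email" map=oauthmap
```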

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |   9 +
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 666 ++++++++++++++++++
 src/backend/libpq/auth-sasl.c                 |  10 +-
 src/backend/libpq/auth-scram.c                |   4 +-
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/include/libpq/sasl.h                      |  11 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  22 +
 .../oauth_validator/expected/validator.out    |   6 +
 src/test/modules/oauth_validator/meson.build  |  37 +
 .../modules/oauth_validator/sql/validator.sql |   1 +
 .../modules/oauth_validator/t/001_server.pl   | 187 +++++
 .../modules/oauth_validator/t/oauth_server.py | 270 +++++++
 src/test/modules/oauth_validator/validator.c  |  82 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 31 files changed, 1554 insertions(+), 46 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/expected/validator.out
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/sql/validator.sql
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..3c7884baf9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,9 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+
+ <para>
+  TODO
+ </para>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ec9f90e283..bfb73991e7 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -263,6 +263,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..2a0d74a079
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	return NULL;				/* unreachable */
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* If the token is empty or simply too short to be correct */
+	if (!header || strlen(header) <= 7)
+	{
+		ereport(COMMERROR,
+				(errmsg("malformed OAuth bearer token")));
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+		return NULL;
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is empty.")));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAuth bearer token"),
+				 errdetail("Bearer token is not in the correct format.")));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		/* TODO: use logdetail; reduce message duplication */
+		ereport(LOG,
+				(errmsg("OAuth bearer authentication failed for user \"%s\": validator provided no identity",
+						port->user_name)));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..735fd05373 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2066,8 +2069,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2454,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
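With the options parsed above, pg_hba.conf entries might look like the following sketch (issuer and scope values are illustrative; the TAP test below uses the same shape against a local test server):

```
# Authentication via the usermap: validator identifies the user, pg_ident decides.
local   all   test   oauth   issuer="https://127.0.0.1:8000"  scope="openid postgres"

# Validator is the authorization authority; the usermap is bypassed.
local   all   all    oauth   issuer="https://127.0.0.1:8000"  scope="openid postgres"  trust_validator_authz=1
```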
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 686309db58..4075a6c95b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 435abee56a..d9c9fc6cf9 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -143,7 +143,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1864,6 +1864,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1892,13 +1895,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * acceptable errors; anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..655ce75796
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,22 @@
+export PYTHON
+export with_oauth
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+REGRESS = validator
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/oauth_validator/expected/validator.out b/src/test/modules/oauth_validator/expected/validator.out
new file mode 100644
index 0000000000..360caa2cb3
--- /dev/null
+++ b/src/test/modules/oauth_validator/expected/validator.out
@@ -0,0 +1,6 @@
+SELECT 1;
+ ?column? 
+----------
+        1
+(1 row)
+
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..3db2ddea1c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,37 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'regress': {
+    'sql': [
+      'validator',
+    ],
+  },
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/sql/validator.sql b/src/test/modules/oauth_validator/sql/validator.sql
new file mode 100644
index 0000000000..e0ac49d1ec
--- /dev/null
+++ b/src/test/modules/oauth_validator/sql/validator.sql
@@ -0,0 +1 @@
+SELECT 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..16ee8acd8f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,187 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..b17198302b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,270 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "content_type" in self._test_params:
+            return self._test_params["content_type"]
+
+        return "application/json"
+
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        if self._should_modify() and "interval" in self._test_params:
+            return self._test_params["interval"]
+
+        return 0
+
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "retry_code" in self._test_params:
+            return self._test_params["retry_code"]
+
+        return "authorization_pending"
+
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        if self._should_modify() and "uri_spelling" in self._test_params:
+            return self._test_params["uri_spelling"]
+
+        return "verification_uri"
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type())
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling(): uri,
+            "expires_in": 5,
+        }
+
+        interval = self._interval()
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code()}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return {
+            "access_token": token,
+            "token_type": "bearer",
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..7b4dc9c494
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,82 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult * validate_token(ValidatorModuleState *state,
+											  const char *token,
+											  const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+void
+_PG_init(void)
+{
+	/* no-op */
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 143dc8c101..566ec1d5ca 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2397,6 +2397,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2440,7 +2445,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..abdff5a3c3
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+		or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+		unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	diag("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	diag("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 105c275bfb..6a323c3e77 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1720,6 +1720,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3066,6 +3067,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleResult
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
@@ -3660,6 +3663,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.39.3 (Apple Git-146)

v29-0001-libpq-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v29-0001-libpq-add-OAUTHBEARER-SASL-mechanism.patch; x-unix-mode=0644Download
From a1a14356677aa5ee138f1c78a50e6d3c00cb18fc Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Wed, 11 Sep 2024 09:41:29 +0200
Subject: [PATCH v29 1/4] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2222 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  659 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   85 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   77 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/typedefs.list          |   10 +
 25 files changed, 3577 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 53c8a1f2ba..bf95091074 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8493,6 +8496,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -13011,6 +13060,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -14036,6 +14169,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 6a35b2880b..0faa566579 100644
--- a/configure.ac
+++ b/configure.ac
@@ -920,6 +920,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1398,6 +1418,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1588,6 +1613,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 783e8e750b..950b6526a1 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2335,6 +2335,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9958,6 +9995,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index 4764b09266..9419d9e23e 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3071,6 +3100,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3743,6 +3773,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 38006367a4..33a2135312 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -712,6 +715,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..435abee56a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2222 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
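+
+/*
+ * As an example (the field and variable names here are hypothetical), a
+ * response like {"foo": "bar", "baz": ["a", "b"]} could be parsed with the
+ * following field definitions:
+ *
+ *     char       *foo = NULL;
+ *     struct curl_slist *baz = NULL;
+ *
+ *     struct json_field fields[] = {
+ *         {"foo", JSON_TOKEN_STRING, {&foo}, REQUIRED},
+ *         {"baz", JSON_TOKEN_ARRAY_START, {.array = &baz}, OPTIONAL},
+ *         {0},
+ *     };
+ */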
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison against the expected type rather
+	 * than comparing the whole string, since the reported Content-Type may
+	 * be followed by media type parameters.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			/* HTTP optional whitespace allows only spaces and htabs. */
+			case ' ':
+			case '\t':
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
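+
+/*
+ * For example, given an expected type of "application/json", the check above
+ * accepts "application/json" and "application/json; charset=utf-8", but
+ * rejects "text/json" and "application/jsonp".
+ */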
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
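+
+/*
+ * For example (outside of debug mode): an interval of "0.5" is rounded up to
+ * 1 second, "0" is clamped to 1 second, and "5" remains 5 seconds.
+ */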
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently. We
+		 * accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in
+	 * the (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1; /* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char * const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN: /* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT: /* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback takes effect only when CURLOPT_VERBOSE has been set, so
+		 * keep these two options in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of CURLoption.
+	 * CURLOPT_PROTOCOLS is deprecated in modern Curls, but its replacement
+	 * didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char * const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");	/* TODO: check result */
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	PQExpBuffer resp = userdata;
+	size_t		len = size * nmemb;
+
+	/* TODO: cap the maximum size */
+	appendBinaryPQExpBuffer(resp, buf, len);
+	/* TODO: check for broken buffer */
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, &actx->work_data, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of in-progress handles, so the caller
+	 * can continue immediately if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, define it
+ * as a transparent wrapper.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our clients have no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the foot
+		 * in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			actx_error_str(actx, curl_easy_strerror(msg->data.result));
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports the device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf. The provider responds with
+ * the nonces we need to poll the request status later; we'll grab those in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only
+	 * acceptable errors; anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		if (err->error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase
+	 * our retry interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		fprintf(stderr, "Visit %s and enter the code: %s\n",
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on start_request()
+		 * to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break; /* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+				/*
+				 * No Curl requests are running, so we can simplify by
+				 * having the client wait directly on the timerfd rather
+				 * than the multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f943a31cc0
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,659 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so.
+		 */
+		if (conn->oauth_discovery_uri)
+			conn->oauth_want_retry = true;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/* Use our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
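(For reviewers: the derivation done by derive_discovery_uri() above is just issuer URL plus the OIDC .well-known suffix. A minimal standalone sketch, with an illustrative function name and no libpq dependencies, might look like this; like the patch, it does not special-case a trailing slash on the issuer.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical standalone equivalent of derive_discovery_uri(): append the
 * OIDC discovery path to a caller-supplied issuer URL. Returns a malloc'd
 * string, or NULL on allocation failure.
 */
static char *
well_known_discovery_uri(const char *issuer)
{
	const char *suffix = "/.well-known/openid-configuration";
	size_t		len = strlen(issuer) + strlen(suffix) + 1;
	char	   *uri = malloc(len);

	if (!uri)
		return NULL;
	snprintf(uri, len, "%s%s", issuer, suffix);
	return uri;
}
```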
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
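(For reviewers: the PQgetAuthDataHook()/PQsetAuthDataHook() pair above implies chain-style hooks, with PQdefaultAuthDataHook() returning 0 for "unhandled". A sketch of a client-side hook following that convention, using stand-in typedefs so it compiles without the patched libpq headers:)

```c
#include <stddef.h>

/* Minimal stand-ins so this sketch compiles without libpq headers;
 * in a real client these definitions come from libpq-fe.h. */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN,
} PGAuthData;
typedef struct PGconn PGconn;
typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);

static PQauthDataHook_type previous_hook;	/* saved via PQgetAuthDataHook() */

/*
 * A hook that claims only bearer-token requests (returning nonzero) and
 * delegates anything it doesn't recognize to the previously installed hook,
 * falling back to "unhandled" (zero) at the end of the chain.
 */
static int
my_auth_data_hook(PGAuthData type, PGconn *conn, void *data)
{
	if (type == PQAUTHDATA_OAUTH_BEARER_TOKEN)
		return 1;				/* we would fill in the request here */
	return previous_hook ? previous_hook(type, conn, data) : 0;
}
```

Installation would presumably be `previous_hook = PQgetAuthDataHook(); PQsetAuthDataHook(my_auth_data_hook);` at application startup.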
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d5a72587d2..8669aef76d 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,16 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry)
+					{
+						/* TODO: only allow retry once */
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3912,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3959,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4675,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4793,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7209,6 +7290,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..2b79f107f4 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -60,6 +60,10 @@ extern "C"
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
 
+/* Features added in PostgreSQL v18: */
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
+
 /*
  * Option flags for PQcopyResult
  */
@@ -103,6 +107,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +190,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +730,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 9579f80353..a86259b65d 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e9ebddde24..105c275bfb 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,8 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
 CV
 CachedExpression
 CachedPlan
@@ -1716,6 +1719,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1780,6 +1784,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1940,11 +1945,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3451,6 +3459,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

Attachment: saslmsglength.txt (text/plain)
diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
#124Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#123)
Re: [PoC] Federated Authn/z with OAUTHBEARER

(Thanks for the commit, Peter!)

On Wed, Sep 11, 2024 at 6:44 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 11 Sep 2024, at 09:37, Peter Eisentraut <peter@eisentraut.org> wrote:

Is there any sense in dealing with the libpq and backend patches separately in sequence, or is this split just for ease of handling?

I think it's just to make reviewing a bit easier. At this point I think they can
be merged together, it's mostly out of historic reasons IIUC since the patchset
earlier on supported more than one library.

I can definitely do that (and yeah, it was to make the review slightly
less daunting). The server side could potentially be committed
independently, if you want to parallelize a bit, but it'd have to be
torn back out if the libpq stuff didn't land in time.

(I suppose the 0004 "review comments" patch should be folded into the respective other patches?)

Yes. I'm using that patch as a holding area while I write tests for
the hunks, and then moving them backwards.

I added a warning to autoconf in case --with-oauth is used without --with-python
since this combination will error out in running the tests. Might be
superfluous but I had an embarrassingly long headscratcher myself as to why the
tests kept failing =)

Whoops, sorry. I guess we should just skip them if Python isn't there?

CURL_IGNORE_DEPRECATION(x;) broke pgindent, it needs to keep the semicolon on
the outside like CURL_IGNORE_DEPRECATION(x);. This doesn't really work well
with how the macro is defined, not sure how we should handle that best (the
attached makes the style as per how pgindent wants it with the semicolon
returned).

Ugh... maybe a case for a pre_indent rule in pgindent?

The oauth_validator test module needs to load Makefile.global before exporting
the symbols from there.

Hm. Why was that passing the CI, though...?

There is a first stab at documenting the validator module API; more to come (it
doesn't compile right now).

It contains a pgindent and pgperltidy run to keep things as close to in final
sync as we can to catch things like the curl deprecation macro mentioned above
early.

Thanks!

What could be the next steps to keep this moving along, other than stare at the remaining patches until we're content with them? ;-)

I'm in the "stare at things" stage now to try and get this into the tree =)

Yeah, and I still owe you all an updated roadmap.

While I fix up the tests, I've also been picking away at the JSON
encoding problem that was mentioned in [1]; the recent SASLprep fix
was fallout from that, since I'm planning to pull in pieces of its
UTF-8 validation. I will eventually want to fuzz the heck out of this.

To further pick away at this huge patch I propose to merge the SASL message
length hunk which can be extracted separately. The attached .txt (to keep the
CFBot from poking at it) contains a diff which can be committed ahead of the
rest of this patch to make it a tad smaller and to keep the history of that
change a bit clearer.

LGTM!

--

Peter asked me if there were plans to provide a "standard" validator
module, say as part of contrib. The tricky thing is that Bearer
validation is issuer-specific, and many providers give you an opaque
token that you're not supposed to introspect at all.

We could use token introspection (RFC 7662) for online verification,
but last I looked at it, no one had actually implemented those
endpoints. For offline verification, I think the best we could do
would be to provide a generic JWT Profile (RFC 9068) validator, but
again I don't know if anyone is actually providing those token formats
in practice. I'm inclined to push that out into the future.

Thanks,
--Jacob

[1]: /messages/by-id/ZjxQnOD1OoCkEeMN@paquier.xyz

#125Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#124)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Sep 11, 2024 at 3:54 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Yeah, and I still owe you all an updated roadmap.

Okay, here goes. New reviewers: start here!

== What is This? ==

OAuth 2.0 is a way for a trusted third party (a "provider") to tell a
server whether a client on the other end of the line is allowed to do
something. This patchset adds OAuth support to libpq with libcurl,
provides a server-side API so that extension modules can add support
for specific OAuth providers, and extends our SASL support to carry
the OAuth access tokens over the OAUTHBEARER mechanism.

Most OAuth clients use a web browser to perform the third-party
handshake. (These are your "Okta logins", "sign in with XXX", etc.)
But there are plenty of people who use psql without a local browser,
and invoking a browser safely across all supported platforms is
actually surprisingly fraught. So this patchset implements something
called device authorization, where the client will display a link and
a code, and then you can log in on whatever device is convenient for
you. Once you've told your provider that you trust libpq to connect to
Postgres on your behalf, it'll give libpq an access token, and libpq
will forward that on to the server.

== How This Fits, or: The Sales Pitch ==

The most popular third-party auth methods we have today are probably
the Kerberos family (AD/GSS/SSPI) and LDAP. If you're not already in
an MS ecosystem, it's unlikely that you're using the former. And users
of the latter are, in my experience, more-or-less resigned to its use,
in spite of LDAP's architectural security problems and the fact that
you have to run weird synchronization scripts to tell Postgres what
certain users are allowed to do.

OAuth provides a decently mature and widely-deployed third option. You
don't have to be running the infrastructure yourself, as long as you
have a provider you trust. If you are running your own infrastructure
(or if your provider is configurable), the tokens being passed around
can carry org-specific user privileges, so that Postgres can figure
out who's allowed to do what without out-of-band synchronization
scripts. And those access tokens are a straight upgrade over
passwords: even if they're somehow stolen, they are time-limited, they
are optionally revocable, and they can be scoped to specific actions.

== Extension Points ==

This patchset provides several points of customization:

Server-side validation is farmed out entirely to an extension, which
we do not provide. (Each OAuth provider is free to come up with its
own proprietary method of verifying its access tokens, and so far the
big players have absolutely not standardized.) Depending on the
provider, the extension may need to contact an external server to see
what the token has been authorized to do, or it may be able to do that
offline using signing keys and an agreed-upon token format.

The client driver using libpq may replace the device authorization
prompt (which by default is done on standard error), for example to
move it into an existing GUI, display a scannable QR code instead of a
link, and so on.

The driver may also replace the entire OAuth flow. For example, a
client that already interacts with browsers may be able to use one of
the more standard web-based methods to get an access token. And
clients attached to a service rather than an end user could use a more
straightforward server-to-server flow, with pre-established
credentials.

== Architecture ==

The client needs to speak HTTP, which is implemented entirely with
libcurl. Originally, I used another OAuth library for rapid
prototyping, but the quality just wasn't there and I ported the
implementation. An internal abstraction layer remains in the libpq
code, so if a better client library comes along, switching to it
shouldn't be too painful.

The client-side hooks all go through a single extension point, so that
we don't continually add entry points in the API for each new piece of
authentication data that a driver may be able to provide. If we wanted
to, we could potentially move the existing SSL passphrase hook into
that, or even handle password retries within libpq itself, but I don't
see any burning reason to do that now.

I wanted to make sure that OAuth could be dropped into existing
deployments without driver changes. (Drivers will probably *want* to
look at the extension hooks for better UX, but they shouldn't
necessarily *have* to.) That has driven several parts of the design.

Drivers using the async APIs should continue to work without blocking,
even during the long HTTP handshakes. So the new client code is
structured as a typical event-driven state machine (similar to
PQconnectPoll). The protocol machine hands off control to the OAuth
machine during authentication, without really needing to know how it
works, because the OAuth machine replaces the PQsocket with a
general-purpose multiplexer that handles all of the HTTP sockets and
events. Once that's completed, the OAuth machine hands control right
back and we return to the Postgres protocol on the wire.

This decision led to a major compromise: Windows client support is
nonexistent. Multiplexer handles exist in Windows (for example with
WSAEventSelect, IIUC), but last I checked they were completely
incompatible with Winsock select(), which means existing async-aware
drivers would fail. We could compromise by providing synchronous-only
support, or by cobbling together a socketpair plus thread pool (or
IOCP?), or simply by saying that existing Windows clients need a new
API other than PQsocket() to be able to work properly. None of those
approaches have been attempted yet, though.

== Areas of Concern ==

Here are the iffy things that a committer is signing up for:

The client implementation is roughly 3k lines, requiring domain
knowledge of Curl, HTTP, JSON, and OAuth, the specifications of which
are spread across several separate standards bodies. (And some big
providers ignore those anyway.)

The OAUTHBEARER mechanism is extensible, but not in the same way as
HTTP. So sometimes, it looks like people design new OAuth features
that rely heavily on HTTP and forget to "port" them over to SASL. That
may be a point of future frustration.

C is not really anyone's preferred language for implementing an
extensible authn/z protocol running on top of HTTP, and constant
vigilance is going to be required to maintain safety. What's more, we
don't really "trust" the endpoints we're talking to in the same way
that we normally trust our servers. It's a fairly hostile environment
for maintainers.

Along the same lines, our JSON implementation assumes some level of
trust in the JSON data -- which is true for the backend, and can be
assumed for a DBA running our utilities, but is absolutely not the
case for a libpq client downloading data from Some Server on the
Internet. I've been working to fuzz the implementation and there are a
few known problems registered in the CF already.

Curl is not a lightweight dependency by any means. Typically, libcurl
is configured with a wide variety of nice options, a tiny subset of
which we're actually going to use, but all that code (and its
transitive dependencies!) is going to arrive in our process anyway.
That might not be a lot of fun if you're not using OAuth.

It's possible that the application embedding libpq is also a direct
client of libcurl. We need to make sure we're not stomping on their
toes at any point.

== TODOs/Known Issues ==

The client does not deal with verification failure well at the moment;
it just keeps retrying with a new OAuth handshake.

Some people are not going to be okay with just contacting any web
server that Postgres tells them to. There's a more paranoid mode
sketched out that lets the connection string specify the trusted
issuer, but it's not complete.

The new code still needs to play well with orthogonal connection
options, like connect_timeout and require_auth.

The server does not deal well with multi-issuer setups yet. And you
only get one oauth_validator_library...

Harden, harden, harden. There are still a handful of inline TODOs
around double-checking certain pieces of the response before
continuing with the handshake. Servers should not be able to run our
recursive descent parser out of stack. And my JSON code is using
assertions too liberally, which will turn bugs into DoS vectors. I've
been working to fit a fuzzer into more and more places, and I'm hoping
to eventually drive it directly from the socket.

Documentation still needs to be filled in. (Thanks Daniel for your work here!)

== Future Features ==

There is no support for token caching (refresh or otherwise). Each new
connection needs a new approval, and the only way to change that for
v1 is to replace the entire flow. I think that's eventually going to
annoy someone. The question is, where do you persist it? Does that
need to be another extensibility point?

We already have pretty good support for client certificates, and it'd
be great if we could bind our tokens to those. That way, even if you
somehow steal the tokens, you can't do anything with them without the
private key! But the state of proof-of-possession in OAuth is an
absolute mess, involving at least three competing standards (Token
Binding, mTLS, DPoP). I don't know what's going to win.

--

Hope this helps! Next I'll be working to fold the patches together, as
discussed upthread.

Thanks,
--Jacob

#126Antonin Houska
ah@cybertec.at
In reply to: Jacob Champion (#124)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Peter asked me if there were plans to provide a "standard" validator
module, say as part of contrib. The tricky thing is that Bearer
validation is issuer-specific, and many providers give you an opaque
token that you're not supposed to introspect at all.

We could use token introspection (RFC 7662) for online verification,
but last I looked at it, no one had actually implemented those
endpoints. For offline verification, I think the best we could do
would be to provide a generic JWT Profile (RFC 9068) validator, but
again I don't know if anyone is actually providing those token formats
in practice. I'm inclined to push that out into the future.

Have you considered sending the token for validation to the server, like this

curl -X GET "https://www.googleapis.com/oauth2/v3/userinfo" -H "Authorization: Bearer $TOKEN"

and getting the userid (e.g. email address) from the response, as described in
[1]? (See the get_user_profile() function in web/pgadmin/authenticate/oauth2.py.)

[1]: https://www.oauth.com/oauth2-servers/signing-in-with-google/verifying-the-user-info/

--
Antonin Houska
Web: https://www.cybertec-postgresql.com

#127Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Antonin Houska (#126)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Sep 27, 2024 at 10:58 AM Antonin Houska <ah@cybertec.at> wrote:

Have you considered sending the token for validation to the server, like this

curl -X GET "https://www.googleapis.com/oauth2/v3/userinfo" -H "Authorization: Bearer $TOKEN"

In short, no, but I'm glad you asked. I think it's going to be a
common request, and I need to get better at explaining why it's not
safe, so we can document it clearly. Or else someone can point out
that I'm misunderstanding, which honestly would make all this much
easier and less complicated. I would love to be able to do it that
way.

We cannot, for the same reason libpq must send the server an access
token instead of an ID token. The /userinfo endpoint tells you who the
end user is, but it doesn't tell you whether the Bearer is actually
allowed to access the database. That difference is critical: it's
entirely possible for an end user to be authorized to access the
database, *and yet* the Bearer token may not actually carry that
authorization on their behalf. (In fact, the user may have actively
refused to give the Bearer that permission.) That's why people are so
pedantic about saying that OAuth is an authorization framework and not
an authentication framework.

To illustrate, think about all the third-party web services out there
that ask you to Sign In with Google. They ask Google for permission to
access your personal ID, and Google asks you if you're okay with that,
and you either allow or deny it. Now imagine that I ran one of those
services, and I decided to become evil. I could take my legitimately
acquired Bearer token -- which should only give me permission to query
your Google ID -- and send it to a Postgres database you're authorized
to access.

The server is supposed to introspect it, say, "hey, this token doesn't
give the bearer access to the database at all," and shut everything
down. For extra credit, the server could notice that the client ID
tied to the access token isn't even one that it recognizes! But if all
the server does is ask Google, "what's the email address associated
with this token's end user?", then it's about to make some very bad
decisions. The email address it gets back doesn't belong to Jacob the
Evil Bearer; it belongs to you.

Now, the token introspection endpoint I mentioned upthread should give
us the required information (scopes, etc.). But Google doesn't
implement that one. In fact they don't seem to have implemented custom
scopes at all in the years since I started work on this feature, which
makes me think that people are probably not going to be able to safely
log into Postgres using Google tokens. Hopefully there's some feature
buried somewhere that I haven't seen.

Let me know if that makes sense. (And again: I'd love to be proven
wrong. It would improve the reach of the feature considerably if I
am.)

Thanks,
--Jacob

#128Antonin Houska
ah@cybertec.at
In reply to: Jacob Champion (#127)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Fri, Sep 27, 2024 at 10:58 AM Antonin Houska <ah@cybertec.at> wrote:

Have you considered sending the token for validation to the server, like this

curl -X GET "https://www.googleapis.com/oauth2/v3/userinfo" -H "Authorization: Bearer $TOKEN"

In short, no, but I'm glad you asked. I think it's going to be a
common request, and I need to get better at explaining why it's not
safe, so we can document it clearly. Or else someone can point out
that I'm misunderstanding, which honestly would make all this much
easier and less complicated. I would love to be able to do it that
way.

We cannot, for the same reason libpq must send the server an access
token instead of an ID token. The /userinfo endpoint tells you who the
end user is, but it doesn't tell you whether the Bearer is actually
allowed to access the database. That difference is critical: it's
entirely possible for an end user to be authorized to access the
database, *and yet* the Bearer token may not actually carry that
authorization on their behalf. (In fact, the user may have actively
refused to give the Bearer that permission.)

That's why people are so pedantic about saying that OAuth is an
authorization framework and not an authentication framework.

This statement alone sounds as if you missed *authentication*, but you seem to
admit above that the /userinfo endpoint provides it ("tells you who the end
user is"). I agree that it does. My understanding is that this endpoint, as
well as the concept of "claims" and "scopes", is introduced by OpenID, which
is an *authentication* framework, although it's built on top of OAuth.

Regarding *authorization*, I agree that the bearer token may not contain
enough information to determine whether the owner of the token is allowed to
access the database. However, I consider database a special kind of
"application", which can handle authorization on its own. In this case, the
authorization can be controlled by (not) assigning the user the LOGIN
attribute, as well as by (not) granting it privileges on particular database
objects. In short, I think that *authentication* is all we need.

To illustrate, think about all the third-party web services out there
that ask you to Sign In with Google. They ask Google for permission to
access your personal ID, and Google asks you if you're okay with that,
and you either allow or deny it. Now imagine that I ran one of those
services, and I decided to become evil. I could take my legitimately
acquired Bearer token -- which should only give me permission to query
your Google ID -- and send it to a Postgres database you're authorized
to access.

The server is supposed to introspect it, say, "hey, this token doesn't
give the bearer access to the database at all," and shut everything
down. For extra credit, the server could notice that the client ID
tied to the access token isn't even one that it recognizes! But if all
the server does is ask Google, "what's the email address associated
with this token's end user?", then it's about to make some very bad
decisions. The email address it gets back doesn't belong to Jacob the
Evil Bearer; it belongs to you.

Are you sure you can legitimately acquire the bearer token containing my email
address? I think the email address returned by the /userinfo endpoint is one
of the standard claims [1]. Thus by returning the particular value of "email"
from the endpoint the identity provider asserts that the token owner does have
this address. (And that, if "email_verified" claim is "true", it spent some
effort to verify that the email address is controlled by that user.)

Now, the token introspection endpoint I mentioned upthread

Can you please point me to the particular message?

should give us the required information (scopes, etc.). But Google doesn't
implement that one. In fact they don't seem to have implemented custom
scopes at all in the years since I started work on this feature, which makes
me think that people are probably not going to be able to safely log into
Postgres using Google tokens. Hopefully there's some feature buried
somewhere that I haven't seen.

Let me know if that makes sense. (And again: I'd love to be proven
wrong. It would improve the reach of the feature considerably if I
am.)

Another question, assuming the token verification is resolved somehow:
wouldn't it be sufficient for the initial implementation if the client could
pass the bearer token to libpq in the connection string?

Obviously, one use case is that an application / web server which needs the
token to authenticate the user could eventually pass the token to the database
server. Thus, if users could authenticate to the database using their
individual ids, it would no longer be necessary to store a separate userid /
password for the application in a configuration file.

Also, if libpq accepted the bearer token via the connection string, it would
be possible to implement the authorization as a separate front-end application
(e.g. pg_oauth_login) rather than adding more complexity to libpq itself.

(I'm learning this stuff on-the-fly, so there might be something naive in my
comments.)

[1]: https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims

--
Antonin Houska
Web: https://www.cybertec-postgresql.com

#129Antonin Houska
ah@cybertec.at
In reply to: Antonin Houska (#128)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Antonin Houska <ah@cybertec.at> wrote:

Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Now, the token introspection endpoint I mentioned upthread

Can you please point me to the particular message?

Please ignore this dumb question. You probably referred to the email I was
responding to.

--
Antonin Houska
Web: https://www.cybertec-postgresql.com

#130Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Antonin Houska (#128)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Sep 30, 2024 at 6:38 AM Antonin Houska <ah@cybertec.at> wrote:

Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Fri, Sep 27, 2024 at 10:58 AM Antonin Houska <ah@cybertec.at> wrote:
That's why people are so pedantic about saying that OAuth is an
authorization framework and not an authentication framework.

This statement alone sounds as if you missed *authentication*, but you seem to
admit above that the /userinfo endpoint provides it ("tells you who the end
user is"). I agree that it does. My understanding is that this endpoint, as
well as the concept of "claims" and "scopes", is introduced by OpenID, which
is an *authentication* framework, although it's built on top of OAuth.

OpenID is an authentication framework, but it's generally focused on a
type of client known as a Relying Party. In the architecture of this
patchset, the Relying Party would be libpq, which has the option of
retrieving authentication claims from the provider. Unfortunately for
us, libpq has no use for those claims. It's not trying to authenticate
the user for its own purposes.

The Postgres server, on the other hand, is not a Relying Party. (It's
an OAuth resource server, in this architecture.) It's not performing
any of the OIDC flows, it's not talking to the end user and the
provider at the same time, and it is very restricted in its ability to
influence the client exchange via the SASL mechanism.

Regarding *authorization*, I agree that the bearer token may not contain
enough information to determine whether the owner of the token is allowed to
access the database. However, I consider database a special kind of
"application", which can handle authorization on its own. In this case, the
authorization can be controlled by (not) assigning the user the LOGIN
attribute, as well as by (not) granting it privileges on particular database
objects. In short, I think that *authentication* is all we need.

Authorizing the *end user's* access to the database using scopes is
optional. Authorizing the *bearer's* ability to connect on behalf of
the end user, however, is mandatory. Hopefully the below clarifies.

(I agree that most people probably want to use authentication, so that
the database can then make decisions based on HBA settings. OIDC is a
fine way to do that.)

Are you sure you can legitimately acquire the bearer token containing my email
address?

Yes. In general that's how OpenID-based "Sign in with <Service>"
works. All those third-party services are running around with tokens
that identify you, but unless they've asked for more abilities and
you've granted them the associated scopes, identifying you is all they
can do.

I think the email address returned by the /userinfo endpoint is one
of the standard claims [1]. Thus by returning the particular value of "email"
from the endpoint the identity provider asserts that the token owner does have
this address.

We agree that /userinfo gives authentication claims for the end user.
It's just insufficient for our use case.

For example, there are enterprise applications out there that will ask
for read access to your Google Calendar. If you're willing to grant
that, then you probably won't mind if those applications also know
your email address, but you probably do mind if they're suddenly able
to access your production databases just because you gave them your
email.

Put another way: if you log into Postgres using OAuth, and your
provider doesn't show you a big message saying "this application is
about to access *your* prod database using *your* identity; do you
want to allow that?", then your DBA has deployed a really dangerous
configuration. That's a critical protection feature you get from your
OAuth provider. Otherwise, what's stopping somebody else from setting
up their own malicious service to farm access tokens? All they'd have
to do is ask for your email.

Another question, assuming the token verification is resolved somehow:
wouldn't it be sufficient for the initial implementation if the client could
pass the bearer token to libpq in the connection string?

It was discussed wayyy upthread:

/messages/by-id/CAAWbhmhmBe9v3aCffz5j8Sg4HMWWkB5FvTDCSZ_Vh8E1fX91Gw@mail.gmail.com

Basically, at that point the entire implementation becomes an exercise
for the reader. I want to avoid that if possible. I'm not adamantly
opposed to it, but I think the client-side hook implementation is
going to be better for the use cases that have been discussed so far.

Also, if libpq accepted the bearer token via the connection string, it would
be possible to implement the authorization as a separate front-end application
(e.g. pg_oauth_login) rather than adding more complexity to libpq itself.

The application would still need to parse the server error response.
There was (a small) consensus at the time [1] that parsing error
messages for that purpose would be really unpleasant; hence the hook
architecture.

(I'm learning this stuff on-the-fly, so there might be something naive in my
comments.)

No worries! Please keep the questions coming; this OAuth architecture
is unintuitive, and I need to be able to defend it.

Thanks,
--Jacob

[1]: /messages/by-id/CACrwV54_euYe+v7bcLrxnje-JuM=KRX5azOcmmrXJ5qrffVZfg@mail.gmail.com

#131Antonin Houska
ah@cybertec.at
In reply to: Jacob Champion (#130)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Sep 30, 2024 at 6:38 AM Antonin Houska <ah@cybertec.at> wrote:

Are you sure you can legitimately acquire the bearer token containing my email
address?

Yes. In general that's how OpenID-based "Sign in with <Service>"
works. All those third-party services are running around with tokens
that identify you, but unless they've asked for more abilities and
you've granted them the associated scopes, identifying you is all they
can do.

I think the email address returned by the /userinfo endpoint is one
of the standard claims [1]. Thus by returning the particular value of "email"
from the endpoint the identity provider asserts that the token owner does have
this address.

We agree that /userinfo gives authentication claims for the end user.
It's just insufficient for our use case.

For example, there are enterprise applications out there that will ask
for read access to your Google Calendar. If you're willing to grant
that, then you probably won't mind if those applications also know
your email address, but you probably do mind if they're suddenly able
to access your production databases just because you gave them your
email.

Put another way: if you log into Postgres using OAuth, and your
provider doesn't show you a big message saying "this application is
about to access *your* prod database using *your* identity; do you
want to allow that?", then your DBA has deployed a really dangerous
configuration. That's a critical protection feature you get from your
OAuth provider. Otherwise, what's stopping somebody else from setting
up their own malicious service to farm access tokens? All they'd have
to do is ask for your email.

Perhaps I understand now. I use getmail [2] to retrieve email messages from my
Google account. What confused me is that the getmail application, although
installed on my workstation (and thus the bearer token it eventually gets
contains my email address), is "someone else" (in particular the "Relying
Party") from the perspective of the OpenID protocol. And the same applies
to "psql" in the context of your patch.

Thus, in addition to the email, we'd need special claims which authorize the
RPs to access the database and only the database. Does this sound correct?

(I'm learning this stuff on-the-fly, so there might be something naive in my
comments.)

No worries! Please keep the questions coming; this OAuth architecture
is unintuitive, and I need to be able to defend it.

I'd like to play with the code a bit and provide some review before or during
the next CF. That will probably generate some more questions.

[1]: /messages/by-id/CACrwV54_euYe+v7bcLrxnje-JuM=KRX5azOcmmrXJ5qrffVZfg@mail.gmail.com

[2]: https://github.com/getmail6/getmail6/

--
Antonin Houska
Web: https://www.cybertec-postgresql.com

#132Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Antonin Houska (#131)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Oct 8, 2024 at 3:46 AM Antonin Houska <ah@cybertec.at> wrote:

Perhaps I understand now. I use getmail [2] to retrieve email messages from my
Google account. What confused me is that the getmail application, although
installed on my workstation (and thus the bearer token it eventually gets
contains my email address), is "someone else" (in particular the "Relying
Party") from the perspective of the OpenID protocol. And the same applies
to "psql" in the context of your patch.

Thus, in addition to the email, we'd need special claims which authorize the
RPs to access the database and only the database. Does this sound correct?

Yes. (One nitpick: the "special claims" in this case are not OpenID
claims at all, but OAuth scopes. The HBA will be configured with the
list of scopes that the server requires, and it requests those from
the client during the SASL handshake.)
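To make that concrete, a server-side entry might eventually look something like the sketch below. The option names here are purely hypothetical (the pg_hba.conf syntax for this is still under discussion in this thread), so this only illustrates the idea of attaching a required scope list to an HBA rule:

```
# Hypothetical pg_hba.conf entry; option names are illustrative only.
# The server would request these scopes from the client during the
# OAUTHBEARER SASL exchange.
host  all  all  0.0.0.0/0  oauth  issuer="https://oauth.example.org"  scope="openid postgres"
```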

I'd like to play with the code a bit and provide some review before or during
the next CF. That will probably generate some more questions.

Thanks very much for the review!

--Jacob

#133Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#124)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi all,

Here's v30, which aims to fix the infinite-retry problem: you get a
maximum of two OAuth connections to the server, and we won't prompt
the user more than once per transport. (I still need to wrap my head
around the retry behavior during transport negotiation.)

On Wed, Sep 11, 2024 at 3:54 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

On Wed, Sep 11, 2024 at 6:44 AM Daniel Gustafsson <daniel@yesql.se> wrote:

I think it's just to make reviewing a bit easier. At this point I think they
can be merged together; it's mostly for historic reasons IIUC, since the
patchset earlier on supported more than one library.

I can definitely do that (and yeah, it was to make the review slightly
less daunting). The server side could potentially be committed
independently, if you want to parallelize a bit, but it'd have to be
torn back out if the libpq stuff didn't land in time.

I plan to do the combination in the near future, when I'm not making a
bunch of other changes at the same time.

(I suppose the 0004 "review comments" patch should be folded into the respective other patches?)

Yes. I'm using that patch as a holding area while I write tests for
the hunks, and then moving them backwards.

This is almost gone; just one piece remaining as of v30. Everything
else has been folded in with new tests.

CURL_IGNORE_DEPRECATION(x;) broke pgindent; it needs to keep the semicolon on
the outside, like CURL_IGNORE_DEPRECATION(x);. This doesn't really work well
with how the macro is defined, and I'm not sure how we should handle that best
(the attached makes the style as per how pgindent wants it, with the semicolon
returned).

Ugh... maybe a case for a pre_indent rule in pgindent?

I've taken a stab at a pre_indent rule that seems to work well enough
(though the regex itself is write-once-read-never).

There is a first stab at documenting the validator module API, more to come (it
doesn't compile right now).

It contains a pgindent and pgperltidy run to keep things as close to final
sync as we can, to catch things like the curl deprecation macro mentioned
above early.

The rest of your second comments patch has been incorporated now, with
the exception of the following hunk:

    - read($read_fh, $port, 7) // die "failed to read port number: $!";
    + read($read_fh, $port, 7) or die "failed to read port number: $!";

read() doesn't set $! unless it returns undef, according to the docs [1].

To further pick away at this huge patch I propose to merge the SASL message
length hunk which can be extracted separately.

I've pulled this out into 0001.

Thanks,
--Jacob

[1]: https://perldoc.perl.org/functions/read

Attachments:

v30-0001-Make-SASL-max-message-length-configurable.patch
From a2b4d75bf8311ff94d8693f6bd284fcdb29598d7 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 7 Oct 2024 16:56:26 -0700
Subject: [PATCH v30 1/5] Make SASL max message length configurable

The proposed OAUTHBEARER SASL mechanism will need to allow larger
messages in the exchange, since tokens are sent directly by the client.
Move this limit into the pg_be_sasl_mech struct so that it can be
changed per-mechanism.
---
 src/backend/libpq/auth-sasl.c  | 10 +---------
 src/backend/libpq/auth-scram.c |  4 +++-
 src/include/libpq/sasl.h       | 11 +++++++++++
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
-- 
2.34.1

v30-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch
From e339e7844e671ff70072e3b98cd37757d894e082 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Wed, 11 Sep 2024 09:41:29 +0200
Subject: [PATCH v30 2/5] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2258 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  666 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   86 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/pgindent               |   14 +
 src/tools/pgindent/typedefs.list          |   11 +
 26 files changed, 3634 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index c9577313e4..b27cee91a1 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12922,6 +12971,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13947,6 +14080,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 382984f594..ddf8a36f12 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1395,6 +1415,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1586,6 +1611,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..ffec0431e3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9961,6 +9998,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index 58e67975e8..7518520ec9 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3042,6 +3071,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3714,6 +3744,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 6c04347d4e..e01ab16704 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -712,6 +715,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..85ed65bcd7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2258 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* an array field */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, since media type parameters may follow the expected type.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
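The prefix-plus-parameters rule above can be exercised in isolation. Here is a hedged standalone sketch of the same matching logic, outside the patch's actx machinery (the name `media_type_matches` is illustrative, not part of the patch):

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/*
 * Sketch of check_content_type()'s matching rule: the value must equal the
 * expected type exactly, or be followed only by HTTP optional whitespace
 * (spaces/htabs) and a ';' that begins the media type parameters.
 */
static bool
media_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	if (content_type[type_len] == '\0')
		return true;			/* exact match */

	for (size_t i = type_len; content_type[i]; ++i)
	{
		switch (content_type[i])
		{
			case ';':
				return true;	/* parameters follow; accept */

			case ' ':
			case '\t':
				break;			/* HTTP optional whitespace */

			default:
				return false;
		}
	}

	return false;				/* trailing whitespace with no ';' */
}
```

Note that trailing whitespace with no semicolon is rejected, matching the patch's fall-through to `fail`.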
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
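The millisecond-to-timerfd conversion in the epoll path is a simple split, with one wrinkle for zero. A standalone sketch of just that conversion, writing into plain longs instead of a `struct itimerspec` so it stays platform-neutral (the helper name is illustrative):

```c
/*
 * Sketch of set_timer()'s timeout conversion: a negative timeout yields an
 * all-zero value (which disarms a timerfd), zero is mapped to a single
 * nanosecond (timerfd treats all-zero as "disarm", so "immediately" needs
 * a nonzero value), and positive timeouts split into seconds/nanoseconds.
 */
static void
timeout_to_timespec(long timeout_ms, long *sec, long *nsec)
{
	*sec = 0;
	*nsec = 0;

	if (timeout_ms < 0)
		return;					/* all-zero result disarms the timer */
	else if (timeout_ms == 0)
		*nsec = 1;				/* "immediately", as close as timerfd gets */
	else
	{
		*sec = timeout_ms / 1000;
		*nsec = (timeout_ms % 1000) * 1000000;
	}
}
```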
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
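The splitting loop in debug_callback() can be demonstrated on a plain buffer. This sketch reproduces the loop but appends the prefixed lines to a caller-supplied buffer instead of stderr, for testability (the function name `split_prefixed` is illustrative; the output buffer is assumed large enough):

```c
#include <stdio.h>
#include <string.h>

/*
 * Sketch of debug_callback()'s line splitting: emit one prefixed line per
 * '\n'-terminated chunk, appending a newline if the final chunk lacks one.
 */
static void
split_prefixed(const char *prefix, const char *data, size_t size,
			   char *out, size_t outlen)
{
	const char *end = data + size;

	out[0] = '\0';
	while (data < end)
	{
		size_t		len = end - data;
		const char *eol = memchr(data, '\n', len);

		if (eol)
			len = eol - data + 1;

		snprintf(out + strlen(out), outlen - strlen(out), "%s %.*s%s",
				 prefix, (int) len, data, eol ? "" : "\n");

		data += len;
	}
}
```

As in the patch, a single call covering multiple headers produces one prefixed output line per header.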
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No async DNS resolver; lookups may block. TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * keep that ordering.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl, which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of any single chunk
+ * is defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB and can only be
+ * changed by recompiling libcurl.
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per OIDC Discovery Sec. 3, the default is ["authorization_code",
+		 * "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
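The request builders above each carry a "TODO: url-encode" note. As a rough standalone illustration of what that TODO entails (in context, libcurl's curl_easy_escape() would be the natural tool; the helper name percent_encode here is hypothetical and not part of the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustration only: percent-encode a parameter value for an
 * application/x-www-form-urlencoded request body. Unreserved characters
 * (RFC 3986) pass through; everything else becomes %XX. Writes into buf
 * (of size buflen) and returns buf, or NULL if the result doesn't fit.
 */
static char *
percent_encode(const char *in, char *buf, size_t buflen)
{
	static const char *const hex = "0123456789ABCDEF";
	size_t		out = 0;

	for (; *in; in++)
	{
		unsigned char c = (unsigned char) *in;

		if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
			(c >= '0' && c <= '9') || strchr("-._~", c))
		{
			if (out + 1 >= buflen)
				return NULL;
			buf[out++] = c;
		}
		else
		{
			if (out + 3 >= buflen)
				return NULL;
			buf[out++] = '%';
			buf[out++] = hex[c >> 4];
			buf[out++] = hex[c & 0xF];
		}
	}

	buf[out] = '\0';
	return buf;
}
```

Without this step, a client_id or scope containing '&', '=', or '+' would corrupt the request body built with plain appendPQExpBuffer().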
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so we have to handle 400/401 here too.
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	if (response_code != 200)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		if (err.error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse device authorization";
+	if (!parse_device_authz(actx, &actx->authz))
+		return false;			/* error message already set */
+
+	return true;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+	}
+	else
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
+	 * response uses either 400 Bad Request or 401 Unauthorized.
+	 *
+	 * TODO: there are references online to 403 appearing in the wild...
+	 */
+	if (response_code != 200
+		&& response_code != 400
+		 /* && response_code != 401 TODO */ )
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+	}
+	else if (!parse_token_error(actx, &tok->err))
+		return false;
+
+	return true;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		if (err->error_description)
+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+
+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		fprintf(stderr, "Visit %s and enter the code: %s\n",
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..73718ac3b1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
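For reference, the message assembled above has the wire shape defined for OAUTHBEARER (RFC 7628): the GS2 header "n,,", then the auth key/value pair, delimited by 0x01 bytes (the kvsep macro). A standalone sketch, using a hypothetical helper with a fixed-size buffer instead of a PQExpBuffer:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustration only: build the OAUTHBEARER client initial response the way
 * client_initial_response() does, with 0x01 bytes delimiting the key/value
 * section. Returns the message length, or -1 if buf is too small.
 */
static int
build_initial_response(const char *token, char *buf, size_t buflen)
{
	/* Separate string literals keep "\x01" from swallowing the 'a' in "auth". */
	int			len = snprintf(buf, buflen, "n,," "\x01" "auth=%s" "\x01\x01",
							   token);

	if (len < 0 || (size_t) len >= buflen)
		return -1;

	return len;
}
```

Note that the token argument here is the full "Bearer ..." string produced by handle_token_response(), not the raw access token.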
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			/* Free any value stored earlier for a duplicated key. */
+			free(*ctx->target_field);
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+	{
+		/* Free any fields we managed to parse before hitting the error. */
+		free(ctx.status);
+		free(ctx.scope);
+		free(ctx.discovery_uri);
+		return false;
+	}
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	free(ctx.status);
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * Without a discovery URI, we have no way to request a token
+				 * ourselves. Send the server an empty token instead; its
+				 * error response will carry the discovery URI we need to
+				 * retry. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 9579f80353..7bd75be2df 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a65e1c07c5..6788898e3c 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1717,6 +1721,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1781,6 +1786,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1941,11 +1947,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3459,6 +3468,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1
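
The `PQauthDataHook` machinery declared in the libpq-fe.h hunk above is meant to let applications render the device prompt themselves. A rough sketch of a consumer, against the patched headers (the nonzero-means-handled return convention and the fallback to `PQdefaultAuthDataHook` are my reading of the hook design, not settled API):

```c
/* Sketch only: requires the patched libpq-fe.h from this series. */
#include <stdio.h>
#include "libpq-fe.h"

static int
my_authdata_hook(PGAuthData type, PGconn *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		const PQpromptOAuthDevice *prompt = data;

		/* Render the prompt however the application likes. */
		fprintf(stderr, "Visit %s and enter the code: %s\n",
				prompt->verification_uri, prompt->user_code);
		return 1;			/* handled (assumed convention) */
	}

	/* Fall back to libpq's built-in handling for anything else. */
	return PQdefaultAuthDataHook(type, conn, data);
}

static void
install_oauth_prompt_hook(void)
{
	/* Install before the first PQconnect* call. */
	PQsetAuthDataHook(my_authdata_hook);
}
```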

Attachment: v30-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 1a9422911f70d6aa04f5d11fe7d1864ef38cca89 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v30 3/5] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false.  (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the validator module, to deal
  with multi-issuer setups
- fill in documentation stubs
- ...and more.
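
For illustration, a minimal validator module skeleton against this API might look like the following. (The two helper calls are hypothetical placeholders; a real module must implement actual token verification as described above.)

```c
/* Sketch of a validator module; DO NOT use the placeholder checks as-is. */
#include "postgres.h"
#include "fmgr.h"
#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static ValidatorModuleResult *
my_validate(ValidatorModuleState *state, const char *token, const char *role)
{
	ValidatorModuleResult *res = palloc0(sizeof(ValidatorModuleResult));

	/*
	 * Placeholder: a real module would verify the token cryptographically,
	 * or introspect it with the issuer, before trusting anything in it.
	 */
	if (!check_token_with_issuer(token))	/* hypothetical helper */
	{
		res->authorized = false;
		return res;
	}

	res->authn_id = pstrdup(lookup_subject(token));	/* hypothetical helper */
	res->authorized = true;
	return res;
}

static const OAuthValidatorCallbacks callbacks = {
	.validate_cb = my_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &callbacks;
}
```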

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 configure                                     |   5 +
 configure.ac                                  |   4 +
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/config.sgml                      |  13 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |  96 +++
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 681 ++++++++++++++++++
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  33 +
 src/test/modules/oauth_validator/meson.build  |  33 +
 .../modules/oauth_validator/t/001_server.pl   | 254 +++++++
 .../modules/oauth_validator/t/oauth_server.py | 304 ++++++++
 src/test/modules/oauth_validator/validator.c  |  97 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 29 files changed, 1779 insertions(+), 36 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index b27cee91a1..f6b8d2656c 100755
--- a/configure
+++ b/configure
@@ -8444,6 +8444,11 @@ $as_echo "#define USE_OAUTH 1" >>confdefs.h
 
 $as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
 
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
 elif test x"$with_oauth" != x"no"; then
   as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
 fi
diff --git a/configure.ac b/configure.ac
index ddf8a36f12..e25481aec4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -929,6 +929,10 @@ fi
 if test x"$with_oauth" = x"curl"; then
   AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
   AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
 elif test x"$with_oauth" != x"no"; then
   AC_MSG_ERROR([--with-oauth must specify curl])
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 9707d5238d..573da1a0ed 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,19 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+	    TODO
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a7ff5f8264..91cf16678e 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..c9914519fc
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,96 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
+ <para>
+  OAuth validation modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared library
+   with the <xref linkend="guc-oauth-validator-library"/>'s name as the library
+   base name. The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname> which contains all that
+   libpq need to perform token validation using the module. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the
+   others are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..321d4590a3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -264,6 +264,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..dea973247a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,681 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..735fd05373 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2066,8 +2069,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2454,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 686309db58..4075a6c95b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 85ed65bcd7..8370d82660 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -145,7 +145,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1899,6 +1899,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1927,13 +1930,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..e7a5f8c875
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,254 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+elsif ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer.
+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..ab6d6ccd2f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,304 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected at least {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..d12c79e2a2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,97 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index c793f2135d..a176b5895d 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2491,6 +2491,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2534,7 +2539,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 6788898e3c..6b11f58b79 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1722,6 +1722,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3074,6 +3075,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3669,6 +3672,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

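For reviewers skimming the Perl tests above: the "magic instructions" smuggled through oauth_client_id are just Base64-encoded JSON, produced by the connstr() helper on the Perl side and unpacked by oauth_server.py for the /param issuer. A minimal Python sketch of the round trip (the helper names here are illustrative, not part of the patch):

```python
import base64
import json


def encode_test_params(**params) -> str:
    # Mirrors the Perl connstr() helper: JSON-encode the test parameters,
    # then Base64-encode the result so it can travel inside the
    # oauth_client_id field of a connection string.
    return base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")


def decode_test_params(client_id: str) -> dict:
    # Mirrors the do_POST() handling in oauth_server.py: decode the
    # Base64 payload and parse it back into a dict of test parameters.
    return json.loads(base64.b64decode(client_id))


if __name__ == "__main__":
    client_id = encode_test_params(stage="token", retries=2)
    print(decode_test_params(client_id))
```

Anything JSON-serializable can ride along this way, which is why the tests can pass nulls (JSON::PP::null) to mean "omit this field".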
[Attachment: v30-0004-Review-comments.patch (application/octet-stream)]
From 0b522f3117d4dc827a6cf949687113a7a019951c Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 28 Mar 2024 21:59:02 +0100
Subject: [PATCH v30 4/5] Review comments

Fixes and tidy-ups following a review of v21. A few of the
items are (listed in no specific order):

* Implement a version check for libcurl in autoconf, the equivalent
  check for Meson is still a TODO. [ed: moved to an earlier commit]
* Address a few TODOs in the code
* libpq JSON support memory management fixups [ed: moved to an earlier
  commit]
---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 36 ++++++++++++-----------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 8370d82660..35c9a30ef4 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1852,32 +1852,34 @@ finish_token_request(struct async_ctx *actx, struct token *tok)
 	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
 
 	/*
-	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
-	 * response uses either 400 Bad Request or 401 Unauthorized.
-	 *
-	 * TODO: there are references online to 403 appearing in the wild...
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
 	 */
-	if (response_code != 200
-		&& response_code != 400
-		 /* && response_code != 401 TODO */ )
+	if (response_code == 200)
 	{
-		actx_error(actx, "unexpected response code %ld", response_code);
-		return false;
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
 	}
 
 	/*
-	 * Pull the fields we care about from the document.
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
 	 */
-	if (response_code == 200)
+	if (response_code == 400 || response_code == 401)
 	{
-		actx->errctx = "failed to parse access token response";
-		if (!parse_access_token(actx, tok))
-			return false;		/* error message already set */
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
 	}
-	else if (!parse_token_error(actx, &tok->err))
-		return false;
 
-	return true;
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
 }
 
 /*
-- 
2.34.1

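The control flow that 0004 settles on for finish_token_request() — 200 means parse the access token, 400/401 means parse an OAuth error object, anything else (including the off-spec 403 seen in the wild) is a hard failure — can be sketched in Python. This is only an illustration of the dispatch logic; the function name and dict-based parsing are stand-ins for the C implementation:

```python
def finish_token_request(status: int, body: dict) -> dict:
    # Per RFC 6749 Section 5.1, a successful response uses 200 OK and
    # carries the access token itself.
    if status == 200:
        if "access_token" not in body:
            raise ValueError("failed to parse access token response")
        return {"token": body["access_token"]}

    # Per Section 5.2, an error response uses 400 Bad Request (or 401
    # Unauthorized for invalid_client). A 403 would be off-spec, so it
    # falls through to the generic failure below.
    if status in (400, 401):
        if "error" not in body:
            raise ValueError("failed to parse token error response")
        return {"error": body["error"]}

    raise ValueError(f"unexpected response code {status}")
```

Note that a 400 with error=authorization_pending is not a terminal failure in the device flow; the caller is expected to keep polling, which is exactly what the retry tests in the TAP suite exercise.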
[Attachment: v30-0005-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)]
From 4a1cd6e1cec605ec3b8a27e09d34aac3bdbb89ee Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v30 5/5] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1910 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5629 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 7518520ec9..ce9a7d28f2 100644
--- a/meson.build
+++ b/meson.build
@@ -3385,6 +3385,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3546,6 +3549,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
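As background for the OAuth client tests further down: the OAUTHBEARER wire
format they assert against is simple -- a GS2 header, then ^A-separated
key/value pairs, terminated by an empty pair (RFC 7628, Section 3.1). A
stdlib-only sketch, separate from the pq3 helpers (function names here are
purely illustrative, not part of the patch):

```python
def build_initial_response(token: str) -> bytes:
    """Builds "n,,^Aauth=Bearer <token>^A^A" for a client that uses no
    channel binding and no authzid."""
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"


def parse_auth_value(initial: bytes) -> bytes:
    """Extracts the value of the auth kvpair, mirroring the assertions the
    test suite makes on the client's initial SASL response."""
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"  # GS2 header: no channel binding or authzid
    assert kvpairs[-2:] == [b"", b""]  # message ends with an empty kvpair

    # Split only on the first '='; the value itself may contain more.
    key, _, value = kvpairs[1].partition(b"=")
    assert key == b"auth"
    return value
```

Round-tripping a token through these two helpers is exactly what a successful
handshake in the tests boils down to.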
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
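The ClientHandshake class above relies on a pattern worth spelling out:
Python threads can't raise into the thread that joins them, so the exception
is stashed by run() and re-raised by check_completed(). Distilled to its
essentials (a standalone sketch, not part of the patch):

```python
import threading


class Worker(threading.Thread):
    """Runs fn() on a background thread, capturing any exception so that
    the joining thread can re-raise it -- the ClientHandshake pattern."""

    def __init__(self, fn):
        super().__init__()
        self._fn = fn
        self.exception = None

    def run(self):
        try:
            self._fn()
        except Exception as e:
            # Can't propagate across threads; stash it for the joiner.
            self.exception = e

    def check_completed(self, timeout=2):
        self.join(timeout)
        if self.is_alive():
            raise TimeoutError("worker did not finish within the timeout")
        if self.exception:
            # Clear before re-raising so later calls succeed.
            e, self.exception = self.exception, None
            raise e
```

Without this, a failed assertion on the client thread would be silently
swallowed and the test would hang or pass spuriously.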
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
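The Section 2.2 primitives used by test_scram can be cross-checked against
the standard library: Hi(str, salt, i) is exactly PBKDF2 with HMAC-SHA-256
(one output block), and the server-side proof check reduces to recovering
ClientKey from the proof and hashing it. A sketch under that observation
(function names are ours, not part of the patch):

```python
import hashlib
import hmac


def scram_sha256_keys(password: bytes, salt: bytes, i: int):
    # Hi() from RFC 5802 Section 2.2 is PBKDF2-HMAC-SHA-256 with dkLen=32.
    salted = hashlib.pbkdf2_hmac("sha256", password, salt, i)
    client_key = hmac.digest(salted, b"Client Key", "sha256")
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.digest(salted, b"Server Key", "sha256")
    return client_key, stored_key, server_key


def client_proof(client_key: bytes, stored_key: bytes, auth_message: bytes):
    sig = hmac.digest(stored_key, auth_message, "sha256")
    return bytes(a ^ b for a, b in zip(client_key, sig))


def server_verify_proof(stored_key: bytes, auth_message: bytes, proof: bytes):
    # The server recovers ClientKey = proof XOR ClientSignature, then
    # checks that H(ClientKey) matches the StoredKey it has on file.
    sig = hmac.digest(stored_key, auth_message, "sha256")
    recovered = bytes(a ^ b for a, b in zip(proof, sig))
    return hashlib.sha256(recovered).digest() == stored_key
```

The PBKDF2 equivalence also gives an independent check on the hand-rolled
h_i() in the test file.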
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..a9244d43f1
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1910 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider server did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and increment the attempt count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and increment the attempt count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. It's not very
+    efficient, but IMO it's easier to read and maintain than one big regex.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
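
The error-challenge tests above lean on RFC 7628's message framing: the client's initial response is a GS2 header plus ^A-delimited key/value pairs, and after the server's JSON error "challenge" the client must reply with a single ^A (0x01) byte before the exchange fails. A minimal sketch of those client-side messages (helper names here are illustrative, not part of the patch):

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator


def oauthbearer_initial_response(token, authzid=""):
    """Build the OAUTHBEARER client-first message per RFC 7628 Section 3.1:
    gs2-header, kvsep, one auth kvpair, and a terminating kvsep."""
    gs2 = ("n," + ("a=" + authzid if authzid else "") + ",").encode("ascii")
    kvpairs = b"auth=Bearer " + token.encode("ascii") + KVSEP
    return gs2 + KVSEP + kvpairs + KVSEP


# After an error challenge, the client's only valid reply is a lone kvsep
# byte; the tests above assert exactly this (pkt.payload == b"\x01").
DUMMY_ERROR_RESPONSE = KVSEP
```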
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top-level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
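
The PG_TEST_EXTRA gate above is a whole-word check against a space-separated opt-in list, so e.g. "pythonic" does not enable the suite. A standalone sketch of the same logic (function name illustrative):

```python
def python_tests_enabled(env):
    # Mirror the fixture: PG_TEST_EXTRA is a space-separated list of opt-in
    # test suite names, and 'python' must appear as a whole word.
    return "python" in env.get("PG_TEST_EXTRA", "").split()
```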
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
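
pq3.py (below) serializes startup parameters as NUL-terminated key/value strings with a trailing empty string, per the v3 protocol's startup packet layout. A standalone sketch of that wire encoding, matching what KeyValueAdapter plus StringList produce (helper name illustrative):

```python
def encode_startup_params(params):
    # Each key and value is NUL-terminated; a lone NUL (an empty string)
    # terminates the whole list.
    out = bytearray()
    for k, v in params.items():
        out += k.encode("utf-8") + b"\x00"
        out += v.encode("utf-8") + b"\x00"
    out += b"\x00"
    return bytes(out)
```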
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        out = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            out.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            out.append(v)
+
+        out.append(b"")
+        return out
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server refused SSL (responded 'N' to SSLRequest)")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
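For reference, the SSLRequest that tls_handshake() sends via send_startup() is just a startup packet carrying the special protocol number (1234, 5679). A minimal standalone sketch of that framing, independent of the pq3 helpers (the protocol() and build_ssl_request() names here are illustrative, not part of pq3):

```python
import struct

def protocol(major, minor):
    # Same encoding pq3 uses: high 16 bits major, low 16 bits minor.
    return (major << 16) | minor

# The SSLRequest "version" defined by the libpq v3 protocol.
SSL_REQUEST_CODE = protocol(1234, 5679)  # == 80877103

def build_ssl_request():
    # int32 length (which includes itself) followed by the int32 request code.
    return struct.pack("!ii", 8, SSL_REQUEST_CODE)

# The server answers with a single byte: b"S" to proceed with the TLS
# handshake, or b"N" to decline.
```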
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
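The factory-plus-ExitStack shape used by the connect fixture above can be sketched in isolation: a fixture yields a factory, the factory registers each resource it creates with a shared ExitStack, and everything is closed together at teardown. The Resource class below is a stand-in for a socket, purely for illustration:

```python
import contextlib

class Resource:
    """A stand-in for a socket/connection; records whether it was closed."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

@contextlib.contextmanager
def resource_factory():
    # Mirrors the connect fixture: yield a callable that hands out resources,
    # and let the ExitStack clean up every one of them on context exit.
    with contextlib.ExitStack() as stack:
        def factory():
            return stack.enter_context(Resource())

        yield factory
```

This keeps per-test cleanup in one place even when a test opens several connections.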
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
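The test suite below drives this validator with hand-built SASL messages. For context, the OAUTHBEARER initial client response that carries the token to the server is framed per RFC 7628 roughly as follows (a sketch, separate from the pq3 helpers; the builder name is illustrative):

```python
# RFC 7628 delimits key/value pairs with ^A (0x01).
KVSEP = b"\x01"

def initial_client_response(token):
    # A GS2 header ("n,," = no channel binding, no authzid), then the
    # auth key/value pair, terminated by a double kvsep.
    auth = b"auth=Bearer " + token
    return b"n,," + KVSEP + auth + KVSEP + KVSEP
```

This matches the payload that send_initial_response() in the tests wraps inside a SASLInitialResponse message.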
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..8fed4a9716
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. As an alternative to a bearer token, the initial response's
+    auth field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("the auth and bearer kwargs may not be used together")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
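As a counterpoint to the malformed cases above, a well-formed OAUTHBEARER initial client response (RFC 7628) is a GS2 header, a kvsep (0x01), the `auth` key/value pair, and a closing double kvsep. A small sketch of building one by hand (the token value is made up; the tests' `send_initial_response` helper presumably does the equivalent):

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator

def build_initial_response(token: str, gs2: bytes = b"n,,") -> bytes:
    """Build a well-formed OAUTHBEARER initial response."""
    auth = b"auth=Bearer " + token.encode("ascii")
    return gs2 + KVSEP + auth + KVSEP + KVSEP

msg = build_initial_response("abcd1234")
```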
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
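The expected strings above pin down `_DebugStream`'s dump format: a direction marker, a 4-hex-digit offset, a tab, hex bytes padded out to the full 16-byte column width (47 characters), another tab, then a printable-ASCII rendering with non-printables shown as dots. A standalone sketch of rendering one such line (the real `_DebugStream` internals may differ):

```python
def dump_line(data: bytes, offset: int = 0, direction: str = "<") -> str:
    """Render one hexdump line in the style the tests above expect."""
    # 16 bytes at 2 hex digits each, 15 separating spaces: 16*3 - 1 = 47.
    hexpart = " ".join(f"{b:02x}" for b in data).ljust(47)
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in data)
    return f"{direction} {offset:04x}:\t{hexpart}\t{text}\n"

line = dump_line(b"abcde")
```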
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
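The implied-length cases above exercise the v3 StartupMessage framing: a self-inclusive 4-byte big-endian length, the protocol number 0x00030000, then NUL-terminated key/value parameter strings closed by a final NUL. A hand-rolled sketch that reproduces the dict-parameter case, without the construct-based `pq3.Startup`:

```python
import struct

def build_startup(params: dict) -> bytes:
    """Frame a v3 StartupMessage: int32 len, int32 proto, k\\0v\\0...\\0."""
    body = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in params.items()
    )
    body += b"\x00"  # parameter-list terminator
    return struct.pack("!II", 8 + len(body), 0x00030000) + body

pkt = build_startup({"user": "jsmith", "database": "postgres"})
```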
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
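For the regular (post-startup) messages parsed above, the framing is one type byte followed by a 4-byte big-endian length that counts itself and the payload but not the type byte. A minimal sketch of that framing, mirroring the raw byte strings in the parametrize list:

```python
import struct

def frame(msg_type: bytes, payload: bytes = b"") -> bytes:
    """Frame a v3 message: type byte + self-inclusive int32 len + payload."""
    return msg_type + struct.pack("!I", 4 + len(payload)) + payload

pkt = frame(b"Q", b"SELECT 1;\x00")
```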
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#134Alexander Lakhin
exclusion@gmail.com
In reply to: Peter Eisentraut (#122)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hello Peter,

11.09.2024 10:37, Peter Eisentraut wrote:

This has been committed.

I've discovered that starting from 0785d1b8b,
make check -C src/bin/pg_combinebackup
fails under Valgrind, with the following diagnostics:
2024-10-15 14:29:52.883 UTC [3338981] 002_compare_backups.pl STATEMENT:  UPLOAD_MANIFEST
==00:00:00:20.028 3338981== Conditional jump or move depends on uninitialised value(s)
==00:00:00:20.028 3338981==    at 0xA3E68F: json_lex (jsonapi.c:1496)
==00:00:00:20.028 3338981==    by 0xA3ED13: json_lex (jsonapi.c:1666)
==00:00:00:20.028 3338981==    by 0xA3D5AF: pg_parse_json_incremental (jsonapi.c:822)
==00:00:00:20.028 3338981==    by 0xA40ECF: json_parse_manifest_incremental_chunk (parse_manifest.c:194)
==00:00:00:20.028 3338981==    by 0x31656B: FinalizeIncrementalManifest (basebackup_incremental.c:237)
==00:00:00:20.028 3338981==    by 0x73B4A4: UploadManifest (walsender.c:709)
==00:00:00:20.028 3338981==    by 0x73DF4A: exec_replication_command (walsender.c:2185)
==00:00:00:20.028 3338981==    by 0x7C58C3: PostgresMain (postgres.c:4762)
==00:00:00:20.028 3338981==    by 0x7BBDA7: BackendMain (backend_startup.c:107)
==00:00:00:20.028 3338981==    by 0x6CF60F: postmaster_child_launch (launch_backend.c:274)
==00:00:00:20.028 3338981==    by 0x6D546F: BackendStartup (postmaster.c:3415)
==00:00:00:20.028 3338981==    by 0x6D2B21: ServerLoop (postmaster.c:1648)
==00:00:00:20.028 3338981==

(Initializing
        dummy_lex.inc_state = NULL;
before
        partial_result = json_lex(&dummy_lex);
makes these TAP tests pass for me.)

Best regards,
Alexander

#135Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Alexander Lakhin (#134)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Oct 15, 2024 at 11:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:

I've discovered that starting from 0785d1b8b,
make check -C src/bin/pg_combinebackup
fails under Valgrind, with the following diagnostics:

Yep, sorry for that (and thanks for the report!). It's currently
tracked over at [1], but I should have mentioned it here. The patch I
used is attached, renamed to not stress out the cfbot.

--Jacob

[1]: /messages/by-id/CAOYmi+kiiM83=H6YQ77NSSCtkGAzAnZfC0vZS=aLM9QZx=Rn_A@mail.gmail.com

Attachments:

v4-0002-jsonapi-fully-initialize-dummy-lexer.patch.txt (text/plain)
From d3e639ba2bacf64fc0d2eb3aa9364a87030335f5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 7 Oct 2024 14:41:31 -0700
Subject: [PATCH v4 2/2] jsonapi: fully initialize dummy lexer

Valgrind reports that checks on lex->inc_state are undefined for the
"dummy lexer" used for incremental parsing, since it's only partially
initialized on the stack. This was introduced in 0785d1b8b2.
Zero-initialize the whole struct.
---
 src/common/jsonapi.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/common/jsonapi.c b/src/common/jsonapi.c
index df6e633b5e..0e2a82ad7a 100644
--- a/src/common/jsonapi.c
+++ b/src/common/jsonapi.c
@@ -1622,7 +1622,7 @@ json_lex(JsonLexContext *lex)
 		jsonapi_StrValType *ptok = &(lex->inc_state->partial_token);
 		size_t		added = 0;
 		bool		tok_done = false;
-		JsonLexContext dummy_lex;
+		JsonLexContext dummy_lex = {0};
 		JsonParseErrorType partial_result;
 
 		if (ptok->data[0] == '"')
-- 
2.34.1

#136Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#133)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Oct 10, 2024 at 4:08 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Here's v30, which aims to fix the infinite-retry problem: you get a
maximum of two OAuth connections to the server, and we won't prompt
the user more than once per transport. (I still need to wrap my head
around the retry behavior during transport negotiation.)

v31 folds in the remainder of the review patch (hooray!) and makes a
change to the 401 handling. If the server doesn't tell the client why
a token request failed, but we see that we got a 401 Unauthorized, the
error message now suggests that the oauth_client_secret setting is
unacceptable.

(If anyone out there happens to be testing this against real-world
implementations, I would love to get your feedback on this failure
mode. I can no longer get Entra ID to require a client secret during a
device flow, the way I used to be able to with Azure AD.)

I've also made the default device prompt translatable.

--Jacob

Attachments:

since-v30.diff.txt (text/plain)
1:  a2b4d75bf8 = 1:  696b0b2af4 Make SASL max message length configurable
2:  e339e7844e ! 2:  80afe8b107 libpq: add OAUTHBEARER SASL mechanism
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	int			running;		/* is asynchronous work in progress? */
     +	bool		user_prompted;	/* have we already sent the authz prompt? */
    ++	bool		used_basic_auth;	/* did we send a client secret? */
     +	bool		debugging;		/* can we give unsafe developer assistance? */
     +};
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return result;
     +}
     +
    ++/*
    ++ * Constructs a message from the token error response and puts it into
    ++ * actx->errbuf.
    ++ */
    ++static void
    ++record_token_error(struct async_ctx *actx, const struct token_error *err)
    ++{
    ++	if (err->error_description)
    ++		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
    ++	else
    ++	{
    ++		/*
    ++		 * Try to get some more helpful detail into the error string. A 401
    ++		 * status in particular implies that the oauth_client_secret is
    ++		 * missing or wrong.
    ++		 */
    ++		long		response_code;
    ++
    ++		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
    ++
    ++		if (response_code == 401)
    ++		{
    ++			actx_error(actx, actx->used_basic_auth
    ++					   ? "provider rejected the oauth_client_secret"
    ++					   : "provider requires client authentication, and no oauth_client_secret is set");
    ++			actx_error_str(actx, " ");
    ++		}
    ++	}
    ++
    ++	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
    ++}
    ++
     +static bool
     +parse_access_token(struct async_ctx *actx, struct token *tok)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 *   scheme (or other password-based HTTP authentication schemes).
     +		 *
     +		 * TODO: should we omit client_id from the body in this case?
    ++		 * TODO: url-encode...?
     +		 */
     +		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
     +		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
     +		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
    ++
    ++		actx->used_basic_auth = true;
     +	}
     +	else
    ++	{
     +		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
    ++		actx->used_basic_auth = false;
    ++	}
     +
     +	return start_request(actx);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
     +
     +	/*
    -+	 * The device authorization endpoint uses the same error response as the
    -+	 * token endpoint, so we have to handle 400/401 here too.
    ++	 * Per RFC 8628, Section 3, a successful device authorization response
    ++	 * uses 200 OK.
     +	 */
    -+	if (response_code != 200
    -+		&& response_code != 400
    -+		 /* && response_code != 401 TODO */ )
    ++	if (response_code == 200)
     +	{
    -+		actx_error(actx, "unexpected response code %ld", response_code);
    -+		return false;
    ++		actx->errctx = "failed to parse device authorization";
    ++		if (!parse_device_authz(actx, &actx->authz))
    ++			return false;		/* error message already set */
    ++
    ++		return true;
     +	}
     +
    -+	if (response_code != 200)
    ++	/*
    ++	 * The device authorization endpoint uses the same error response as the
    ++	 * token endpoint, so the error handling roughly follows
    ++	 * finish_token_request(). The key difference is that an error here is
    ++	 * immediately fatal.
    ++	 */
    ++	if (response_code == 400 || response_code == 401)
     +	{
     +		struct token_error err = {0};
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			return false;
     +		}
     +
    -+		if (err.error_description)
    -+			appendPQExpBuffer(&actx->errbuf, "%s ", err.error_description);
    -+
    -+		appendPQExpBuffer(&actx->errbuf, "(%s)", err.error);
    ++		record_token_error(actx, &err);
     +
     +		free_token_error(&err);
     +		return false;
     +	}
     +
    -+	/*
    -+	 * Pull the fields we care about from the document.
    -+	 */
    -+	actx->errctx = "failed to parse device authorization";
    -+	if (!parse_device_authz(actx, &actx->authz))
    -+		return false;			/* error message already set */
    -+
    -+	return true;
    ++	/* Any other response codes are considered invalid */
    ++	actx_error(actx, "unexpected response code %ld", response_code);
    ++	return false;
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 *   scheme (or other password-based HTTP authentication schemes).
     +		 *
     +		 * TODO: should we omit client_id from the body in this case?
    ++		 * TODO: url-encode...?
     +		 */
     +		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
     +		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
     +		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
    ++
    ++		actx->used_basic_auth = true;
     +	}
     +	else
    ++	{
     +		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
    ++		actx->used_basic_auth = false;
    ++	}
     +
     +	resetPQExpBuffer(work_buffer);
     +	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
     +
     +	/*
    -+	 * Per RFC 6749, Section 5, a successful response uses 200 OK. An error
    -+	 * response uses either 400 Bad Request or 401 Unauthorized.
    -+	 *
    -+	 * TODO: there are references online to 403 appearing in the wild...
    ++	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
     +	 */
    -+	if (response_code != 200
    -+		&& response_code != 400
    -+		 /* && response_code != 401 TODO */ )
    ++	if (response_code == 200)
     +	{
    -+		actx_error(actx, "unexpected response code %ld", response_code);
    -+		return false;
    ++		actx->errctx = "failed to parse access token response";
    ++		if (!parse_access_token(actx, tok))
    ++			return false;		/* error message already set */
    ++
    ++		return true;
     +	}
     +
     +	/*
    -+	 * Pull the fields we care about from the document.
    ++	 * An error response uses either 400 Bad Request or 401 Unauthorized.
    ++	 * There are references online to implementations using 403 for error
    ++	 * return which would violate the specification. For now we stick to the
    ++	 * specification but we might have to revisit this.
     +	 */
    -+	if (response_code == 200)
    ++	if (response_code == 400 || response_code == 401)
     +	{
    -+		actx->errctx = "failed to parse access token response";
    -+		if (!parse_access_token(actx, tok))
    -+			return false;		/* error message already set */
    ++		if (!parse_token_error(actx, &tok->err))
    ++			return false;
    ++
    ++		return true;
     +	}
    -+	else if (!parse_token_error(actx, &tok->err))
    -+		return false;
     +
    -+	return true;
    ++	/* Any other response codes are considered invalid */
    ++	actx_error(actx, "unexpected response code %ld", response_code);
    ++	return false;
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (strcmp(err->error, "authorization_pending") != 0 &&
     +		strcmp(err->error, "slow_down") != 0)
     +	{
    -+		if (err->error_description)
    -+			appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
    -+
    -+		appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
    ++		record_token_error(actx, err);
     +		goto token_cleanup;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (!res)
     +	{
    -+		fprintf(stderr, "Visit %s and enter the code: %s",
    ++		/*
    ++		 * translator: The first %s is a URL for the user to visit in a
    ++		 * browser, and the second %s is a code to be copy-pasted there.
    ++		 */
    ++		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
     +				prompt.verification_uri, prompt.user_code);
     +	}
     +	else if (res < 0)
3:  1a9422911f ! 3:  66505336d6 backend: add OAUTHBEARER SASL mechanism
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +);
     +
     +$node->connect_fails(
    ++	connstr(stage => 'token', error_code => "invalid_grant"),
    ++	"bad token response: invalid_grant, no description",
    ++	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
    ++$node->connect_fails(
    ++	connstr(
    ++		stage => 'token',
    ++		error_code => "invalid_grant",
    ++		error_desc => "grant expired"),
    ++	"bad token response: expired grant",
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
    ++$node->connect_fails(
    ++	connstr(
    ++		stage => 'token',
    ++		error_code => "invalid_client",
    ++		error_status => 401),
    ++	"bad token response: client authentication failure, default description",
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
    ++);
    ++$node->connect_fails(
    ++	connstr(
    ++		stage => 'token',
    ++		error_code => "invalid_client",
    ++		error_status => 401,
    ++		error_desc => "authn failure"),
    ++	"bad token response: client authentication failure, provided description",
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
    ++
    ++$node->connect_fails(
     +	connstr(stage => 'token', token => ""),
     +	"server rejects access: empty token",
     +	expected_stderr => qr/bearer authentication failed/);
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	"server rejects access: invalid token contents",
     +	expected_stderr => qr/bearer authentication failed/);
     +
    ++# Test behavior of the oauth_client_secret.
    ++$common_connstr = "$common_connstr oauth_client_secret=12345";
    ++
    ++$node->connect_ok(
    ++	connstr(stage => 'all', expected_secret => '12345'),
    ++	"oauth_client_secret",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
    ++
    ++$node->connect_fails(
    ++	connstr(
    ++		stage => 'token',
    ++		error_code => "invalid_client",
    ++		error_status => 401),
    ++	"bad token response: client authentication failure, default description with oauth_client_secret",
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
    ++);
    ++$node->connect_fails(
    ++	connstr(
    ++		stage => 'token',
    ++		error_code => "invalid_client",
    ++		error_status => 401,
    ++		error_desc => "mutual TLS required for client"),
    ++	"bad token response: client authentication failure, provided description with oauth_client_secret",
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
    ++);
    ++
     +#
     +# This section of tests reconfigures the validator module itself, rather than
     +# the OAuth server.
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        elif self._parameterized:
     +            self.path = self.path.removeprefix("/param")
     +
    ++    def _check_authn(self):
    ++        """
    ++        Checks the expected value of the Authorization header, if any.
    ++        """
    ++        secret = self._get_param("expected_secret", None)
    ++        if secret is None:
    ++            return
    ++
    ++        if "Authorization" not in self.headers:
    ++            raise RuntimeError("client did not send Authorization header")
    ++
    ++        method, creds = self.headers["Authorization"].split()
    ++
    ++        if method != "Basic":
    ++            raise RuntimeError(f"client used {method} auth; expected Basic")
    ++
    ++        expected_creds = f"{self.client_id}:{secret}"
    ++        if creds.encode() != base64.b64encode(expected_creds.encode()):
    ++            raise RuntimeError(
    ++                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
    ++            )
    ++
     +    def do_GET(self):
     +        self._response_code = 200
     +        self._check_issuer()
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +            js = base64.b64decode(self.client_id)
     +            self._test_params = json.loads(js)
     +
    ++        self._check_authn()
    ++
     +        if self.path == "/authorize":
     +            resp = self.authorization()
     +        elif self.path == "/token":
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        return resp
     +
     +    def token(self) -> JsonObject:
    ++        if err := self._get_param("error_code", None):
    ++            self._response_code = self._get_param("error_status", 400)
    ++
    ++            resp = {"error": err}
    ++            if desc := self._get_param("error_desc", ""):
    ++                resp["error_description"] = desc
    ++
    ++            return resp
    ++
     +        if self._should_modify() and "retries" in self._test_params:
     +            retries = self._test_params["retries"]
     +
4:  0b522f3117 < -:  ---------- Review comments
5:  4a1cd6e1ce ! 4:  3c32d8f59c DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +    [
     +        pytest.param(
     +            (
    -+                400,
    ++                401,
     +                {
     +                    "error": "invalid_client",
     +                    "error_description": "client authentication failed",
    @@ src/test/python/client/test_oauth.py (new)
     +            id="broken error response",
     +        ),
     +        pytest.param(
    ++            (401, {"error": "invalid_client"}),
    ++            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
    ++            id="failed authentication without description",
    ++        ),
    ++        pytest.param(
     +            (200, RawResponse(r'{ "interval": 3.5.8 }')),
     +            r"failed to parse device authorization: Token .* is invalid",
     +            id="non-numeric interval",
    @@ src/test/python/client/test_oauth.py (new)
     +            id="empty error response",
     +        ),
     +        pytest.param(
    ++            (401, {"error": "invalid_client"}),
    ++            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
    ++            id="authentication failure without description",
    ++        ),
    ++        pytest.param(
     +            (200, {}, {}),
     +            r"failed to parse access token response: no content type was provided",
     +            id="missing content type",
Attachment: v31-0001-Make-SASL-max-message-length-configurable.patch (application/octet-stream)
From 696b0b2af4015dbfcbbfaee75fe6060a733d738c Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 7 Oct 2024 16:56:26 -0700
Subject: [PATCH v31 1/4] Make SASL max message length configurable

The proposed OAUTHBEARER SASL mechanism will need to allow larger
messages in the exchange, since tokens are sent directly by the client.
Move this limit into the pg_be_sasl_mech struct so that it can be
changed per-mechanism.
---
 src/backend/libpq/auth-sasl.c  | 10 +---------
 src/backend/libpq/auth-scram.c |  4 +++-
 src/include/libpq/sasl.h       | 11 +++++++++++
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
-- 
2.34.1

Attachment: v31-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 80afe8b10773a3aee1235063ee45d1209d2299e0 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Wed, 11 Sep 2024 09:41:29 +0200
Subject: [PATCH v31 2/4] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client
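For example, the debug mode might be enabled for a single throwaway shell
session like so (the psql invocation is illustrative only):

```shell
# DANGER: this dumps the full HTTP exchange, including secrets, to stderr.
# Only set it in a disposable debugging session, never in production.
export PGOAUTHDEBUG=UNSAFE

# Then connect as usual, e.g.:
#   psql 'host=example.org oauth_client_id=f02c6361-0635-...'
echo "$PGOAUTHDEBUG"
```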

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
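A hook following the chaining protocol above might be sketched as follows.
The PGauthData enum and PQauthDataHook_type signature here are simplified
stand-ins for the real libpq-fe.h declarations; in real code, the previous
hook would be fetched with PQgetAuthDataHook() before installing the new one
via PQsetAuthDataHook().

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the types this patch adds to libpq-fe.h. */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;

typedef int (*PQauthDataHook_type) (PGauthData type, void *conn, void *data);

/* Stand-in for the previous hook in the chain (cf. PQgetAuthDataHook()). */
static int
prev_hook(PGauthData type, void *conn, void *data)
{
	return 0;					/* not handled; use libpq's default behavior */
}

/*
 * A custom hook: claim only the device prompt, and delegate every other
 * authdata type to the previous hook in the chain.
 */
static int
my_hook(PGauthData type, void *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		/* ...display the verification URL and one-time code our own way... */
		return 1;				/* >0 signals "handled" */
	}

	return prev_hook(type, conn, data);
}
```

Returning an integer < 0 from either function would instead abandon the
connection attempt, per the error convention above.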

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2305 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  666 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   86 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/pgindent               |   14 +
 src/tools/pgindent/typedefs.list          |   11 +
 26 files changed, 3681 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 3a577e463b..b41539bdb1 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12922,6 +12971,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13947,6 +14080,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 55f6c46d33..05e5bbca11 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1395,6 +1415,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1586,6 +1611,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..ffec0431e3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9961,6 +9998,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index 58e67975e8..7518520ec9 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3042,6 +3071,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3714,6 +3744,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 427030f31a..bd27d9279d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -700,6 +703,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..831677119e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2305 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison: the expected type may be followed
+	 * by media type parameters, so we can't compare the whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+	else if (parsed >= INT_MAX)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving verbose debug information from
+		 * libcurl. The callback only takes effect when CURLOPT_VERBOSE has
+		 * been set, so keep these two calls in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of any single
+ * chunk passed here is defined by CURL_MAX_WRITE_SIZE, which defaults to
+ * 16kB and can only be changed by recompiling libcurl.
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* If accepting this data would exceed the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * If we ran out of memory while accepting the data, signal an error to
+	 * abort the transfer.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
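For readers unfamiliar with libcurl's write-callback contract, here is a self-contained sketch of the capping logic above. `resp_buffer`, `append_capped`, and `MAX_RESPONSE_SIZE` are hypothetical stand-ins (with a deliberately tiny cap); libcurl itself is not required. Returning any count other than `size * nmemb` tells libcurl to abort the transfer:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for actx->work_data and MAX_OAUTH_RESPONSE_SIZE. */
#define MAX_RESPONSE_SIZE 16

struct resp_buffer
{
	char		data[MAX_RESPONSE_SIZE + 1];
	size_t		len;
};

/*
 * Mimics append_data(): a libcurl-style write callback receives size * nmemb
 * bytes and must return exactly that count, or the transfer is aborted.
 */
static size_t
append_capped(char *buf, size_t size, size_t nmemb, void *userdata)
{
	struct resp_buffer *resp = userdata;
	size_t		len = size * nmemb;

	/* Over the threshold: return a short count so the transfer aborts. */
	if (resp->len + len > MAX_RESPONSE_SIZE)
		return 0;

	memcpy(resp->data + resp->len, buf, len);
	resp->len += len;
	resp->data[resp->len] = '\0';

	return len;					/* full count: keep going */
}
```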
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
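The grant-type check above is a plain walk over a `curl_slist`. As a self-contained sketch of the same scan (`str_list` and `list_contains` are hypothetical stand-ins, not part of the patch, so libcurl is not needed):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for libcurl's curl_slist. */
struct str_list
{
	const char *data;
	struct str_list *next;
};

/*
 * Walks the singly linked list and reports whether any entry matches,
 * like the grant_types_supported scan in check_for_device_flow().
 */
static bool
list_contains(const struct str_list *head, const char *needle)
{
	for (; head != NULL; head = head->next)
	{
		if (strcmp(head->data, needle) == 0)
			return true;
	}

	return false;
}
```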
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations returning 403 on error, which
+	 * would violate the specification. For now we stick to the spec, but we
+	 * may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which will have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
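The slow_down arithmetic above can be seen in isolation in a tiny overflow-checked helper. This is a hedged sketch (`increase_interval` is hypothetical, not part of the patch); it checks against `INT_MAX` before adding, so the signed overflow itself, which is undefined behavior in C, never actually occurs:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/*
 * Per RFC 8628, Sec. 3.5, each slow_down error permanently adds five seconds
 * to the polling interval. Check before the addition so signed overflow
 * (undefined behavior in C) is never performed.
 */
static bool
increase_interval(int *interval)
{
	if (*interval > INT_MAX - 5)
		return false;			/* would overflow */

	*interval += 5;
	return true;
}
```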
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
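As an aside, the newline trimming at the end of the error path above can be exercised in isolation. `trim_paren_newline` is a hypothetical helper operating on a plain char buffer, not part of the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Mirrors the error-suffix cleanup: libcurl sometimes leaves a trailing
 * newline in CURLOPT_ERRORBUFFER, so after appending " (<err>)" the message
 * may end in "\n)". Pull the ')' back over the newline and return the
 * (possibly shortened) length.
 */
static size_t
trim_paren_newline(char *msg, size_t len)
{
	if (len >= 2 && msg[len - 2] == '\n' && msg[len - 1] == ')')
	{
		msg[len - 2] = ')';
		msg[len - 1] = '\0';
		return len - 1;
	}

	return len;
}
```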
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..73718ac3b1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
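The `kvsep` concatenation above matters: per RFC 7628, Sec. 3.1, the initial response is a "n,," gs2 header followed by 0x01-separated key/value pairs and a double 0x01 terminator, and writing "\x01auth" in a single literal would let the hex escape swallow the 'a'. As a self-contained sketch (`build_initial_response` is a hypothetical stand-in for `client_initial_response()`, with the token already carrying its "Bearer " prefix and no libpq machinery):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define KVSEP "\x01"

/*
 * Builds an OAUTHBEARER initial response per RFC 7628, Sec. 3.1. The
 * string-literal concatenation keeps the 0x01 separators from merging with
 * the following characters into a longer hex escape.
 */
static char *
build_initial_response(const char *token)
{
	static const char fmt[] = "n,," KVSEP "auth=%s" KVSEP KVSEP;
	int			len = snprintf(NULL, 0, fmt, token);
	char	   *resp = malloc(len + 1);

	if (resp != NULL)
		snprintf(resp, len + 1, fmt, token);

	return resp;
}
```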
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded\n"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s\n"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+	{
+		/* Release any fields we parsed before hitting the error. */
+		free(ctx.status);
+		free(ctx.scope);
+		free(ctx.discovery_uri);
+		return false;
+	}
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status\n"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	free(ctx.status);
+	return true;
+}
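For reference, the fields extracted by handle_oauth_sasl_error() follow the OAUTHBEARER failure format from RFC 7628, sec. 3.2.2. A server challenge handled by this function might look like the following (all values are placeholders):

```json
{
  "status": "invalid_token",
  "scope": "openid",
  "openid-configuration": "https://example.org/.well-known/openid-configuration"
}
```

The optional `openid-configuration` and `scope` members let the server steer the client's next token request, which is why they are copied into the PGconn above.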
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
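To make the hook contract above concrete, here is a minimal self-contained sketch of an application-side hook. Note that `PGconn`, `PGAuthData`, and the request struct are reduced stand-ins for illustration only, not the real definitions from this patch; an actual hook implementation would include libpq-fe.h instead.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Reduced stand-ins for the libpq types, for illustration only; a real
 * hook implementation would include libpq-fe.h instead.
 */
typedef struct PGconn PGconn;
typedef enum
{
	PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGAuthData;

typedef struct _PQoauthBearerRequest
{
	char	   *token;			/* set by the hook once the flow completes */
	void	   *user;			/* hook-private state */
} PQoauthBearerRequest;

typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);

static PQauthDataHook_type auth_data_hook;

static void
PQsetAuthDataHook(PQauthDataHook_type hook)
{
	auth_data_hook = hook;
}

/*
 * Example hook: supply a (hypothetical) pre-fetched token synchronously.
 * Since no async callback is installed, the token must be set before the
 * hook returns, per the contract described in the header comments.
 */
static int
my_oauth_hook(PGAuthData type, PGconn *conn, void *data)
{
	PQoauthBearerRequest *request = data;

	if (type != PQAUTHDATA_OAUTH_BEARER_TOKEN)
		return 0;				/* not handled; use default behavior */

	request->token = "my-prefetched-token";
	return 1;					/* handled */
}
```

Since the token here is a static string, no cleanup callback is needed; a hook that allocates the token would also set the cleanup member described above.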
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,9 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 57de1acff3..450c79d5f0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1717,6 +1721,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1781,6 +1786,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1941,11 +1947,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3459,6 +3468,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v31-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 66505336d63a79e697c352b44e07fafdc5bcd909 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v31 3/4] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for the following:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.
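As a worked illustration of the options described above, an HBA entry might look like the following (the issuer and scope values are placeholders, and trust_validator_authz is shown in its optional position):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth issuer="https://accounts.example.com" scope="openid email" trust_validator_authz=1
```

With trust_validator_authz=1 set, the validator module's decision is final and no user mapping is consulted; without it, the validator's authn_id must pass the standard map (or match the role exactly).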

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 configure                                     |   5 +
 configure.ac                                  |   4 +
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/config.sgml                      |  13 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |  96 +++
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 681 ++++++++++++++++++
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  33 +
 src/test/modules/oauth_validator/meson.build  |  33 +
 .../modules/oauth_validator/t/001_server.pl   | 314 ++++++++
 .../modules/oauth_validator/t/oauth_server.py | 337 +++++++++
 src/test/modules/oauth_validator/validator.c  |  97 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 29 files changed, 1872 insertions(+), 36 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index b41539bdb1..c13ba42941 100755
--- a/configure
+++ b/configure
@@ -8444,6 +8444,11 @@ $as_echo "#define USE_OAUTH 1" >>confdefs.h
 
 $as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
 
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
 elif test x"$with_oauth" != x"no"; then
   as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
 fi
diff --git a/configure.ac b/configure.ac
index 05e5bbca11..a4e22e2dde 100644
--- a/configure.ac
+++ b/configure.ac
@@ -929,6 +929,10 @@ fi
 if test x"$with_oauth" = x"curl"; then
   AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
   AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
 elif test x"$with_oauth" != x"no"; then
   AC_MSG_ERROR([--with-oauth must specify curl])
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 934ef5e469..f089a8ff4c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,19 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+	    TODO
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..c9914519fc
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,96 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
+ <para>
+  OAuth validation modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared library
+   with the <xref linkend="guc-oauth-validator-library"/>'s name as the library
+   base name. The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an
+   OAuth validator module, it must define a function named
+   <function>_PG_oauth_validator_module_init</function>. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must have server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
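To sketch the shape of a module implementing the callbacks these docs describe, here is a self-contained toy example. The server-side typedefs are reproduced as stand-ins (a real module would use the definitions from the server's oauth header and export `_PG_oauth_validator_module_init`), and the hardcoded token check is purely illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Stand-ins modeled on the server-side types; a real module would use
 * the definitions from the server's oauth header instead.
 */
typedef struct ValidatorModuleState
{
	void	   *private_data;
} ValidatorModuleState;

typedef struct ValidatorModuleResult
{
	bool		authorized;
	char	   *authn_id;
} ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
													   const char *token,
													   const char *role);

typedef struct OAuthValidatorCallbacks
{
	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

/*
 * Toy validator: accepts one hardcoded token and reports "alice" as the
 * authenticated identity. Real validation logic is issuer-specific.
 */
static ValidatorModuleResult *
toy_validate(ValidatorModuleState *state, const char *token, const char *role)
{
	ValidatorModuleResult *res = calloc(1, sizeof(*res));

	if (token && strcmp(token, "valid-token") == 0)
	{
		res->authorized = true;
		res->authn_id = "alice";	/* static string; fine for a sketch */
	}
	return res;
}

static const OAuthValidatorCallbacks toy_callbacks = {
	.validate_cb = toy_validate,
};

/* A real module would export this as _PG_oauth_validator_module_init(). */
static const OAuthValidatorCallbacks *
module_init(void)
{
	return &toy_callbacks;
}
```

Only `validate_cb` is set here; startup and shutdown callbacks are left NULL, which the docs above describe as permitted.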
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..321d4590a3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -264,6 +264,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..dea973247a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,681 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
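For reference, the client-first message that this exchange parses can be assembled as follows. This is a standalone sketch of the RFC 7628 client-resp layout with a hypothetical token, not code from the patch.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define KVSEP "\001"			/* RFC 7628 kvsep, %x01 */

/*
 * Assemble an OAUTHBEARER initial client response:
 *
 *     client-resp = gs2-header kvsep *kvpair kvsep
 *
 * using the "n,," gs2-header (no channel binding, no authzid) and a
 * single auth kvpair carrying the bearer token.
 */
static int
build_client_resp(char *buf, size_t buflen, const char *bearer_token)
{
	return snprintf(buf, buflen,
					"n,," KVSEP "auth=Bearer %s" KVSEP KVSEP,
					bearer_token);
}
```

The trailing empty kvpair (two kvsep bytes in a row) is what signals the end of the list to the server's parser.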
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..735fd05373 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2066,8 +2069,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2454,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		hbaline->oauth_skip_usermap = (strcmp(val, "1") == 0);
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2c4cc8cd41..e91d211b7b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 831677119e..8747b0eb08 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -145,7 +145,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1945,6 +1945,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1973,13 +1976,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..3b8f057a26
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,314 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+elsif ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$common_connstr = "$common_connstr oauth_client_secret=12345";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => '12345'),
+	"oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer.
+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
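The "slow_down interval overflow" test above leans on RFC 8628's polling rules: on an `authorization_pending` error the client retries after the advertised interval, and each `slow_down` error increases that interval by 5 seconds, so a server advertising an interval near the integer maximum must trip an overflow check rather than wrap around. A minimal sketch of that client-side logic (hypothetical helper, not part of the patch):

```python
def next_poll_interval(current, error=None, increment=5, maximum=2**31 - 1):
    """Compute the delay before the next token request (RFC 8628 sketch).

    authorization_pending leaves the interval unchanged; slow_down adds
    `increment` seconds, guarding against overflow (the case the
    "slow_down interval overflow" test above exercises).
    """
    if error == "slow_down":
        if current > maximum - increment:
            raise OverflowError("slow_down interval overflow")
        current += increment
    return current


print(next_poll_interval(5))                     # → 5 (authorization_pending)
print(next_poll_interval(5, error="slow_down"))  # → 10 (backed off)
```

With `interval => ~0` in the connection string, the mock server advertises an interval the client cannot increment without overflow, which is how the test forces the error path.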
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..fb731ed2e5
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,337 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send Authorization header")
+
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        expected_creds = f"{self.client_id}:{secret}"
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
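As `do_POST()` above shows, the `/param/` issuer path smuggles per-test parameters through the OAuth `client_id` field as Base64-encoded JSON. A sketch of how a caller (such as the `connstr()` helper in the Perl tests) might pack parameters, mirrored by the decode the server performs:

```python
import base64
import json


def encode_test_params(**params):
    """Pack test parameters into a client_id that the mock server's /param/
    issuer path can decode (Base64-encoded JSON, as in do_POST() above).
    encode_test_params is a hypothetical helper for illustration."""
    return base64.b64encode(json.dumps(params).encode()).decode()


client_id = encode_test_params(stage="token", error_code="invalid_grant")

# The server side reverses this, exactly as do_POST() does:
decoded = json.loads(base64.b64decode(client_id))
print(decoded["stage"])  # → token
```

Abusing `client_id` this way keeps the mock server stateless about test configuration: every request carries its own behavior switches.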
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..d12c79e2a2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,97 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 110b53ba0d..3761e2aa7a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 450c79d5f0..7c9e3fb701 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1722,6 +1722,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3074,6 +3075,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3670,6 +3673,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

v31-0004-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch
From 3c32d8f59cca8542f5f31cc179f0ef33e29bd567 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v31 4/4] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1920 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5639 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 7518520ec9..ce9a7d28f2 100644
--- a/meson.build
+++ b/meson.build
@@ -3385,6 +3385,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3546,6 +3549,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write it.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
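The accept() fixture above pairs a listening server socket with a lazily started client thread. As a side note for reviewers, the pattern can be sketched stdlib-only (no psycopg2 or pq3; `make_accept` and the echo payload are illustrative, not part of the patch):

```python
import socket
import threading


def make_accept(server_sock, connect):
    # Mirrors the accept() fixture: lazily start the client thread on the
    # first call, then block in accept() until the client connects.
    client = None

    def factory():
        nonlocal client
        if client is None:
            client = threading.Thread(target=connect)
            client.start()
        sock, _ = server_sock.accept()
        return sock, client

    return factory


with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    s.settimeout(2)
    port = s.getsockname()[1]

    def connect():
        with socket.create_connection(("127.0.0.1", port)) as c:
            c.sendall(b"hello")

    accept = make_accept(s, connect)
    sock, client = accept()
    with sock:
        data = b""
        while len(data) < 5:
            data += sock.recv(5 - len(data))
    client.join()

assert data == b"hello"
```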
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
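For reviewers cross-checking the SCRAM helpers above: Hi() from RFC 5802 Section 2.2 is exactly PBKDF2 with HMAC-SHA-256 as the PRF, so a stdlib-only reimplementation can be verified against hashlib.pbkdf2_hmac (the b"secret"/b"12345"/4096 inputs here are arbitrary):

```python
import hashlib
import hmac


def hmac_256(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def h_i(data, salt, i):
    # U1 = HMAC(str, salt + INT(1)); Hi = U1 XOR U2 XOR ... XOR Ui
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc


salted = h_i(b"secret", b"12345", 4096)
assert salted == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 4096)
```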
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..dd047423de
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1920 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
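The wire layout that get_auth_value() parses comes from RFC 7628 Section 3.1: a gs2 header, a ^A (0x01) separator, key=value pairs each terminated by ^A, and a final ^A ending the list. A standalone sketch with an illustrative token value:

```python
# OAUTHBEARER initial client response, per RFC 7628 Section 3.1.
# The token value here is illustrative only.
initial = b"n,,\x01auth=Bearer sometoken\x01\x01"

kvpairs = initial.split(b"\x01")
assert kvpairs[0] == b"n,,"        # gs2 header: no channel binding, no authzid
assert kvpairs[-2:] == [b"", b""]  # terminating empty kvpair

# maxsplit=1: the value itself may contain '=' characters.
key, value = kvpairs[1].split(b"=", 1)
assert key == b"auth"
assert value == b"Bearer sometoken"
```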
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the return value when the test installs no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count this attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
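The polling rules this test exercises come from RFC 8628: the client's retry interval defaults to 5 seconds when the device authorization response omits it, and a `slow_down` error adds 5 seconds to the current interval. A standalone sketch of that logic (the function name is illustrative, not part of the patch):

```python
def next_poll_interval(current=None, error=None):
    """Compute the device-flow polling interval per RFC 8628.

    current: the interval from the device authorization response, or None
    if the field was omitted (in which case 5 seconds is assumed).
    error: the token endpoint error, if any; "slow_down" adds 5 seconds.
    """
    interval = 5 if current is None else current
    if error == "slow_down":
        interval += 5
    return interval

assert next_poll_interval() == 5                # omitted interval
assert next_poll_interval(1) == 1               # explicit interval honored
assert next_poll_interval(1, "slow_down") == 6  # slow_down backoff
```

These are the same expectations the test encodes in `expected_retry_interval`.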
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
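The asynchronous test below uses this fixture for the classic self-pipe wakeup trick: a timer thread writes one byte to the pipe, and the waiting side polls the read end until the byte arrives, then drains it. A minimal, isolated sketch (assuming POSIX pipes, as the patch is Linux-only):

```python
import os
import select
import threading

# Self-pipe wakeup sketch: a timer thread writes one byte, and the waiting
# side selects on the read end until the byte is ready.
rfd, wfd = os.pipe()
try:
    threading.Timer(0.05, os.write, args=(wfd, b"\0")).start()

    # Wait (with a generous timeout) for the wakeup byte.
    ready, _, _ = select.select([rfd], [], [], 5)

    # Drain the byte so a subsequent wait starts from a clean state.
    received = os.read(rfd, 1) if ready else b""
finally:
    os.close(rfd)
    os.close(wfd)
```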
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one. Not particularly efficient,
+    but easier to read and maintain than one hand-built pattern.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
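A quick usage sketch of the helper above (restated here so the example is self-contained):

```python
import re

def alt_patterns(*patterns):
    # Same behavior as the helper above: OR the alternatives together, each
    # wrapped in its own group.
    return "|".join(f"({p})" for p in patterns)

pat = alt_patterns("cat", "dog")
assert pat == "(cat)|(dog)"
assert re.search(pat, "hotdog") is not None
assert re.search(pat, "bird") is None
```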
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the bad-JSON-schema tests below
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
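The skip logic above compares with `type()` rather than `isinstance()` because `bool` is a subclass of `int` in Python: with `isinstance()`, the boolean case would be wrongly skipped as "correct type" for integer fields like `interval`. A tiny demonstration:

```python
# bool is a subclass of int, so isinstance() cannot distinguish False from
# a "real" integer value; an exact type() comparison can.
assert isinstance(False, int) is True
assert (type(False) == int) is False
assert (type(4) == int) is True
```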
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to respond to the error
+            # challenge with a dummy ^A (0x01) byte.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
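For context, RFC 8628 specifies that a `slow_down` error means "add five seconds to the polling interval", which is what makes a near-INT_MAX interval dangerous: the increment can wrap. A minimal sketch of overflow-safe interval arithmetic (the helper name and error text here are illustrative, not part of the patch):

```python
INT_MAX = 2**31 - 1  # matches INT_MAX in limits.h on common platforms

def next_interval(interval, slow_down=False):
    """Per RFC 8628, slow_down adds 5 seconds to the polling interval.
    Refuse to wrap past INT_MAX rather than silently overflowing."""
    if slow_down:
        if interval > INT_MAX - 5:
            raise OverflowError("slow_down interval overflow")
        interval += 5
    return interval
```

The test above exercises exactly this edge: a server that starts at INT_MAX and immediately says `slow_down` must produce an error, not a wrapped (negative) interval.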
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG", raising=False)
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
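As background for the failure-path tests above: in the RFC 7628 OAUTHBEARER flow, the server signals failure with a JSON "challenge" (a `status` field, optionally pointing at the issuer's discovery document), and the client then sends a single ^A (0x01) byte before the exchange terminates. A hedged sketch of both payloads (these helper names are illustrative, not part of the patch):

```python
import json

def make_error_challenge(status, discovery_uri=None):
    """Build an RFC 7628-style server error challenge as a JSON document."""
    doc = {"status": status}
    if discovery_uri is not None:
        # Optional pointer to the issuer's OpenID discovery document.
        doc["openid-configuration"] = discovery_uri
    return json.dumps(doc).encode("utf-8")

# The client's reply to an error challenge is a single dummy 0x01 byte.
DUMMY_CLIENT_RESPONSE = b"\x01"

challenge = make_error_challenge(
    "invalid_token", "https://example.org/.well-known/openid-configuration"
)
```

This is the same shape the tests build by hand via `pq3.send(..., SASLContinue, body=...)` and then assert on with the `\x01` PasswordMessage check.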
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
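The packing above mirrors the wire format: the major version occupies the high 16 bits and the minor the low 16 bits, so protocol 3.0 is 0x00030000 (196608 decimal). A quick standalone illustration of the same arithmetic:

```python
def protocol(major, minor):
    # Same packing as pq3.protocol(): major in the high 16 bits,
    # minor in the low 16 bits.
    return (major << 16) | minor

# The halves can be recovered with a shift and a mask.
def unpack_protocol(version):
    return version >> 16, version & 0xFFFF
```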
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
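The fixed 8-byte overhead in the Startup struct above is the 4-byte length word plus the 4-byte protocol word; the payload is the NUL-terminated key/value list with a trailing NUL. A hand-rolled sketch of the same framing in plain stdlib (independent of construct; the function name is illustrative):

```python
import struct

def build_startup(params, proto=(3 << 16) | 0):
    """Frame a v3 startup packet: int32 length, int32 protocol,
    then key\\0value\\0 pairs with a final terminating NUL."""
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00" + v.encode("utf-8") + b"\x00"
    payload += b"\x00"  # list terminator
    # The length word counts itself and the protocol word: payload + 8.
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice", "database": "postgres"})
```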
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
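As the comment above notes, PasswordMessage is overloaded, so the SASLInitialResponse framing (mechanism name, NUL, signed int32 length with -1 meaning "no data", then the initial client response) has to be applied explicitly. A stdlib sketch of that framing, under the same -1 convention (the helper name is illustrative):

```python
import struct

def sasl_initial_response(mechanism, data=None):
    """mechanism + NUL, then int32 length (-1 if no data) and the data."""
    header = mechanism.encode("ascii") + b"\x00"
    if data is None:
        return header + struct.pack("!i", -1)
    return header + struct.pack("!i", len(data)) + data

msg = sasl_initial_response("OAUTHBEARER", b"n,,\x01auth=Bearer tok\x01\x01")
```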
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += 16
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
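As an aside for reviewers: the SSLRequest that tls_handshake() sends above is just a startup packet carrying the special "protocol version" 1234.5679, after which the server answers with a single byte (`S` to proceed with TLS, `N` to refuse). A minimal sketch of the bytes on the wire; the `ssl_request_packet` helper name is ours, not part of pq3:

```python
import struct

def ssl_request_packet():
    # 4-byte length (8, counting itself) followed by the SSLRequest code
    # (1234 << 16) | 5679 == 80877103, both in network byte order.
    code = (1234 << 16) | 5679
    return struct.pack("!II", 8, code)

print(ssl_request_packet().hex())  # -> 0000000804d2162f
```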
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
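The factory-plus-ExitStack arrangement in the connect fixture can be sketched in isolation: every resource the factory creates is registered with the stack, and all of them are released in LIFO order when the stack unwinds. FakeSock and factory here are illustrative stand-ins, not part of the test suite:

```python
import contextlib

closed = []

class FakeSock:
    # Stand-in for a real socket; records when it is closed.
    def __init__(self, n):
        self.n = n

    def close(self):
        closed.append(self.n)

with contextlib.ExitStack() as stack:
    def factory(n):
        sock = FakeSock(n)
        stack.callback(sock.close)  # ExitStack owns the cleanup
        return sock

    factory(1)
    factory(2)

print(closed)  # [2, 1]: teardown runs in reverse creation order
```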
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
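For context on what the tests below put on the wire: the OAUTHBEARER initial client response (RFC 7628, section 3.1) is a gs2 header followed by key/value pairs separated by Ctrl-A (0x01) bytes and terminated by a double 0x01. A hedged sketch, with a helper name of our own choosing:

```python
def oauthbearer_initial_response(token):
    # gs2 header "n,," (no channel binding, no authzid), then a single
    # auth=Bearer key/value pair; \x01 separates pairs and a trailing
    # \x01\x01 terminates the message.
    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"

print(oauthbearer_initial_response(b"abcd"))
```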
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..8fed4a9716
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's expected behavior.
+    The settings are reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
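+# The error flow exercised above follows RFC 7628: after a rejected token the
+# server sends a SASLContinue "challenge" carrying a JSON error body, and the
+# client must acknowledge it with a single kvsep byte before the server fails
+# the connection. A minimal sketch of that client-side step (the variable
+# names are illustrative, not part of the patch):

```python
import json

# Parse the server's RFC 7628 error "challenge" (carried in SASLContinue).
challenge_body = b'{"status": "invalid_token"}'  # example body from the server
body = json.loads(challenge_body)
assert body["status"] == "invalid_token"

# Per RFC 7628, the client acknowledges the error with a dummy response
# consisting of exactly one kvsep (0x01) byte; anything else is a protocol
# violation ("did not send a kvsep response" in the tests above).
dummy_response = b"\x01"
```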
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
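# The well-formed initial client response that the malformed cases above are
# contrasted against (and which test_oauth_empty_initial_response builds by
# hand) can be sketched as a small encoder; `build_initial_response` is an
# illustrative helper, not part of the patch:

```python
KVSEP = b"\x01"


def build_initial_response(token: bytes, authzid: bytes = b"") -> bytes:
    """Encode an RFC 7628 OAUTHBEARER initial client response.

    Layout: GS2 header ("n" = no channel binding, optional authzid), a
    kvsep, one auth key/value pair, and a double-kvsep terminator.
    """
    gs2_header = b"n," + authzid + b","
    return gs2_header + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP


# Matches the hand-built message used in the tests above.
assert build_initial_response(b"sometoken") == b"n,,\x01auth=Bearer sometoken\x01\x01"
```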
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
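# The _DebugStream tests above pin an exact hex-dump layout: 16 bytes per
# row, space-separated hex padded to 47 columns, a tab, then printable ASCII.
# That layout can be reproduced with a short standalone formatter (a sketch;
# `hexdump` is an illustrative helper, not the patch's implementation):

```python
def hexdump(data: bytes, prefix: str = "< ") -> str:
    """Format bytes in the row layout the _DebugStream tests expect."""
    lines = []
    for off in range(0, len(data), 16):
        chunk = data[off : off + 16]
        # 16 bytes -> 47 hex columns; shorter rows are space-padded.
        hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(47)
        text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
        lines.append(f"{prefix}{off:04x}:\t{hexpart}\t{text}\n")
    return "".join(lines) + "\n"  # flush_debug() ends each dump with a blank line


# Reproduces the first expected block in test_DebugStream_read.
expected = (
    "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
    "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
    "\n"
)
assert hexdump(b"abcdefghijklmnopqrstu") == expected
```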
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
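+# The parse vectors above all follow the protocol's length-field conventions:
+# a Startup len counts the whole packet (including the 4-byte len itself),
+# while a regular message's len counts everything after the 1-byte type.
+# A quick cross-check against two of the raw vectors (helper names are
+# illustrative, not part of the patch):

```python
import struct

def startup_len(rest: bytes) -> int:
    # len field (4 bytes) plus everything after it (proto + payload).
    return 4 + len(rest)

def message_len(payload: bytes) -> int:
    # len field (4 bytes) plus payload; the 1-byte type is excluded.
    return 4 + len(payload)

# Startup vector: len=16 covers len + proto + 8-byte payload.
raw = b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh"
assert struct.unpack("!I", raw[:4])[0] == startup_len(raw[4:])  # 16

# Regular-message vector: len=8 covers len + 4-byte payload, not the type.
msg = b"*\x00\x00\x00\x08abcd"
assert struct.unpack("!I", msg[1:5])[0] == message_len(msg[5:])  # 8
```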
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
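+# As a cross-check of the Plaintext record layout above, the 5-byte TLS
+# record header can be parsed with the standard library alone. This is a
+# minimal sketch (the helper name is illustrative, not part of this module);
+# the field widths come from RFC 8446, section 5.1:
+#
+#     import struct
+#
+#     def parse_record_header(raw: bytes) -> dict:
+#         # TLSPlaintext: 1-byte content type, 2-byte legacy version,
+#         # 2-byte length, then `length` fragment bytes (network byte order).
+#         content_type, version, length = struct.unpack("!BHH", raw[:5])
+#         return dict(type=content_type, legacy_record_version=version,
+#                     length=length, fragment=raw[5:5 + length])
+#
+#     # A handshake record (type 22), legacy version 0x0301, 4-byte fragment.
+#     rec = parse_record_header(b"\x16\x03\x01\x00\x04abcd")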
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1
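
For readers following the DataRow test vectors in the patch above: the v3
wire framing those tests assert can be reproduced with nothing but the
standard library. A minimal sketch (the helper name is mine, not part of the
pq3 module):

```python
import struct

def build_data_row(columns) -> bytes:
    # DataRow: 'D', int32 length (counts itself but not the type byte),
    # int16 column count, then per column an int32 length plus the value.
    # A None column is encoded as length -1 (SQL NULL) with no value bytes.
    body = struct.pack("!h", len(columns))
    for col in columns:
        if col is None:
            body += struct.pack("!i", -1)
        else:
            body += struct.pack("!i", len(col)) + col
    return b"D" + struct.pack("!i", len(body) + 4) + body
```

Building `[b"abcd"]` with this sketch yields the same bytes as the
"implied len/type for DataRow" test case.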

#137Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#135)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 15.10.24 20:10, Jacob Champion wrote:

On Tue, Oct 15, 2024 at 11:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:

I've discovered that starting from 0785d1b8b,
make check -C src/bin/pg_combinebackup
fails under Valgrind, with the following diagnostics:

Yep, sorry for that (and thanks for the report!). It's currently
tracked over at [1], but I should have mentioned it here. The patch I
used is attached, renamed to not stress out the cfbot.

I have committed this fix.

#138Antonin Houska
ah@cybertec.at
In reply to: Antonin Houska (#131)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Antonin Houska <ah@cybertec.at> wrote:

I'd like to play with the code a bit and provide some review before or during
the next CF. That will probably generate some more questions.

This is the 1st round, based on reading the code. I'll continue paying
attention to the project and possibly post some more comments in the future.

* Information on the new method should be added to pg_hba.conf.sample.

* Is it important that fe_oauth_state.token also contains the "Bearer"
keyword? I'd expect only the actual token value here. The keyword can be
added to the authentication message w/o storing it.

The same applies to the 'token' structure in fe-auth-oauth-curl.c.

* Does PQdefaultAuthDataHook() have to be declared extern and exported via
libpq/exports.txt? Even if the user were interested in it, they can use
PQgetAuthDataHook() to get the pointer (unless they have already installed a
custom hook).

* I wonder if the hooks (PQauthDataHook) can be implemented in a separate
diff. Couldn't the first version of the feature be committable without these
hooks?

* Instead of allocating an instance of PQoauthBearerRequest, assigning it to
fe_oauth_state.async_ctx, and eventually having to call its cleanup()
function, wouldn't it be simpler to embed PQoauthBearerRequest as a member
in fe_oauth_state?

* oauth_validator_library is defined as PGC_SIGHUP - is that intentional?

And regardless, the library appears to be loaded by every backend during
authentication. Why isn't it loaded by the postmaster, like the libraries listed in
shared_preload_libraries? fork() would then ensure that the backends do have
the library in their address space.

* pg_fe_run_oauth_flow()

When we get here for the first time:
case OAUTH_STEP_TOKEN_REQUEST:
if (!handle_token_response(actx, &state->token))
goto error_return;

the user hasn't been prompted yet so ISTM that the first token request must
always fail. It seems more logical if the prompt is shown to the user before
sending the token request to the server. (Although the user probably won't
be that fast to make the first request succeed, so consider this just a
hint.)

* As far as I understand, the following comment would make sense:

diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f943a31cc08..97259fb5654 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -518,6 +518,7 @@ oauth_exchange(void *opaq, bool final,
        switch (state->state)
        {
                case FE_OAUTH_INIT:
+                       /* Initial Client Response */
                        Assert(inputlen == -1);

if (!derive_discovery_uri(conn))

Or, doesn't the FE_OAUTH_INIT branch of the switch statement actually fit
better into oauth_init()? A side-effect of that might be (I only judge from
reading the code, haven't tried to implement this suggestion) that
oauth_exchange() would no longer return the SASL_ASYNC status. Furthermore,
I'm not sure if pg_SASL_continue() can receive the SASL_ASYNC at all. So I
wonder if moving that part from oauth_exchange() to oauth_init() would make
the SASL_ASYNC state unnecessary.

* Finally, the user documentation is almost missing. I say that just for the
sake of completeness, you obviously know it. (On the other hand, I think
that the lack of user information might discourage some people from running
the code and testing it.)

--
Antonin Houska
Web: https://www.cybertec-postgresql.com
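
[Editor's note on the "Bearer" keyword question above: RFC 7628 requires the
scheme keyword inside the kvpair that carries the token, so the wire message
must contain "Bearer" even if libpq chose to store only the bare token. A
minimal sketch of the client-first message shape — the function name is
illustrative, not libpq code:]

```python
def oauthbearer_client_first(token: str) -> bytes:
    # RFC 7628: gs2 header, then 0x01-delimited key/value pairs, then a
    # final 0x01. The auth value carries the HTTP-style "Bearer" scheme.
    gs2_header = "n,,"
    kvpairs = f"auth=Bearer {token}\x01"
    return (gs2_header + "\x01" + kvpairs + "\x01").encode("ascii")
```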

#139Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#136)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 16 Oct 2024, at 19:21, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

v31 folds in the remainder of the review patch (hooray!) and makes a
change to the 401 handling. If the server doesn't tell the client why
a token request failed, but we see that we got a 401 Unauthorized, the
error message now suggests that the oauth_client_secret setting is
unacceptable.

+1

The attached diff expands the documentation a little to chip away at the
non-trivial task of documenting this feature. I will keep working on this part
as well as reviewing the rest of the patchset. A few more comments are added as
well but nothing groundbreaking. I've attached your v31 vanilla as well to
keep the CFBot happy.

A few small comments on things in the attached review 0005 comment:

-    (errcode(ERRCODE_PROTOCOL_VIOLATION),
-     errmsg("client selected an invalid SASL authentication mechanism")));
+    errcode(ERRCODE_PROTOCOL_VIOLATION),
+    errmsg("client selected an invalid SASL authentication mechanism"));
Might be a nitpick, but in new code I think we should use the new style of
ereport() calls without extra parens around the aux functions.

In validate() it seems to me we should clear out ret->authn_id on failure, as
a belt-and-suspenders measure. Fixed by calling explicit_bzero on it in the
error path.

parse_hba_line() didn't enforce mandatory parameters, and one could configure
both a usermap and skipping of the usermap at the same time, which seems
nonsensical.

src/test/modules/oauth_validator didn't check for PG_TEST_EXTRA, which it should
since it opens a server. We should also not test capabilities with if-elsif
since they are really separate things.

A few smaller bits and pieces of cleanup are also included.

--
Daniel Gustafsson

Attachments:

v31-0001-Make-SASL-max-message-length-configurable.patchapplication/octet-stream; name=v31-0001-Make-SASL-max-message-length-configurable.patch; x-unix-mode=0644Download
From 696b0b2af4015dbfcbbfaee75fe6060a733d738c Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 7 Oct 2024 16:56:26 -0700
Subject: [PATCH v31 1/4] Make SASL max message length configurable

The proposed OAUTHBEARER SASL mechanism will need to allow larger
messages in the exchange, since tokens are sent directly by the client.
Move this limit into the pg_be_sasl_mech struct so that it can be
changed per-mechanism.
---
 src/backend/libpq/auth-sasl.c  | 10 +---------
 src/backend/libpq/auth-scram.c |  4 +++-
 src/include/libpq/sasl.h       | 11 +++++++++++
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/src/backend/libpq/auth-sasl.c b/src/backend/libpq/auth-sasl.c
index 08b24d90b4..4039e7fa3e 100644
--- a/src/backend/libpq/auth-sasl.c
+++ b/src/backend/libpq/auth-sasl.c
@@ -20,14 +20,6 @@
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 
-/*
- * Maximum accepted size of SASL messages.
- *
- * The messages that the server or libpq generate are much smaller than this,
- * but have some headroom.
- */
-#define PG_MAX_SASL_MESSAGE_LENGTH	1024
-
 /*
  * Perform a SASL exchange with a libpq client, using a specific mechanism
  * implementation.
@@ -103,7 +95,7 @@ CheckSASLAuth(const pg_be_sasl_mech *mech, Port *port, char *shadow_pass,
 
 		/* Get the actual SASL message */
 		initStringInfo(&buf);
-		if (pq_getmessage(&buf, PG_MAX_SASL_MESSAGE_LENGTH))
+		if (pq_getmessage(&buf, mech->max_message_length))
 		{
 			/* EOF - pq_getmessage already logged error */
 			pfree(buf.data);
diff --git a/src/backend/libpq/auth-scram.c b/src/backend/libpq/auth-scram.c
index 03ddddc3c2..e4be4d499e 100644
--- a/src/backend/libpq/auth-scram.c
+++ b/src/backend/libpq/auth-scram.c
@@ -113,7 +113,9 @@ static int	scram_exchange(void *opaq, const char *input, int inputlen,
 const pg_be_sasl_mech pg_be_scram_mech = {
 	scram_get_mechanisms,
 	scram_init,
-	scram_exchange
+	scram_exchange,
+
+	PG_MAX_SASL_MESSAGE_LENGTH
 };
 
 /*
diff --git a/src/include/libpq/sasl.h b/src/include/libpq/sasl.h
index 7a1f970cca..3f2c02b8f2 100644
--- a/src/include/libpq/sasl.h
+++ b/src/include/libpq/sasl.h
@@ -26,6 +26,14 @@
 #define PG_SASL_EXCHANGE_SUCCESS		1
 #define PG_SASL_EXCHANGE_FAILURE		2
 
+/*
+ * Maximum accepted size of SASL messages.
+ *
+ * The messages that the server or libpq generate are much smaller than this,
+ * but have some headroom.
+ */
+#define PG_MAX_SASL_MESSAGE_LENGTH	1024
+
 /*
  * Backend SASL mechanism callbacks.
  *
@@ -127,6 +135,9 @@ typedef struct pg_be_sasl_mech
 							 const char *input, int inputlen,
 							 char **output, int *outputlen,
 							 const char **logdetail);
+
+	/* The maximum size allowed for client SASLResponses. */
+	int			max_message_length;
 } pg_be_sasl_mech;
 
 /* Common implementation for auth.c */
-- 
2.34.1

v31-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v31-0002-libpq-add-OAUTHBEARER-SASL-mechanism.patch; x-unix-mode=0644Download
From 80afe8b10773a3aee1235063ee45d1209d2299e0 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Wed, 11 Sep 2024 09:41:29 +0200
Subject: [PATCH v31 2/4] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2305 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  666 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   86 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/pgindent               |   14 +
 src/tools/pgindent/typedefs.list          |   11 +
 26 files changed, 3681 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 3a577e463b..b41539bdb1 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12922,6 +12971,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13947,6 +14080,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 55f6c46d33..05e5bbca11 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1395,6 +1415,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1586,6 +1611,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..ffec0431e3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9961,6 +9998,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index 58e67975e8..7518520ec9 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (the version shipped with RHEL 8, and
+  # the first to allow explicitly setting TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3042,6 +3071,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3714,6 +3744,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 427030f31a..bd27d9279d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -700,6 +703,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..831677119e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2305 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, since media type parameters may follow the expected prefix.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either. Both entries are marked REQUIRED, but since they
+		 * back the same pointer, the check is satisfied if either one is
+		 * present.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving error information from libcurl, the
+		 * function only takes effect when CURLOPT_VERBOSE has been set so
+		 * make sure the order is kept.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern libcurl versions,
+	 * but its replacement, CURLOPT_PROTOCOLS_STR, didn't show up until 7.85.0.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each data chunk
+ * is defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* Abort the transfer if the response grows past our size limit. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * If we ran out of memory while appending the data, signal an error to
+	 * abort the transfer.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf. The response contains the
+ * nonces we need to poll the request status later; finish_device_authz() will
+ * grab them.
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * Some implementations reportedly use 403 for error returns, in
+	 * violation of the specification. For now we stick to the spec, but we
+	 * may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which will have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* FALLTHROUGH */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
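Aside for reviewers: the slow_down handling in handle_token_response() follows RFC 8628, Sec. 3.5, where each slow_down error permanently adds five seconds to the polling interval. A standalone sketch of that rule (the helper name is mine, not part of the patch), checking for overflow before the addition rather than after:

```c
#include <limits.h>
#include <stdbool.h>

/*
 * Per RFC 8628, Sec. 3.5, a slow_down error permanently adds five seconds
 * to the client's polling interval. Returns false if the increment would
 * overflow, mirroring the "slow_down interval overflow" bail-out in
 * handle_token_response().
 */
static bool
apply_slow_down(int *interval)
{
	if (*interval > INT_MAX - 5)
		return false;			/* would overflow; treat as permanent error */
	*interval += 5;
	return true;
}
```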
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..73718ac3b1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI with which to request a
+				 * token, ask the server to provide one by sending an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
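Aside for reviewers: client_initial_response() builds the OAUTHBEARER initial response from RFC 7628, Sec. 3.1: a GS2 header ("n,,"), then "auth=<scheme> <token>", delimited by 0x01 (kvsep) bytes. A standalone equivalent using plain malloc/snprintf instead of PQExpBuffer (the helper name is mine, not part of the patch):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Build the OAUTHBEARER client initial response for a bearer token, per
 * RFC 7628, Sec. 3.1. The result is "n,,<SEP>auth=Bearer <token><SEP><SEP>"
 * where <SEP> is the 0x01 key/value separator. Caller must free().
 */
static char *
oauthbearer_initial_response(const char *bearer_token)
{
	const char	kvsep = '\x01';
	size_t		len = strlen("n,,") + 1 + strlen("auth=Bearer ")
		+ strlen(bearer_token) + 2 + 1;		/* two trailing seps plus NUL */
	char	   *resp = malloc(len);

	if (!resp)
		return NULL;
	snprintf(resp, len, "n,,%cauth=Bearer %s%c%c",
			 kvsep, bearer_token, kvsep, kvsep);
	return resp;
}
```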
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
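Aside for reviewers: derive_discovery_uri() simply appends the OIDC .well-known path to the configured issuer. A standalone sketch of that derivation (helper name mine, not part of the patch); note that, like the patch, it does not strip a trailing slash from the issuer first:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Derive an OpenID Connect discovery URI from an issuer identifier by
 * appending the standard .well-known suffix. Returns a malloc'd string,
 * or NULL on allocation failure. Caller must free().
 */
static char *
well_known_discovery_uri(const char *issuer)
{
	static const char suffix[] = "/.well-known/openid-configuration";
	size_t		len = strlen(issuer) + sizeof(suffix);	/* sizeof counts NUL */
	char	   *uri = malloc(len);

	if (!uri)
		return NULL;
	snprintf(uri, len, "%s%s", issuer, suffix);
	return uri;
}
```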
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 4eecf53a15..83bdab1f40 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 57de1acff3..450c79d5f0 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1717,6 +1721,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1781,6 +1786,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1941,11 +1947,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3459,6 +3468,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

v31-0003-backend-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v31-0003-backend-add-OAUTHBEARER-SASL-mechanism.patch; x-unix-mode=0644Download
From 66505336d63a79e697c352b44e07fafdc5bcd909 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v31 3/4] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user (optionally), decide whether the token
      permits access as the requested role, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module
      may return the authenticated ID and set the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
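
  Putting those options together, a pg_hba.conf entry might look like
  the following (the issuer URL and scope here are invented
  placeholders, not working values):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://accounts.example.com" scope="openid email" trust_validator_authz=1
```

  With trust_validator_authz=1 as shown, the role decision is handed
  entirely to the validator module and standard user mapping is skipped.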

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 configure                                     |   5 +
 configure.ac                                  |   4 +
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/config.sgml                      |  13 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |  96 +++
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 681 ++++++++++++++++++
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  33 +
 src/test/modules/oauth_validator/meson.build  |  33 +
 .../modules/oauth_validator/t/001_server.pl   | 314 ++++++++
 .../modules/oauth_validator/t/oauth_server.py | 337 +++++++++
 src/test/modules/oauth_validator/validator.c  |  97 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 29 files changed, 1872 insertions(+), 36 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index b41539bdb1..c13ba42941 100755
--- a/configure
+++ b/configure
@@ -8444,6 +8444,11 @@ $as_echo "#define USE_OAUTH 1" >>confdefs.h
 
 $as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
 
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
 elif test x"$with_oauth" != x"no"; then
   as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
 fi
diff --git a/configure.ac b/configure.ac
index 05e5bbca11..a4e22e2dde 100644
--- a/configure.ac
+++ b/configure.ac
@@ -929,6 +929,10 @@ fi
 if test x"$with_oauth" = x"curl"; then
   AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
   AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
 elif test x"$with_oauth" != x"no"; then
   AC_MSG_ERROR([--with-oauth must specify curl])
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 934ef5e469..f089a8ff4c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,19 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..c9914519fc
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,96 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
+ <para>
+  OAuth validation modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared
+   library, using the value of <xref linkend="guc-oauth-validator-library"/>
+   as the library base name. The normal library search path is used to locate
+   the library. To provide the validator callbacks and to indicate that the
+   library is an OAuth validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must have server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the
+   others are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is the only required
+    callback. It is passed the bearer token and the role that the connection
+    is attempting to assume, and must return a result indicating whether the
+    token was accepted, along with the authenticated user identity, if any.
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
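To make the interface documented above concrete, here is a minimal, self-contained sketch of a validator module. The typedefs are reproduced locally so the snippet compiles on its own; a real module would include the server headers (e.g. libpq/oauth.h) instead, and `my_validate` plus the hard-coded token are purely illustrative:

```c
/*
 * Standalone sketch of an OAuth validator module. The declarations below
 * mirror the documentation; in a real module they come from the server
 * headers rather than being redefined here.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct ValidatorModuleState
{
	void	   *private_data;	/* module-private state, if any */
} ValidatorModuleState;

typedef struct ValidatorModuleResult
{
	int			authorized;		/* did the token check out? */
	const char *authn_id;		/* authenticated identity, or NULL */
} ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
													   const char *token,
													   const char *role);

typedef struct OAuthValidatorCallbacks
{
	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

static ValidatorModuleResult my_result;

/* Accept exactly one hard-coded token; authenticate as the requested role. */
static ValidatorModuleResult *
my_validate(ValidatorModuleState *state, const char *token, const char *role)
{
	(void) state;				/* no module state in this sketch */

	my_result.authorized = (strcmp(token, "secret-token") == 0);
	my_result.authn_id = my_result.authorized ? role : NULL;
	return &my_result;
}

/* Only validate_cb is required; startup_cb/shutdown_cb stay NULL. */
static const OAuthValidatorCallbacks my_callbacks = {
	.validate_cb = my_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &my_callbacks;
}
```

The returned pointer has static storage duration, matching the "server lifetime" requirement in the docs.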
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..321d4590a3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -264,6 +264,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..dea973247a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,681 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}

+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	pg_unreachable();
+	return NULL;
+}
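For anyone poking at the wire format, the kvpair grammar quoted in the comment above can be exercised outside the server with a short standalone parser. This is a simplification (no duplicate-key or character-set checks, and it assumes the gs2-header and its kvsep were already consumed, matching the state of the parser above); names are illustrative:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define KVSEP '\001'

/*
 * Return a malloc'd copy of the "auth" kvpair's value, or NULL if the
 * message is malformed or contains no auth key.
 */
static char *
find_auth_value(const char *kvpairs)
{
	const char *pos = kvpairs;

	while (*pos)
	{
		const char *end = strchr(pos, KVSEP);
		const char *sep;

		if (end == NULL)
			return NULL;		/* unterminated kvpair */
		if (pos == end)
			return NULL;		/* empty kvpair: end of list, no auth found */

		sep = memchr(pos, '=', end - pos);
		if (sep && (size_t) (sep - pos) == strlen("auth")
			&& strncmp(pos, "auth", 4) == 0)
		{
			size_t		len = end - (sep + 1);
			char	   *out = malloc(len + 1);

			memcpy(out, sep + 1, len);
			out[len] = '\0';
			return out;
		}

		pos = end + 1;			/* move to the next pair */
	}
	return NULL;
}
```

So a full client-first-message looks like `n,,` followed by kvsep, the kvpairs, and a final kvsep, e.g. `"n,,\001auth=Bearer abc123\001\001"`.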
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
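The resulting error body follows RFC 7628 Sec. 3.2.2. A quick sketch of the same formatting, assuming (as the comment above notes) that the issuer and scope need no JSON escaping:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Build the OAUTHBEARER failure body: status, discovery document URI, and
 * the scopes the client should request. Escaping is omitted in this sketch.
 */
static int
build_error_response(char *buf, size_t len, const char *issuer, const char *scope)
{
	return snprintf(buf, len,
					"{ \"status\": \"invalid_token\", "
					"\"openid-configuration\": \"%s/.well-known/openid-configuration\", "
					"\"scope\": \"%s\" }",
					issuer, scope);
}
```

A client uses the `openid-configuration` URI to discover the device authorization endpoint and retry with a real token.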
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
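The b64token check above can be mirrored in a standalone helper for testing. Note that this sketch follows the quoted ABNF strictly and rejects a token consisting only of "=" characters, which the strspn()-based loop above would currently let through (strspn() may match zero characters before the '=' run):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* RFC 6750 Sec. 2.1: b64token = 1*( ALPHA / DIGIT / "-" / "." / "_" / "~" / "+" / "/" ) *"=" */
static bool
is_valid_b64token(const char *token)
{
	static const char allowed[] =
		"abcdefghijklmnopqrstuvwxyz"
		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
		"0123456789-._~+/";
	size_t		span;

	span = strspn(token, allowed);
	if (span == 0)
		return false;			/* must start with at least one allowed char */

	while (token[span] == '=')
		span++;					/* trailing padding is allowed */

	return token[span] == '\0';	/* nothing may follow the padding */
}
```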
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	/* The docs promise that validate_cb is required; enforce it here. */
+	if (ValidatorCallbacks == NULL || ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must provide a validate_cb callback",
+						"OAuth validator", OAuthValidatorLibrary)));
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..735fd05373 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2066,8 +2069,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2454,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
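For reference while reviewing the HBA changes above, a hypothetical pg_hba.conf line exercising the new method and options might look like this (issuer and scope values are illustrative):

```
# OAuth with pseudonymous authorization: the validator decides role access,
# so no usermap is consulted.
host  all  all  0.0.0.0/0  oauth  issuer="https://oauth.example.org" scope="openid" trust_validator_authz=1
```

With trust_validator_authz unset, the map option can be combined with oauth to translate the validator-provided identity into a role, as with ident/peer/cert.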
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2c4cc8cd41..e91d211b7b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxTokenSize registry
+ * setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 831677119e..8747b0eb08 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -145,7 +145,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1945,6 +1945,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1973,13 +1976,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..3b8f057a26
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,314 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+elsif ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$common_connstr = "$common_connstr oauth_client_secret=12345";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => '12345'),
+	"oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer.
+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..fb731ed2e5
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,337 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send Authorization header")
+
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        expected_creds = f"{self.client_id}:{secret}"
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
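As a quick illustration of the test-parameter plumbing: the Perl connstr() helper and the server's do_POST() agree on a Base64-encoded JSON payload smuggled through oauth_client_id. The round trip looks like this (standalone sketch, not part of the patch):

```python
import base64
import json

# What the Perl side does: encode the magic test instructions as Base64(JSON)...
params = {"stage": "token", "retries": 2}
client_id = base64.b64encode(json.dumps(params).encode()).decode()

# ...and what do_POST() recovers on the server side before _check_authn().
decoded = json.loads(base64.b64decode(client_id))
assert decoded == params
print(decoded["stage"], decoded["retries"])  # → token 2
```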
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..d12c79e2a2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,97 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for serverside OAuth token validation callbacks
 *	  ("server-side", that is, as opposed to the libpq client support)
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 110b53ba0d..3761e2aa7a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+(When the test fails anyway, the full stderr is echoed via diag() so the
+failure can still be debugged from the logs.)
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 450c79d5f0..7c9e3fb701 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1722,6 +1722,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3074,6 +3075,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleResult
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
@@ -3670,6 +3673,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

Attachment: v31-0004-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 3c32d8f59cca8542f5f31cc179f0ef33e29bd567 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v31 4/4] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1920 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5639 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 7518520ec9..ce9a7d28f2 100644
--- a/meson.build
+++ b/meson.build
@@ -3385,6 +3385,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3546,6 +3549,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
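(The `$(PIP)` bootstrap rule above can be expressed in stdlib terms; a quick sketch, not part of the patch — `with_pip=False` is used here only to keep the demo fast, while the Makefile's venv does install pip:)

```python
# Sketch of what `python3 -m venv $(VENV)` sets up, via the stdlib venv
# module, including the bin/-vs-Scripts/ layout difference that the meson
# patch also has to handle.
import os
import tempfile
import venv

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "venv")
    venv.EnvBuilder(with_pip=False).create(path)
    # Windows virtualenvs use Scripts/, POSIX ones use bin/.
    bindir = "Scripts" if os.name == "nt" else "bin"
    have_bin = os.path.isdir(os.path.join(path, bindir))
```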
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
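(The accept()/ClientHandshake split above is the usual pattern for testing a blocking client against an in-process fake server; stripped of the psycopg2 and pq3 specifics, it looks roughly like this illustrative sketch:)

```python
# Skeleton of the fixture pattern: the client runs on a thread so the test
# can act as the server on the main thread without deadlocking. A real
# client would speak the wire protocol instead of a canned greeting.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server.settimeout(2)
_, port = server.getsockname()

def client():
    with socket.create_connection(("127.0.0.1", port), timeout=2) as c:
        c.sendall(b"startup")

t = threading.Thread(target=client)
t.start()  # safe: the server socket is already listening

sock, _ = server.accept()
with sock:
    data = b""
    while len(data) < 7:
        chunk = sock.recv(7 - len(data))
        if not chunk:
            break
        data += chunk

t.join(2)  # mirrors check_completed(): don't leak a hung client thread
```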
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
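(Sanity check on the helpers above: Hi(str, salt, i) from RFC 5802 Section 2.2 is exactly PBKDF2 with HMAC-SHA-256 and a single output block, so the hand-rolled loop can be cross-checked against the stdlib without the cryptography package. A sketch using the same salt/iteration values as test_scram:)

```python
# Hi() from RFC 5802 reimplemented with stdlib hmac/hashlib, then compared
# against hashlib.pbkdf2_hmac, which computes the same function.
import hashlib
import hmac as hmac_mod

def hmac_256(key, data):
    return hmac_mod.new(key, data, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def h_i(data, salt, i):
    # U1 = HMAC(key, salt || INT(1)); Un = HMAC(key, Un-1); result = U1 ^ ... ^ Ui
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc

ours = h_i(b"secret", b"12345", 2)
ref = hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```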
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..dd047423de
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1920 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the token itself may contain '='
+    assert key == b"auth"
+
+    return value
+
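A minimal, self-contained sketch of the message layout these helpers assert on, per RFC 7628, Sec. 3.1: a GS2 header, then ^A-separated key=value pairs, terminated by an empty pair. The sample token is made up.

```python
def parse_initial_response(msg: bytes) -> bytes:
    """Extract the auth value from an OAUTHBEARER initial client response."""
    kvpairs = msg.split(b"\x01")
    assert kvpairs[0] == b"n,,"        # GS2 header: no channel binding or authzid
    assert kvpairs[-2:] == [b"", b""]  # message ends with ^A^A
    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value

token = parse_initial_response(b"n,,\x01auth=Bearer abc.def\x01\x01")
assert token == b"Bearer abc.def"
```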
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
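For reference, a sketch (with a made-up discovery URL) of the failure body that fail_oauth_handshake() serializes into the SASLContinue challenge, following the error-response shape from RFC 7628, Sec. 3.2.2:

```python
import json

# Hypothetical failure payload: "status" carries the error code, and
# "openid-configuration" points the client at the provider's discovery document.
resp = {
    "status": "invalid_token",
    "openid-configuration": "https://issuer.example.org/.well-known/openid-configuration",
}
body = json.dumps(resp).encode("utf-8")

assert json.loads(body)["status"] == "invalid_token"
```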
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # default return value when the test installs no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
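A minimal sketch of the polling rules test_oauth_retry_interval exercises, per RFC 8628, Sec. 3.5: the client starts from the advertised interval (defaulting to 5 seconds when the device authorization response omits "interval"), a slow_down error adds five seconds on each occurrence, and authorization_pending leaves the interval unchanged.

```python
DEFAULT_INTERVAL = 5  # seconds, used when the response omits "interval"

def next_interval(current, error):
    """Compute the next polling interval after a token-endpoint error."""
    if error == "slow_down":
        return current + 5
    return current  # authorization_pending: retry at the same pace

interval = DEFAULT_INTERVAL
interval = next_interval(interval, "slow_down")
assert interval == 10
interval = next_interval(interval, "authorization_pending")
assert interval == 10
```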
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
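A quick illustration of alt_patterns() above, reimplemented with a join for brevity: each alternative is wrapped in its own group and the results are joined with "|".

```python
import re

def alt_patterns(*patterns):
    # Equivalent to the loop-based version: "(p1)|(p2)|..."
    return "|".join(f"({p})" for p in patterns)

pat = alt_patterns(r"foo \d+", r"bar")
assert pat == r"(foo \d+)|(bar)"
assert re.search(pat, "foo 42")
assert re.search(pat, "bar")
assert not re.search(pat, "baz")
```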
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json_schema()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest "startup" packet (special protocol version 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
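Since pq3.py leans heavily on construct, it may help reviewers to see the Startup struct's wire format spelled out: a 4-byte big-endian length (self-inclusive), a 4-byte protocol version, then NUL-terminated key/value strings ending with an empty string. A stdlib-only sketch of the same layout, independent of construct (the helper name is hypothetical):

```python
import struct


def build_startup(params, proto=(3 << 16) | 0):
    """Serialize a v3 startup packet: int32 length (including itself),
    int32 protocol version, then NUL-terminated key/value pairs followed
    by a terminating empty string."""
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00"
        payload += v.encode("utf-8") + b"\x00"
    payload += b"\x00"  # terminator for the key/value list
    return struct.pack("!ii", len(payload) + 8, proto) + payload


pkt = build_startup({"user": "alice", "database": "postgres"})
```

This is the same byte stream that Startup.build() produces for a protocol(3, 0) packet; construct just adds the declarative parsing direction on top.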
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
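To summarize the test validator's decision table for reviewers: reflect_role short-circuits everything and trusts the requested role; otherwise authorization requires a non-empty expected bearer that matches the presented token exactly, and an identity is reported only when set_authn_id is on. A Python restatement of test_validate() above (a sketch for review, not part of the patch):

```python
def validate(token, role, *, reflect_role=False, expected_bearer="",
             set_authn_id=False, authn_id=""):
    """Return (authorized, authn_id), mirroring oauthtest.c's test_validate().
    The GUC-backed settings are modeled as keyword arguments."""
    if reflect_role:
        # Ignore the bearer token entirely; use the requested role as the
        # authenticated identity.
        return True, role

    authorized = bool(expected_bearer) and token == expected_bearer
    return authorized, (authn_id if set_authn_id else None)
```

Note that, as in the C code, the identity is reported independently of the authorization result: set_authn_id attaches an authn_id even when the token check fails.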
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..8fed4a9716
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends the given lines of text to a file on disk.
+    When the context manager exits, the file is restored to its original
+    contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for the given user (oauth_ctx.authz_user by
+    default) and checks that the server responds by advertising OAUTHBEARER as
+    its only SASL mechanism.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
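For reference while reading the tests above: the initial responses they send follow RFC 7628 Section 3.1, which layers a GS2 header and Ctrl-A-delimited key/value pairs. A minimal standalone sketch of that encoding (a hypothetical helper, not part of the pq3 test suite):

```python
def oauthbearer_initial_response(token: str, authzid: str = None) -> bytes:
    """Build an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).

    Layout: GS2 header ("n," [ "a=" authzid ] ","), then \x01, then the
    "auth=Bearer <token>" key/value pair terminated by \x01, then a final
    \x01 ending the message.
    """
    gs2_header = "n," + (f"a={authzid}" if authzid else "") + ","
    kvpairs = f"auth=Bearer {token}\x01"
    return (gs2_header + "\x01" + kvpairs + "\x01").encode("ascii")
```

For a token "abc" this yields b"n,,\x01auth=Bearer abc\x01\x01", the same shape as the literal sent in test_oauth_empty_initial_response() above.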
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
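The empty-query exchange above is easy to check by hand: a v3 Query message is a 'Q' type byte, a self-inclusive int32 length, and a NUL-terminated query string. A sketch of the framing, independent of pq3:

```python
import struct

def query_message(sql: bytes) -> bytes:
    # 'Q' | int32 length (counts itself, not the type byte) | query | NUL
    payload = sql + b"\x00"
    return b"Q" + struct.pack("!i", len(payload) + 4) + payload
```

An empty query is therefore the six bytes b"Q\x00\x00\x00\x05\x00", which the server answers with EmptyQueryResponse ('I') and ReadyForQuery ('Z').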
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
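The expected strings above encode _DebugStream's dump format: a direction marker, a four-digit hex offset, the hex bytes padded to a 16-byte-wide column, and a printable-ASCII rendering. A hypothetical standalone formatter reproducing one such row (assuming the format stays at 16 bytes per line):

```python
def dump_line(direction: str, offset: int, chunk: bytes) -> str:
    """Format one hex dump row: "<dir> <offset>:\t<hex, padded>\t<ascii>\n"."""
    # 16 bytes at "xx " each, minus the trailing space, gives a 47-char column.
    hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(16 * 3 - 1)
    # Non-printable bytes render as '.', as in the "......" expectations above.
    ascii_part = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{direction} {offset:04x}:\t{hexpart}\t{ascii_part}\n"
```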
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
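As a cross-check of the DataRow vectors above: the payload is an int16 column count followed by, per column, an int32 length (-1 signalling NULL) and that many bytes. A minimal standalone parser (a sketch, not the construct-based pq3 definition):

```python
import struct

def parse_data_row(payload: bytes) -> list:
    """Parse a DataRow payload: int16 column count, then per column an
    int32 length (-1 meaning NULL) followed by that many bytes."""
    (ncols,) = struct.unpack_from("!H", payload, 0)
    pos, columns = 2, []
    for _ in range(ncols):
        (length,) = struct.unpack_from("!i", payload, pos)
        pos += 4
        if length == -1:
            columns.append(None)  # NULL column
        else:
            columns.append(payload[pos : pos + length])
            pos += length
    return columns
```

Running it over the "multiple columns" and "null columns" vectors above reproduces the expected column lists.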
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
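The Plaintext struct above mirrors the TLSPlaintext record layout from RFC 8446 Section 5.1: a content type byte, a two-byte legacy version, a two-byte length, and the fragment. Framing a record by hand is a one-liner (a sketch; 21 is the "alert" content type and (2, 40) a fatal handshake_failure alert):

```python
import struct

def tls_plaintext(content_type: int, fragment: bytes,
                  legacy_version: int = 0x0301) -> bytes:
    # type (1 byte) | legacy_record_version (2) | length (2) | fragment
    return struct.pack("!BHH", content_type, legacy_version, len(fragment)) + fragment

# A fatal handshake_failure alert wrapped in a plaintext record:
record = tls_plaintext(21, bytes([2, 40]))
```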
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1
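The first step of make_venv can be exercised on its own. A sketch of the venv-creation step using the stdlib `venv` module (the pip upgrade and requirements install from the script are omitted here, since both need network access; `make_bare_venv` is an illustrative name, not part of the patch):

```python
import os
import sys
import venv

def make_bare_venv(path):
    """Create a virtualenv as src/tools/make_venv does, minus the pip
    steps.  Returns the path to the venv's interpreter."""
    venv.EnvBuilder(with_pip=False).create(path)
    # Same platform split as the script: Scripts/ on Windows, bin/ elsewhere.
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    exe = "python.exe" if sys.platform == "win32" else "python3"
    return os.path.join(path, bindir, exe)
```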

v31-0005-v30-review-comments.patchapplication/octet-stream; name=v31-0005-v30-review-comments.patch; x-unix-mode=0644Download
From e77e769792a3280e759aefc162fbcaa790b98123 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Fri, 18 Oct 2024 12:40:26 +0200
Subject: [PATCH v31 5/5] v30-review-comments

---
 doc/src/sgml/client-auth.sgml                 |  98 +++++++++-
 doc/src/sgml/config.sgml                      |  17 ++
 doc/src/sgml/installation.sgml                |  29 +++
 doc/src/sgml/libpq.sgml                       |  14 +-
 doc/src/sgml/oauth-validators.sgml            |  36 +++-
 doc/src/sgml/postgres.sgml                    |   2 +-
 doc/src/sgml/regress.sgml                     |  10 +
 src/backend/libpq/auth-oauth.c                | 172 +++++++++++-------
 src/backend/libpq/hba.c                       |  26 +++
 src/interfaces/libpq/fe-auth-oauth.c          |   5 +-
 .../modules/oauth_validator/t/001_server.pl   |   9 +-
 src/test/modules/oauth_validator/validator.c  |   7 +-
 12 files changed, 343 insertions(+), 82 deletions(-)

diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index fb78b6c886..8d351c2089 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -2336,7 +2336,103 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </indexterm>
 
    <para>
-    TODO
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq act as clients when connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client upon successful authentication of
+       the resource owner.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources and
+       serves them to authorized clients. The <productname>PostgreSQL</productname>
+       cluster being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
+        must contact to receive a bearer token.  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        The OAuth scope required for the server to authenticate and/or authorize
+        the user. The value of the scope is dependent on the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more information
+        on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth user identities and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified the user name returned from the OAuth validator
+        must match the role being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped. If
+        the OAuth token is successfully validated, the user can connect as the
+        requested role.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
    </para>
   </sect1>
 
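Putting the options above together, an HBA line using this method might look like the following (the issuer URL, scope value, and map name are illustrative only; the option names are the ones documented above):

```
# pg_hba.conf: authenticate TCP connections via OAuth; identities returned
# by the validator are mapped to database roles through the "oauthmap" map.
host  all  all  0.0.0.0/0  oauth  issuer="https://issuer.example.com" scope="openid" map=oauthmap
```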
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f089a8ff4c..73bf21c599 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1214,6 +1214,23 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library to use for validating OAuth connection tokens. If set to
+        an empty string (the default), OAuth connections will be refused. For
+        more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3a491b5989..9a76aac08b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1064,6 +1064,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-oauth">
+       <term><option>--with-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with OAuth authentication and authorization support.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2508,6 +2522,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-oauth">
+      <term><option>-Doauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with OAuth authentication and authorization support.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding.  The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ffec0431e3..86fd146af2 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2341,7 +2341,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_id</literal></term>
       <listitem>
        <para>
-        TODO
+        The client identifier as issued by the authorization server.
        </para>
       </listitem>
      </varlistentry>
@@ -2350,7 +2350,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_secret</literal></term>
       <listitem>
        <para>
-        TODO
+        The client secret as issued by the authorization server.
        </para>
       </listitem>
      </varlistentry>
@@ -2368,7 +2368,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_scope</literal></term>
       <listitem>
        <para>
-        TODO
+        The scope of the access request sent to the authorization server.
+        This parameter is optional.
        </para>
       </listitem>
      </varlistentry>
@@ -10017,6 +10018,11 @@ void PQinitSSL(int do_ssl);
 void PQsetAuthDataHook(PQauthDataHook_type hook);
 </synopsis>
       </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
      </listitem>
     </varlistentry>
 
@@ -10025,7 +10031,7 @@ void PQsetAuthDataHook(PQauthDataHook_type hook);
 
      <listitem>
       <para>
-       TODO
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
 <synopsis>
 PQauthDataHook_type PQgetAuthDataHook(void);
 </synopsis>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index c9914519fc..4615159a9f 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -1,13 +1,13 @@
 <!-- doc/src/sgml/oauth-validators.sgml -->
 
 <chapter id="oauth-validators">
- <title>Implementing OAuth Validator Modules</title>
+ <title>OAuth Validator Modules</title>
  <indexterm zone="oauth-validators">
   <primary>OAuth Validators</primary>
  </indexterm>
  <para>
   <productname>PostgreSQL</productname> provides infrastructure for creating
-  custom modules to perform server-side validation of OAuth tokens.
+  custom modules to perform server-side validation of OAuth bearer tokens.
  </para>
  <para>
   OAuth validation modules must at least consist of an initialization function
@@ -74,9 +74,41 @@ typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
   <sect2 id="oauth-validator-callback-validate">
    <title>Validate Callback</title>
    <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth. The token has
+    been parsed to ensure that it is syntactically well-formed, but no semantic
+    validation has been performed. Any state set in previous calls will be
+    available in <structfield>state->private_data</structfield>.
+
 <programlisting>
 typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
 </programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate,
+    and <replaceable>role</replaceable> will contain the role the user requests
+    to log in as. The callback must return a <literal>ValidatorModuleResult</literal>
+    struct, which is defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    If <structfield>authorized</structfield> is set to <literal>true</literal>,
+    the bearer token is considered valid.
+    To authenticate the user, the authenticated user name shall be returned in
+    the <structfield>authn_id</structfield> field. When authenticating against
+    an HBA rule without <literal>trust_validator_authz</literal> and with no
+    user name map configured, the <structfield>authn_id</structfield> user name
+    must exactly match the role being logged in as.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has
+    been returned.
    </para>
   </sect2>
 
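The interaction between `authorized`, `authn_id`, and `trust_validator_authz` described above can be summarized as a small decision sketch (plain Python pseudologic, not the server implementation; `check_usermap` stands in for the server's user-map lookup):

```python
def oauth_authorize(authorized, authn_id, role,
                    trust_validator_authz, check_usermap):
    """Decide whether a validated token lets the user log in as `role`."""
    if not authorized:
        return False        # token rejected by the validator
    if trust_validator_authz:
        return True         # validator fully owns the authorization decision
    if authn_id is None:
        return False        # authentication is required when using usermaps
    return check_usermap(authn_id, role)
```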
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 321d4590a3..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
@@ -264,7 +265,6 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
-  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index dea973247a..cfa5769b10 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -90,10 +90,10 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 {
 	struct oauth_ctx *ctx;
 
-	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("client selected an invalid SASL authentication mechanism")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
 
 	ctx = palloc0(sizeof(*ctx));
 
@@ -142,14 +142,14 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	 */
 	if (inputlen == 0)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("The message is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
 	if (inputlen != strlen(input))
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message length does not match input length.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
 
 	switch (ctx->state)
 	{
@@ -165,9 +165,9 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 			 */
 			if (inputlen != 1 || *input != KVSEP)
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Client did not send a kvsep response.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
 
 			/* The (failed) handshake is now complete. */
 			ctx->state = OAUTH_STATE_FINISHED;
@@ -193,9 +193,9 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	{
 		case 'p':
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
 			break;
 
 		case 'y':				/* fall through */
@@ -203,19 +203,19 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 			p++;
 			if (*p != ',')
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Comma expected, but found character \"%s\".",
-								   sanitize_char(*p))));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
 			p++;
 			break;
 
 		default:
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Unexpected channel-binding flag %s.",
-							   sanitize_char(cbind_flag))));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
 	}
 
 	/*
@@ -223,38 +223,38 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	 */
 	if (*p == 'a')
 		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("client uses authorization identity, but it is not supported")));
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
 	if (*p != ',')
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Unexpected attribute %s in client-first-message.",
-						   sanitize_char(*p))));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
 	p++;
 
 	/* All remaining fields are separated by the RFC's kvsep (\x01). */
 	if (*p != KVSEP)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Key-value separator expected, but found character %s.",
-						   sanitize_char(*p))));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
 	p++;
 
 	auth = parse_kvpairs_for_auth(&p);
 	if (!auth)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message does not contain an auth value.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
 
 	/* We should be at the end of our message. */
 	if (*p)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains additional data after the final terminator.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
 
 	if (!validate(ctx->port, auth))
 	{
@@ -308,16 +308,16 @@ validate_kvpair(const char *key, const char *val)
 
 	if (!key[0])
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains an empty key name.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
 
 	span = strspn(key, key_allowed_set);
 	if (key[span] != '\0')
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains an invalid key name.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
 
 	/*-----
 	 * From Sec 3.1:
@@ -341,9 +341,9 @@ validate_kvpair(const char *key, const char *val)
 
 			default:
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Message contains an invalid value.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
 		}
 	}
 }
@@ -386,9 +386,9 @@ parse_kvpairs_for_auth(char **input)
 		end = strchr(pos, KVSEP);
 		if (!end)
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Message contains an unterminated key/value pair.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
 		*end = '\0';
 
 		if (pos == end)
@@ -404,9 +404,9 @@ parse_kvpairs_for_auth(char **input)
 		sep = strchr(pos, '=');
 		if (!sep)
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Message contains a key without a value.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
 		*sep = '\0';
 
 		/* Both key and value are now safely terminated. */
@@ -418,9 +418,9 @@ parse_kvpairs_for_auth(char **input)
 		{
 			if (auth)
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Message contains multiple auth values.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
 
 			auth = value;
 		}
@@ -438,9 +438,9 @@ parse_kvpairs_for_auth(char **input)
 	}
 
 	ereport(ERROR,
-			(errcode(ERRCODE_PROTOCOL_VIOLATION),
-			 errmsg("malformed OAUTHBEARER message"),
-			 errdetail("Message did not contain a final terminator.")));
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
 
 	pg_unreachable();
 	return NULL;
@@ -461,9 +461,9 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	 */
 	if (!ctx->issuer || !ctx->scope)
 		ereport(FATAL,
-				(errcode(ERRCODE_INTERNAL_ERROR),
-				 errmsg("OAuth is not properly configured for this user"),
-				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
 
 	/*------
 	 * Build the .well-known URI based on our issuer.
@@ -603,6 +603,7 @@ validate(Port *port, const char *auth)
 	int			map_status;
 	ValidatorModuleResult *ret;
 	const char *token;
+	bool		status;
 
 	/* Ensure that we have a correct token to validate */
 	if (!(token = validate_token_format(auth)))
@@ -613,7 +614,10 @@ validate(Port *port, const char *auth)
 										  token, port->user_name);
 
 	if (!ret->authorized)
-		return false;
+	{
+		status = false;
+		goto cleanup;
+	}
 
 	if (ret->authn_id)
 		set_authn_id(port, ret->authn_id);
@@ -626,7 +630,8 @@ validate(Port *port, const char *auth)
 		 * validator implementation; all that matters is that the validator
 		 * says the user can log in with the target role.
 		 */
-		return true;
+		status = true;
+		goto cleanup;
 	}
 
 	/* Make sure the validator authenticated the user. */
@@ -642,9 +647,31 @@ validate(Port *port, const char *auth)
 	/* Finally, check the user map. */
 	map_status = check_usermap(port->hba->usermap, port->user_name,
 							   MyClientConnectionInfo.authn_id, false);
-	return (map_status == STATUS_OK);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it to avoid accidental re-use.
+	 */
+	if (ret->authn_id != NULL)
+	{
+		explicit_bzero(ret->authn_id, strlen(ret->authn_id));
+		pfree(ret->authn_id);
+	}
+	pfree(ret);
+
+	return status;
 }
 
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
 static void
 load_validator_library(void)
 {
@@ -652,20 +679,25 @@ load_validator_library(void)
 
 	if (OAuthValidatorLibrary[0] == '\0')
 		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				 errmsg("oauth_validator_library is not set")));
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("oauth_validator_library is not set"));
 
 	validator_init = (OAuthValidatorModuleInit)
 		load_external_function(OAuthValidatorLibrary,
 							   "_PG_oauth_validator_module_init", false, NULL);
 
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
 	if (validator_init == NULL)
 		ereport(ERROR,
-				(errmsg("%s module \"%s\" have to define the symbol %s",
-						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+				errmsg("%s module \"%s\" has to define the symbol %s",
+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
 
 	ValidatorCallbacks = (*validator_init) ();
 
+	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
 	if (ValidatorCallbacks->startup_cb != NULL)
 		ValidatorCallbacks->startup_cb(validator_module_state);
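For readers following the message handling in auth-oauth.c above: an OAUTHBEARER client-first message carries its fields as key=value pairs separated by the kvsep byte (\x01) and terminated by an empty pair. A rough Python model of parse_kvpairs_for_auth's behavior (illustrative only; the real parser works in place on the C buffer and also validates keys and values character by character):

```python
KVSEP = "\x01"

def parse_kvpairs_for_auth(kvpairs):
    """Extract the auth value from the kvsep-delimited portion of an
    OAUTHBEARER client-first message (everything after the GS2 header)."""
    auth = None
    while True:
        pair, sep, kvpairs = kvpairs.partition(KVSEP)
        if not sep:
            raise ValueError("message did not contain a final terminator")
        if pair == "":                      # empty pair: final terminator
            if kvpairs:
                raise ValueError("additional data after the final terminator")
            if auth is None:
                raise ValueError("message does not contain an auth value")
            return auth
        key, eq, value = pair.partition("=")
        if not eq:
            raise ValueError("message contains a key without a value")
        if key == "auth":
            if auth is not None:
                raise ValueError("message contains multiple auth values")
            auth = value
        # other keys (host=, port=, ...) are validated but otherwise ignored
```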
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 735fd05373..dcb1558ad3 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -2042,6 +2042,32 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping
+		 * is nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+					/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 73718ac3b1..f5fc6ebc23 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -89,8 +89,11 @@ client_initial_response(PGconn *conn, const char *token)
 
 	if (!PQExpBufferDataBroken(buf))
 		response = strdup(buf.data);
-
 	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
 	return response;
 }
 
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 3b8f057a26..3f06f5be76 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -11,11 +11,18 @@ use PostgreSQL::Test::Utils;
 use PostgreSQL::Test::OAuthServer;
 use Test::More;
 
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
 if ($ENV{with_oauth} ne 'curl')
 {
 	plan skip_all => 'client-side OAuth not supported by this build';
 }
-elsif ($ENV{with_python} ne 'yes')
+
+if ($ENV{with_python} ne 'yes')
 {
 	plan skip_all => 'OAuth tests require --with-python to run';
 }
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index d12c79e2a2..dbba326bc4 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -67,7 +67,10 @@ validator_startup(ValidatorModuleState *state)
 static void
 validator_shutdown(ValidatorModuleState *state)
 {
-	/* do nothing */
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
 }
 
 static ValidatorModuleResult *
@@ -77,7 +80,7 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
-		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
 			 state->private_data);
 
 	res = palloc(sizeof(ValidatorModuleResult));
-- 
2.39.3 (Apple Git-146)

#140Daniel Gustafsson
daniel@yesql.se
In reply to: Daniel Gustafsson (#139)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

I have pushed the 0001 patch to make the max SASL message length configurable
per mechanism, so re-sending v31 as v32 without that patch to keep CFbot et al.
happy. There are no other changes over v31.

--
Daniel Gustafsson

Attachments:

v32-0001-libpq-add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v32-0001-libpq-add-OAUTHBEARER-SASL-mechanism.patch; x-unix-mode=0644Download
From 01b90cff691f87e302f07ac2b9038179238caa17 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Wed, 11 Sep 2024 09:41:29 +0200
Subject: [PATCH v32 1/4] libpq: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628) on the client side. When speaking to an OAuth-enabled
server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).
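While the device authorization grant polls for the user to finish logging in, the client repeatedly POSTs a form-encoded request to the token endpoint. A sketch of that request body per RFC 8628 section 3.4 (illustrative only; the patch drives this through libcurl):

```python
from urllib.parse import urlencode

DEVICE_GRANT = "urn:ietf:params:oauth:grant-type:device_code"

def device_token_request_body(client_id, device_code):
    """Form body POSTed to the token endpoint while polling for the
    user to complete the device authorization (RFC 8628, sec. 3.4)."""
    return urlencode({
        "grant_type": DEVICE_GRANT,
        "device_code": device_code,
        "client_id": client_id,
    })
```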

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client
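For local testing only, the mode can be enabled for a single invocation;
the connection parameters below are placeholders, not working defaults:

```shell
# DO NOT use outside local testing: permits plaintext HTTP and logs
# OAuth secrets (including tokens) to stderr.
PGOAUTHDEBUG=UNSAFE \
    psql 'host=localhost oauth_issuer=http://127.0.0.1:8080 oauth_client_id=test-id'
```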

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                        |   19 +
 configure                                 |  144 ++
 configure.ac                              |   29 +
 doc/src/sgml/libpq.sgml                   |   76 +
 meson.build                               |   31 +
 meson_options.txt                         |    4 +
 src/Makefile.global.in                    |    1 +
 src/include/common/oauth-common.h         |   19 +
 src/include/pg_config.h.in                |    9 +
 src/interfaces/libpq/Makefile             |   14 +-
 src/interfaces/libpq/exports.txt          |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2305 +++++++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c      |  666 ++++++
 src/interfaces/libpq/fe-auth-oauth.h      |   42 +
 src/interfaces/libpq/fe-auth-sasl.h       |   10 +-
 src/interfaces/libpq/fe-auth-scram.c      |    6 +-
 src/interfaces/libpq/fe-auth.c            |  105 +-
 src/interfaces/libpq/fe-auth.h            |    9 +-
 src/interfaces/libpq/fe-connect.c         |   86 +-
 src/interfaces/libpq/fe-misc.c            |    7 +-
 src/interfaces/libpq/libpq-fe.h           |   75 +
 src/interfaces/libpq/libpq-int.h          |   15 +
 src/interfaces/libpq/meson.build          |    7 +
 src/makefiles/meson.build                 |    1 +
 src/tools/pgindent/pgindent               |   14 +
 src/tools/pgindent/typedefs.list          |   11 +
 26 files changed, 3681 insertions(+), 27 deletions(-)
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h

diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 3a577e463b..b41539bdb1 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,52 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12922,6 +12971,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13947,6 +14080,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 55f6c46d33..05e5bbca11 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,26 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1395,6 +1415,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1586,6 +1611,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..ffec0431e3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9961,6 +9998,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/meson.build b/meson.build
index 58e67975e8..7518520ec9 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3042,6 +3071,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3714,6 +3744,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 427030f31a..bd27d9279d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -700,6 +703,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..831677119e
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2305 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START ||
+			ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* Tokens for fields we don't recognize are simply ignored. */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Media type
+ * parameters are allowed but ignored: for example, an expected type of
+ * "application/json" matches both "application/json" and
+ * "application/json; charset=utf-8", but not "application/jsonx".
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison first, since the full header value
+	 * may include media type parameters after the type itself.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NUL bytes");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
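+/*
+ * A few example mappings under these rules: "5" -> 5, "4.2" -> 5,
+ * "0.5" -> 1, and "0" -> 1 (or 0 when PGOAUTHDEBUG lifts the lower bound).
+ */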
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
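+/*
+ * For reference, a successful device authorization response looks roughly
+ * like the example in RFC 8628, Section 3.2 (values illustrative only):
+ *
+ *     {
+ *       "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
+ *       "user_code": "WDJB-MJHT",
+ *       "verification_uri": "https://example.com/device",
+ *       "expires_in": 1800,
+ *       "interval": 5
+ *     }
+ */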
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
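+/*
+ * For reference, a successful token response looks roughly like the example
+ * in RFC 6749, Section 5.1 (values illustrative only):
+ *
+ *     {
+ *       "access_token": "2YotnFZFEjr1zCsicMWpAA",
+ *       "token_type": "example",
+ *       "expires_in": 3600
+ *     }
+ */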
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different from the requested scope -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "neither epoll nor kqueue is available on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for printing debug information from libcurl. It only
+		 * takes effect when CURLOPT_VERBOSE is also set, so keep the two
+		 * options together.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl that appends the response body into
+ * actx->work_data (see start_request()). Each chunk is limited to
+ * CURL_MAX_WRITE_SIZE bytes, which defaults to 16kB and can only be changed
+ * by recompiling libcurl.
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it responds with the nonces we
+ * need to poll the request status later, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3.2, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by finish_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5.1, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that use 403 instead,
+	 * which would violate the specification. For now we stick to the
+	 * specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (!err->error)
+	{
+		/* TODO test */
+		actx_error(actx, "unknown error");
+		goto token_cleanup;
+	}
+
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or with a message on standard error if the hook doesn't
+ * handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* fall through */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..73718ac3b1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 54bf29be24..d4e40fae71 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1717,6 +1721,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
 OM_uint32
 OP
 OSAPerGroupState
@@ -1781,6 +1786,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1941,11 +1947,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3460,6 +3469,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

v32-0002-backend-add-OAUTHBEARER-SASL-mechanism.patch
From 7f7b4655147750879240cd2c49613221d6d911b7 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Tue, 4 May 2021 16:21:11 -0700
Subject: [PATCH v32 2/4] backend: add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) on the server side. This adds a new
auth method, oauth, to pg_hba.

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise for the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
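
A hypothetical pg_hba.conf setup, for illustration (the issuer, scope,
and map values are placeholders, not recommendations):

```
# authentication only: the validator's user ID is checked against a pg_ident map
host  all  all  0.0.0.0/0  oauth  issuer="https://accounts.google.com" scope="openid email" map=oauthmap

# authorization only: the validator decides which roles the token may assume
host  all  all  0.0.0.0/0  oauth  issuer="https://accounts.google.com" scope="openid email" trust_validator_authz=1
```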

Several TODOs:
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- fill in documentation stubs
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |  15 +-
 configure                                     |   5 +
 configure.ac                                  |   4 +
 doc/src/sgml/client-auth.sgml                 |  28 +
 doc/src/sgml/config.sgml                      |  13 +
 doc/src/sgml/filelist.sgml                    |   1 +
 doc/src/sgml/oauth-validators.sgml            |  96 +++
 doc/src/sgml/postgres.sgml                    |   1 +
 src/backend/libpq/Makefile                    |   1 +
 src/backend/libpq/auth-oauth.c                | 681 ++++++++++++++++++
 src/backend/libpq/auth.c                      |  26 +-
 src/backend/libpq/hba.c                       |  31 +-
 src/backend/libpq/meson.build                 |   1 +
 src/backend/utils/misc/guc_tables.c           |  12 +
 src/include/libpq/auth.h                      |  17 +
 src/include/libpq/hba.h                       |   6 +-
 src/include/libpq/oauth.h                     |  49 ++
 src/interfaces/libpq/fe-auth-oauth-curl.c     |  12 +-
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/oauth_validator/.gitignore   |   4 +
 src/test/modules/oauth_validator/Makefile     |  33 +
 src/test/modules/oauth_validator/meson.build  |  33 +
 .../modules/oauth_validator/t/001_server.pl   | 314 ++++++++
 .../modules/oauth_validator/t/oauth_server.py | 337 +++++++++
 src/test/modules/oauth_validator/validator.c  |  97 +++
 src/test/perl/PostgreSQL/Test/Cluster.pm      |  20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |  65 ++
 src/tools/pgindent/typedefs.list              |   4 +
 29 files changed, 1872 insertions(+), 36 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index b41539bdb1..c13ba42941 100755
--- a/configure
+++ b/configure
@@ -8444,6 +8444,11 @@ $as_echo "#define USE_OAUTH 1" >>confdefs.h
 
 $as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
 
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
 elif test x"$with_oauth" != x"no"; then
   as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
 fi
diff --git a/configure.ac b/configure.ac
index 05e5bbca11..a4e22e2dde 100644
--- a/configure.ac
+++ b/configure.ac
@@ -929,6 +929,10 @@ fi
 if test x"$with_oauth" = x"curl"; then
   AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
   AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
 elif test x"$with_oauth" != x"no"; then
   AC_MSG_ERROR([--with-oauth must specify curl])
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 934ef5e469..f089a8ff4c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,19 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+	    TODO
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..c9914519fc
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,96 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
+ <para>
+  OAuth validation modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared library
+   with the <xref linkend="guc-oauth-validator-library"/>'s name as the library
+   base name. The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must have server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks; the server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..321d4590a3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -264,6 +264,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..dea973247a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,681 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..735fd05373 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2066,8 +2069,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2454,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2c4cc8cd41..e91d211b7b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 831677119e..8747b0eb08 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -145,7 +145,7 @@ free_token(struct token *tok)
 /* States for the overall async machine. */
 typedef enum
 {
-	OAUTH_STEP_INIT,
+	OAUTH_STEP_INIT = 0,
 	OAUTH_STEP_DISCOVERY,
 	OAUTH_STEP_DEVICE_AUTHORIZATION,
 	OAUTH_STEP_TOKEN_REQUEST,
@@ -1945,6 +1945,9 @@ handle_token_response(struct async_ctx *actx, char **token)
 	if (!finish_token_request(actx, &tok))
 		goto token_cleanup;
 
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
 	if (tok.access_token)
 	{
 		/* Construct our Bearer token. */
@@ -1973,13 +1976,6 @@ handle_token_response(struct async_ctx *actx, char **token)
 	 * anything else and we bail.
 	 */
 	err = &tok.err;
-	if (!err->error)
-	{
-		/* TODO test */
-		actx_error(actx, "unknown error");
-		goto token_cleanup;
-	}
-
 	if (strcmp(err->error, "authorization_pending") != 0 &&
 		strcmp(err->error, "slow_down") != 0)
 	{
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..3b8f057a26
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,314 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+elsif ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$common_connstr = "$common_connstr oauth_client_secret=12345";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => '12345'),
+	"oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer.
+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..fb731ed2e5
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,337 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send Authorization header")
+
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        expected_creds = f"{self.client_id}:{secret}"
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..d12c79e2a2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,97 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
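
(For reviewers: the test validator's contract is small enough to restate. It always authorizes, and reports either the `oauth_validator.authn_id` GUC value or the connecting role as the authenticated identity. Modeled in plain Python for illustration only, with names of our own choosing rather than the server API:)

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ValidatorResult:
    # Mirrors ValidatorModuleResult: an authorization decision plus the
    # identity recorded for later audit (authn_id).
    authorized: bool
    authn_id: Optional[str]


def validate_token(token: str, role: str,
                   authn_id_guc: Optional[str] = None) -> ValidatorResult:
    """
    Python model of the test module's validate_token(): always authorize,
    and report the configured oauth_validator.authn_id if set, otherwise
    the role the client asked for.
    """
    return ValidatorResult(authorized=True,
                           authn_id=authn_id_guc if authn_id_guc else role)
```

A real validator would of course inspect the token before deciding; the test module skips that so the TAP tests can focus on the callback plumbing.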
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 007571e948..83360b397a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d4e40fae71..f18fccafa5 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1722,6 +1722,7 @@ NumericSortSupport
 NumericSumAccum
 NumericVar
 OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -3074,6 +3075,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleResult
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
@@ -3671,6 +3674,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.39.3 (Apple Git-146)

v32-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchapplication/octet-stream; name=v32-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch; x-unix-mode=0644Download
From 24b6d921f5d480d43e50de375ab72d32a1d989e4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v32 3/4] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1920 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5639 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 7518520ec9..ce9a7d28f2 100644
--- a/meson.build
+++ b/meson.build
@@ -3385,6 +3385,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3546,6 +3549,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to connect as a superuser
+on that system. In other words, a bare `psql` needs to Just Work before the
+test suite can do its thing. For a newly built dev cluster, typically all
+that's needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
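
(Reviewer note on the "bare `psql` needs to Just Work" assumption above: a pre-flight check along these lines can save some head-scratching before a test run. This is a stdlib-only sketch; the helper name is ours and is not part of the suite:)

```python
import os
import shutil
import subprocess


def psql_works(database=None):
    """
    Returns True if `psql -X -c 'SELECT 1'` succeeds using only the PG*
    environment variables, i.e. the precondition this suite assumes.
    """
    psql = shutil.which("psql")
    if psql is None:
        # psql is not on PATH at all.
        return False

    env = dict(os.environ)
    if database is not None:
        env["PGDATABASE"] = database

    proc = subprocess.run(
        [psql, "-X", "-c", "SELECT 1"],
        env=env,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.returncode == 0
```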
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
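(As an aside for reviewers: the Hi() function defined by RFC 5802 Section 2.2 is exactly PBKDF2-HMAC-SHA-256 with a single 32-byte output block, so the implementation above can be cross-checked against the standard library without the cryptography package. A quick standalone sketch:)

```python
import hashlib
import hmac as stdlib_hmac


def h_i_stdlib(data, salt, i):
    # PBKDF2-HMAC-SHA-256 with the default (hash-sized, 32-byte) output.
    return hashlib.pbkdf2_hmac("sha256", data, salt, i)


def h_i_manual(data, salt, i):
    # Transcription of the Hi() loop onto the stdlib hmac module:
    # U1 = HMAC(data, salt || INT(1)); Uk = HMAC(data, Uk-1); result = XOR of all Uk.
    acc = last = stdlib_hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    for _ in range(i - 1):
        last = stdlib_hmac.new(data, last, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, last))
    return acc


# Same salt/iteration shape as the SCRAM test in this file.
assert h_i_manual(b"secret", b"12345", 2) == h_i_stdlib(b"secret", b"12345", 2)
```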
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..dd047423de
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1920 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
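(For orientation: the wire format that get_auth_value() unpacks is the RFC 7628 initial client response, i.e. a GS2 header followed by ^A-separated key/value pairs and a double-^A terminator. A minimal sketch of building one; build_initial_response is a hypothetical helper for illustration, not part of the patch:)

```python
def build_initial_response(token: bytes) -> bytes:
    # GS2 header ("n,," = no channel binding, no authzid), a single
    # auth kvpair, then the \x01\x01 terminator required by RFC 7628.
    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"


msg = build_initial_response(b"sometoken")
# Splitting on \x01 yields exactly the four fields the assertions above expect.
assert msg.split(b"\x01") == [b"n,,", b"auth=Bearer sometoken", b"", b""]
```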
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # return value when the test hasn't installed an impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
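(The teardown above guards against a classic ctypes pitfall: a CFUNCTYPE wrapper is garbage-collected as soon as no Python reference remains, even if C code still holds the function pointer. A standalone illustration of the pattern, independent of libpq:)

```python
import ctypes

CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)


@CALLBACK
def increment(x):
    # CFUNCTYPE objects are also callable from Python, which makes them
    # easy to sanity-check before handing the pointer to C.
    return x + 1


# Keep `increment` referenced for as long as C may invoke it; if it were a
# local that went out of scope, the underlying thunk would be freed and any
# later call through the stale pointer would crash.
assert increment(41) == 42
```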
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
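The async callback above drives libpq with a self-pipe: a timer thread writes a byte to the pipe's write end, and libpq is told to poll the read end. A minimal standalone sketch of that pattern, outside the libpq machinery (all names here are illustrative, not part of the test suite):

```python
import os
import select
import threading

# Create the self-pipe; the poller waits on readfd, the waker writes to writefd.
readfd, writefd = os.pipe()

def wakeup():
    # Make readfd readable, unblocking whoever is polling it.
    os.write(writefd, b"\0")

# Schedule the wakeup shortly, as the test's threading.Timer does.
threading.Timer(0.05, wakeup).start()

# Block until the wakeup byte arrives, then drain it so the pipe is
# quiescent for the next round.
readable, _, _ = select.select([readfd], [], [], 5)
assert readable == [readfd]
drained = os.read(readfd, 1)

os.close(readfd)
os.close(writefd)
```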
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. It's not very
+    efficient, but it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
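For clarity, the function's behavior can be restated as a one-line equivalent: wrap each alternative in a group and join with `|` (this restatement is mine, not part of the patch):

```python
import re

def alt_patterns(*patterns):
    """Equivalent restatement: group each alternative and join with '|'."""
    return "|".join(f"({p})" for p in patterns)

combined = alt_patterns(r"foo\d+", r"bar")
assert combined == r"(foo\d+)|(bar)"

# Any one alternative matching is enough.
assert re.search(combined, "foo123")
assert re.search(combined, "bar")
assert not re.search(combined, "baz")
```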
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
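The `Missing` sentinel lets the tests distinguish "field absent" from "field present but null" (`None`), since JSON expresses those separately. A standalone illustration of the pattern (the `describe` helper is mine, for illustration only):

```python
Missing = object()  # unique sentinel; `is Missing` can never be true for JSON data

def describe(resp: dict, field: str) -> str:
    value = resp.get(field, Missing)
    if value is Missing:
        return "absent"
    if value is None:
        return "null"          # JSON null, distinct from absent
    return f"present: {value!r}"

assert describe({}, "interval") == "absent"
assert describe({"interval": None}, "interval") == "null"
assert describe({"interval": 5}, "interval") == "present: 5"
```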
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
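The retry behavior exercised above follows the RFC 8628 polling model: the client posts to the token endpoint and keeps polling while the provider answers `authorization_pending`; any other error ends the flow immediately. A simplified, provider-free sketch of that loop (the fake endpoint and helper names are assumptions for illustration, not the libpq implementation):

```python
# Fake token endpoint: pending twice, then success.
responses = [
    (400, {"error": "authorization_pending"}),
    (400, {"error": "authorization_pending"}),
    (200, {"access_token": "my-token", "token_type": "bearer"}),
]

def fake_token_endpoint():
    # Stand-in for the provider's POST /token handler.
    return responses.pop(0)

def poll_for_token(max_attempts=10):
    for _ in range(max_attempts):
        status, body = fake_token_endpoint()
        if status == 200:
            return body["access_token"]
        if body.get("error") == "authorization_pending":
            # A real client sleeps for the advertised "interval" here.
            continue
        # Any other error ends the flow at once, as the tests verify.
        raise RuntimeError(body["error"])
    raise RuntimeError("polling timed out")

token = poll_for_token()
assert token == "my-token"
```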
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
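The error patterns above track the client's validation of the server's JSON failure body: it must parse, it must be an object, and it must carry a top-level string `status`. A minimal validator mirroring those checks (an illustrative sketch, not the libpq implementation; its error strings only approximate the real ones):

```python
import json

_missing = object()

def check_error_body(raw: str) -> str:
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"invalid JSON: {e}")

    if not isinstance(doc, dict):
        raise ValueError("top-level element must be an object")

    status = doc.get("status", _missing)
    if status is _missing:
        # A nested "status" does not count; only the top level is consulted.
        raise ValueError("server sent error response without a status")
    if not isinstance(status, str):
        raise ValueError('field "status" must be a string')

    return status

assert check_error_body('{ "status": "invalid_token" }') == "invalid_token"

for bad in ["abcde", "[]", "{}", '{ "status": 0 }', '{ "nested": { "status": "bad" } }']:
    try:
        check_error_body(bad)
        assert False, f"expected failure for {bad!r}"
    except ValueError:
        pass
```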
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (equivalent to INT_MAX in limits.h, assuming a 32-bit C int)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    Plain HTTP must be refused unless PGOAUTHDEBUG is set.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to request creation
+    of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top-level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. type is the pq3.types member
+    that should be assigned to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (disguised as a startup packet).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
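The `connect` fixture relies on `contextlib.ExitStack` so that every socket and pq3 wrapper a test creates through the factory is closed when the fixture tears down, however many connections the test opened. A minimal standalone model of that pattern (using a dummy resource in place of the pq3-wrapped socket):

```python
import contextlib


class DummyResource:
    """Stands in for the pq3-wrapped socket; records whether it was closed."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.closed = True


opened = []

with contextlib.ExitStack() as stack:

    def factory():
        # Each resource is registered with the stack, exactly as the fixture
        # registers each socket and pq3 connection it hands out.
        res = stack.enter_context(DummyResource())
        opened.append(res)
        return res

    factory()
    factory()

# Leaving the with-block closes everything the factory created.
assert all(res.closed for res in opened)
```

This is why the fixture can safely yield a factory rather than a single connection: cleanup responsibility stays with the stack, not with individual tests.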
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
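The decision table implemented by `test_validate` above can be summarized in a small Python model (an illustration that mirrors the C logic, not the module itself): `reflect_role` short-circuits everything, otherwise authorization and identity are decided independently of each other.

```python
def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    # reflect_role mode: authorize unconditionally and report the requested
    # role as the authenticated identity, ignoring the token.
    if reflect_role:
        return {"authorized": True, "authn_id": role}

    result = {"authorized": False, "authn_id": None}

    # Authorize only when a nonempty expected token matches exactly.
    if expected_bearer and token == expected_bearer:
        result["authorized"] = True

    # The identity is reported independently of the authorization decision.
    if set_authn_id:
        result["authn_id"] = authn_id

    return result
```

Note that an empty `oauthtest.expected_bearer` (the default) never authorizes anything, so each test must opt in explicitly via `setup_validator`.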
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..8fed4a9716
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
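The backup-and-restore contract of `prepend_file` can be exercised on its own; this sketch inlines the helper so it runs standalone against a throwaway file rather than a real `pg_hba.conf`.

```python
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def prepend_file(path, lines, *, suffix=".bak"):
    # Back up the original, write the new lines followed by the original
    # content, and restore the backup on exit (same logic as the fixture).
    bak = path + suffix
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "pg_hba.conf")
    with open(path, "w") as f:
        f.write("original\n")

    with prepend_file(path, ["added\n"]):
        with open(path) as f:
            contents = f.read()  # prepended while inside the context
    with open(path) as f:
        restored = f.read()  # original restored on exit

assert contents == "added\noriginal\n"
assert restored == "original\n"
```

Because the restore happens in `finally`, the HBA/ident files come back even if the test body raises, which is what lets `oauth_ctx` reload the original configuration during teardown.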
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that postgres_instance points to a server running on
+    the local machine, and that the connecting user has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for line in list(hba_lines):
+            hba_lines.append(line.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
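The size arithmetic above relies on `secrets.token_urlsafe()` encoding every 3 random bytes into exactly 4 base64url characters with no padding, which is why the requested size must be a multiple of 4. The invariant can be checked directly:

```python
import secrets

for size in (16, 1024, 4096):
    nbytes = size // 4 * 3  # 3 input bytes -> 4 base64url characters
    assert len(secrets.token_urlsafe(nbytes)) == size
```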
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Instead of a bearer token, the initial response's auth field
+    may be specified explicitly to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
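The payload built here follows the OAUTHBEARER framing from RFC 7628: a GS2 header (`n,,` meaning no channel binding and no authzid), a `\x01` key-value separator, one `auth=...` pair, and a double `\x01` terminator. A tiny builder makes the framing explicit (illustration only; the tests construct the bytes inline):

```python
KVSEP = b"\x01"


def oauthbearer_initial_response(auth, gs2=b"n,,"):
    # GS2 header, kvsep, a single auth key/value pair, then the double-kvsep
    # terminator that ends the key/value list.
    return gs2 + KVSEP + b"auth=" + auth + KVSEP + KVSEP


msg = oauthbearer_initial_response(b"Bearer abcd")
assert msg == b"n,,\x01auth=Bearer abcd\x01\x01"
```

Most of the malformed-message cases later in this file are produced by bending exactly one of these framing rules at a time.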
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+
+        c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
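The expected values above follow the convention that `system_user` reports `<auth method>:<identity>`, and SQL NULL when the validator never recorded an identity. A one-line model of the mapping used by this test (hypothetical helper, for illustration):

```python
def expected_system_user(authn_id):
    # NULL when no authenticated identity was set; "oauth:<id>" otherwise,
    # including the empty-string identity, which yields a bare "oauth:".
    return None if authn_id is None else b"oauth:" + authn_id.encode("ascii")


assert expected_system_user(None) is None
assert expected_system_user("me@example.com") == b"oauth:me@example.com"
```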
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute 0x00",  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test supplies a prebuilt custom packet. (This is only done when
+        # the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # bypass the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with malformed issuer/scope settings, to pin down the
+    server's behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload with extra data",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra data",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
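Not part of the patch, but for orientation: the `Plaintext` record layout above (1-byte ContentType, 2-byte legacy_record_version, 2-byte length, then the fragment) is small enough to unpack with the standard `struct` module, without the construct library:

```python
import struct

# A handshake record (type 22, version 0x0301) carrying a 2-byte fragment.
record = b"\x16\x03\x01\x00\x02\x01\x00"

ctype, version, length = struct.unpack("!BHH", record[:5])
fragment = record[5:5 + length]

assert ctype == 22          # ContentType.handshake
assert version == 0x0301
assert fragment == b"\x01\x00"
```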
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.39.3 (Apple Git-146)

Attachment: v32-0004-v30-review-comments.patch (application/octet-stream)
From e78a3c0c0325383f546917f46fda12d227f0cd4a Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Fri, 18 Oct 2024 12:40:26 +0200
Subject: [PATCH v32 4/4] v30-review-comments

---
 doc/src/sgml/client-auth.sgml                 |  98 +++++++++-
 doc/src/sgml/config.sgml                      |  17 ++
 doc/src/sgml/installation.sgml                |  29 +++
 doc/src/sgml/libpq.sgml                       |  14 +-
 doc/src/sgml/oauth-validators.sgml            |  36 +++-
 doc/src/sgml/postgres.sgml                    |   2 +-
 doc/src/sgml/regress.sgml                     |  10 +
 src/backend/libpq/auth-oauth.c                | 172 +++++++++++-------
 src/backend/libpq/hba.c                       |  26 +++
 src/interfaces/libpq/fe-auth-oauth.c          |   5 +-
 .../modules/oauth_validator/t/001_server.pl   |   9 +-
 src/test/modules/oauth_validator/validator.c  |   7 +-
 12 files changed, 343 insertions(+), 82 deletions(-)

diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index fb78b6c886..8d351c2089 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -2336,7 +2336,103 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </indexterm>
 
    <para>
-    TODO
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients when connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after successfully authenticating
+       the resource owner.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources and
+       accepts requests carrying access tokens. The <productname>PostgreSQL</productname>
+       cluster being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
+        must contact to receive a bearer token.  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        The OAuth scope required for the server to authenticate and/or authorize
+        the user. The value of the scope is dependent on the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more information
+        on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identities and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name returned from the OAuth validator
+        must match the role being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped.
+        If the OAuth token is validated, the user can connect as the
+        requested role.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
    </para>
   </sect1>
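Putting the options above together, a hypothetical `pg_hba.conf` entry might look like the following (the issuer URL and scope value are illustrative and depend on your provider and validator):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth  issuer="https://oauth.example.org" scope="openid"
```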
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f089a8ff4c..73bf21c599 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1214,6 +1214,23 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library to use for validating OAuth connection tokens. If set to
+        an empty string (the default), OAuth connections will be refused. For
+        more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3a491b5989..9a76aac08b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1064,6 +1064,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-oauth">
+       <term><option>--with-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with OAuth authentication and authorization support.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2508,6 +2522,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-oauth">
+      <term><option>-Doauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with OAuth authentication and authorization support.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding.  The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ffec0431e3..86fd146af2 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2341,7 +2341,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_id</literal></term>
       <listitem>
        <para>
-        TODO
+        The client identifier as issued by the authorization server.
        </para>
       </listitem>
      </varlistentry>
@@ -2350,7 +2350,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_secret</literal></term>
       <listitem>
        <para>
-        TODO
+        The client secret, as issued by the authorization server.
        </para>
       </listitem>
      </varlistentry>
@@ -2368,7 +2368,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_scope</literal></term>
       <listitem>
        <para>
-        TODO
+        The scope of the access request sent to the authorization server.
+        This parameter is optional.
        </para>
       </listitem>
      </varlistentry>
@@ -10017,6 +10018,11 @@ void PQinitSSL(int do_ssl);
 void PQsetAuthDataHook(PQauthDataHook_type hook);
 </synopsis>
       </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
      </listitem>
     </varlistentry>
 
@@ -10025,7 +10031,7 @@ void PQsetAuthDataHook(PQauthDataHook_type hook);
 
      <listitem>
       <para>
-       TODO
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
 <synopsis>
 PQauthDataHook_type PQgetAuthDataHook(void);
 </synopsis>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index c9914519fc..4615159a9f 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -1,13 +1,13 @@
 <!-- doc/src/sgml/oauth-validators.sgml -->
 
 <chapter id="oauth-validators">
- <title>Implementing OAuth Validator Modules</title>
+ <title>OAuth Validator Modules</title>
  <indexterm zone="oauth-validators">
   <primary>OAuth Validators</primary>
  </indexterm>
  <para>
   <productname>PostgreSQL</productname> provides infrastructure for creating
-  custom modules to perform server-side validation of OAuth tokens.
+  custom modules to perform server-side validation of OAuth bearer tokens.
  </para>
  <para>
   OAuth validation modules must at least consist of an initialization function
@@ -74,9 +74,41 @@ typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
   <sect2 id="oauth-validator-callback-validate">
    <title>Validate Callback</title>
    <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth. The token has
+    been parsed to ensure it is syntactically well-formed, but no semantic
+    checks have been performed. Any state set in previous calls is available
+    in <structfield>state->private_data</structfield>.
+
 <programlisting>
 typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
 </programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate,
+    and <replaceable>role</replaceable> will contain the role the user requests
+    to log in as. The callback must return a <literal>ValidatorModuleResult</literal>
+    struct, which is defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    If <structfield>authorized</structfield> is set to <literal>true</literal>,
+    the bearer token is considered valid.
+    To authenticate the user, the authenticated user name shall be returned in
+    the <structfield>authn_id</structfield> field. Unless the HBA rule has
+    <literal>trust_validator_authz</literal> turned on, the
+    <structfield>authn_id</structfield> user name must match the role being
+    logged in as, subject to any configured user name map.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has
+    been returned.
    </para>
   </sect2>
 
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 321d4590a3..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
@@ -264,7 +265,6 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
-  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index dea973247a..cfa5769b10 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -90,10 +90,10 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 {
 	struct oauth_ctx *ctx;
 
-	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("client selected an invalid SASL authentication mechanism")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
 
 	ctx = palloc0(sizeof(*ctx));
 
@@ -142,14 +142,14 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	 */
 	if (inputlen == 0)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("The message is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
 	if (inputlen != strlen(input))
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message length does not match input length.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
 
 	switch (ctx->state)
 	{
@@ -165,9 +165,9 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 			 */
 			if (inputlen != 1 || *input != KVSEP)
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Client did not send a kvsep response.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
 
 			/* The (failed) handshake is now complete. */
 			ctx->state = OAUTH_STATE_FINISHED;
@@ -193,9 +193,9 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	{
 		case 'p':
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
 			break;
 
 		case 'y':				/* fall through */
@@ -203,19 +203,19 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 			p++;
 			if (*p != ',')
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Comma expected, but found character \"%s\".",
-								   sanitize_char(*p))));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
 			p++;
 			break;
 
 		default:
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Unexpected channel-binding flag %s.",
-							   sanitize_char(cbind_flag))));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
 	}
 
 	/*
@@ -223,38 +223,38 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	 */
 	if (*p == 'a')
 		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("client uses authorization identity, but it is not supported")));
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
 	if (*p != ',')
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Unexpected attribute %s in client-first-message.",
-						   sanitize_char(*p))));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
 	p++;
 
 	/* All remaining fields are separated by the RFC's kvsep (\x01). */
 	if (*p != KVSEP)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Key-value separator expected, but found character %s.",
-						   sanitize_char(*p))));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
 	p++;
 
 	auth = parse_kvpairs_for_auth(&p);
 	if (!auth)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message does not contain an auth value.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
 
 	/* We should be at the end of our message. */
 	if (*p)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains additional data after the final terminator.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
 
 	if (!validate(ctx->port, auth))
 	{
@@ -308,16 +308,16 @@ validate_kvpair(const char *key, const char *val)
 
 	if (!key[0])
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains an empty key name.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
 
 	span = strspn(key, key_allowed_set);
 	if (key[span] != '\0')
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains an invalid key name.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
 
 	/*-----
 	 * From Sec 3.1:
@@ -341,9 +341,9 @@ validate_kvpair(const char *key, const char *val)
 
 			default:
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Message contains an invalid value.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
 		}
 	}
 }
@@ -386,9 +386,9 @@ parse_kvpairs_for_auth(char **input)
 		end = strchr(pos, KVSEP);
 		if (!end)
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Message contains an unterminated key/value pair.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
 		*end = '\0';
 
 		if (pos == end)
@@ -404,9 +404,9 @@ parse_kvpairs_for_auth(char **input)
 		sep = strchr(pos, '=');
 		if (!sep)
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Message contains a key without a value.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
 		*sep = '\0';
 
 		/* Both key and value are now safely terminated. */
@@ -418,9 +418,9 @@ parse_kvpairs_for_auth(char **input)
 		{
 			if (auth)
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Message contains multiple auth values.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
 
 			auth = value;
 		}
@@ -438,9 +438,9 @@ parse_kvpairs_for_auth(char **input)
 	}
 
 	ereport(ERROR,
-			(errcode(ERRCODE_PROTOCOL_VIOLATION),
-			 errmsg("malformed OAUTHBEARER message"),
-			 errdetail("Message did not contain a final terminator.")));
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
 
 	pg_unreachable();
 	return NULL;
@@ -461,9 +461,9 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	 */
 	if (!ctx->issuer || !ctx->scope)
 		ereport(FATAL,
-				(errcode(ERRCODE_INTERNAL_ERROR),
-				 errmsg("OAuth is not properly configured for this user"),
-				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
 
 	/*------
 	 * Build the .well-known URI based on our issuer.
@@ -603,6 +603,7 @@ validate(Port *port, const char *auth)
 	int			map_status;
 	ValidatorModuleResult *ret;
 	const char *token;
+	bool		status;
 
 	/* Ensure that we have a correct token to validate */
 	if (!(token = validate_token_format(auth)))
@@ -613,7 +614,10 @@ validate(Port *port, const char *auth)
 										  token, port->user_name);
 
 	if (!ret->authorized)
-		return false;
+	{
+		status = false;
+		goto cleanup;
+	}
 
 	if (ret->authn_id)
 		set_authn_id(port, ret->authn_id);
@@ -626,7 +630,8 @@ validate(Port *port, const char *auth)
 		 * validator implementation; all that matters is that the validator
 		 * says the user can log in with the target role.
 		 */
-		return true;
+		status = true;
+		goto cleanup;
 	}
 
 	/* Make sure the validator authenticated the user. */
@@ -642,9 +647,31 @@ validate(Port *port, const char *auth)
 	/* Finally, check the user map. */
 	map_status = check_usermap(port->hba->usermap, port->user_name,
 							   MyClientConnectionInfo.authn_id, false);
-	return (map_status == STATUS_OK);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it to avoid accidental re-use.
+	 */
+	if (ret->authn_id != NULL)
+	{
+		explicit_bzero(ret->authn_id, strlen(ret->authn_id));
+		pfree(ret->authn_id);
+	}
+	pfree(ret);
+
+	return status;
 }
 
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific.
+ * If no validator library is configured, or if it fails to load, error out
+ * since token validation won't be possible.
+ */
 static void
 load_validator_library(void)
 {
@@ -652,20 +679,25 @@ load_validator_library(void)
 
 	if (OAuthValidatorLibrary[0] == '\0')
 		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				 errmsg("oauth_validator_library is not set")));
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("oauth_validator_library is not set"));
 
 	validator_init = (OAuthValidatorModuleInit)
 		load_external_function(OAuthValidatorLibrary,
 							   "_PG_oauth_validator_module_init", false, NULL);
 
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
 	if (validator_init == NULL)
 		ereport(ERROR,
-				(errmsg("%s module \"%s\" have to define the symbol %s",
-						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+				errmsg("%s modules \"%s\" have to define the symbol %s",
+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
 
 	ValidatorCallbacks = (*validator_init) ();
 
+	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
 	if (ValidatorCallbacks->startup_cb != NULL)
 		ValidatorCallbacks->startup_cb(validator_module_state);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 735fd05373..dcb1558ad3 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -2042,6 +2042,32 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/*
+	 * Supplying a usermap combined with the option to skip user mapping
+		 * is nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+					/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 73718ac3b1..f5fc6ebc23 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -89,8 +89,11 @@ client_initial_response(PGconn *conn, const char *token)
 
 	if (!PQExpBufferDataBroken(buf))
 		response = strdup(buf.data);
-
 	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
 	return response;
 }
 
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 3b8f057a26..3f06f5be76 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -11,11 +11,18 @@ use PostgreSQL::Test::Utils;
 use PostgreSQL::Test::OAuthServer;
 use Test::More;
 
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
 if ($ENV{with_oauth} ne 'curl')
 {
 	plan skip_all => 'client-side OAuth not supported by this build';
 }
-elsif ($ENV{with_python} ne 'yes')
+
+if ($ENV{with_python} ne 'yes')
 {
 	plan skip_all => 'OAuth tests require --with-python to run';
 }
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index d12c79e2a2..dbba326bc4 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -67,7 +67,10 @@ validator_startup(ValidatorModuleState *state)
 static void
 validator_shutdown(ValidatorModuleState *state)
 {
-	/* do nothing */
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
 }
 
 static ValidatorModuleResult *
@@ -77,7 +80,7 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
-		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
 			 state->private_data);
 
 	res = palloc(sizeof(ValidatorModuleResult));
-- 
2.39.3 (Apple Git-146)

#141Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#140)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Oct 23, 2024 at 8:46 AM Daniel Gustafsson <daniel@yesql.se> wrote:

I have pushed the 0001 patch to make the max SASL message length configurable
per mechanism, so re-sending v31 as v32 without that patch to keep CFbot et al.
happy. There are no other changes over v31.

Awesome, thanks! I'm still working on the feedback from you and
Antonin upthread, so let's take this opportunity to squash together
the two main patches as one, horrifyingly large, v33-0001.

I've also rearranged the patch order, to tweak a recorded failure
message in the Python tests. No other changes have been made to the
OAuth implementation.

--Jacob

Attachments:

v33-0001-Add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v33-0001-Add-OAUTHBEARER-SASL-mechanism.patchDownload
From f45af21f3f9abda91920d7a7114e1d54b7a15caf Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v33 1/3] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.
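
The delegation contract above can be sketched in isolation. The types below
are simplified stand-ins for illustration only; the real PGauthData and
PQauthDataHook_type definitions live in libpq-fe.h in this patch and may
differ:

```c
#include <assert.h>

/* Stand-ins for the libpq-fe.h definitions (assumption, not the real API). */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;

typedef int (*PQauthDataHook_type) (PGauthData type, void *conn, void *data);

static int
default_hook(PGauthData type, void *conn, void *data)
{
	return 0;					/* not handled; fall back to builtin behavior */
}

/* What PQgetAuthDataHook() returned before we installed ourselves. */
static PQauthDataHook_type prev_hook = default_hook;

static int
my_hook(PGauthData type, void *conn, void *data)
{
	if (type != PQAUTHDATA_OAUTH_BEARER_TOKEN)
		return prev_hook(type, conn, data); /* delegate anything we don't handle */

	/* ... produce or schedule retrieval of a Bearer token via `data` ... */
	return 1;					/* > 0: handled successfully */
}
```

The key point is the first branch: a well-behaved hook only claims the
authdata types it understands and passes everything else down the chain.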

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

= OAuth HBA Method =

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
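
  For instance, a hypothetical pg_hba.conf line combining these options
  (the issuer, scope, and map values are examples only, not recommendations):

    # TYPE  DATABASE  USER  ADDRESS  METHOD
    host    all       all   samenet  oauth issuer="https://accounts.google.com" scope="openid email" map=oauthmap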

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   19 +
 configure                                     |  149 ++
 configure.ac                                  |   33 +
 doc/src/sgml/client-auth.sgml                 |   28 +
 doc/src/sgml/config.sgml                      |   13 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/libpq.sgml                       |   76 +
 doc/src/sgml/oauth-validators.sgml            |   96 +
 doc/src/sgml/postgres.sgml                    |    1 +
 meson.build                                   |   31 +
 meson_options.txt                             |    4 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  681 +++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   31 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    6 +-
 src/include/libpq/oauth.h                     |   49 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   14 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2301 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          |  666 +++++
 src/interfaces/libpq/fe-auth-oauth.h          |   42 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  105 +-
 src/interfaces/libpq/fe-auth.h                |    9 +-
 src/interfaces/libpq/fe-connect.c             |   86 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   75 +
 src/interfaces/libpq/libpq-int.h              |   15 +
 src/interfaces/libpq/meson.build              |    7 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   33 +
 src/test/modules/oauth_validator/meson.build  |   33 +
 .../modules/oauth_validator/t/001_server.pl   |  314 +++
 .../modules/oauth_validator/t/oauth_server.py |  337 +++
 src/test/modules/oauth_validator/validator.c  |   97 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |   65 +
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   15 +
 51 files changed, 5545 insertions(+), 55 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 3a577e463b..c13ba42941 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,57 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12922,6 +12976,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13947,6 +14085,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 55f6c46d33..a4e22e2dde 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,30 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1395,6 +1419,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1586,6 +1615,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..fb78b6c886 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,18 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    TODO
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 934ef5e469..f089a8ff4c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,19 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+	    TODO
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..ffec0431e3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,43 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        TODO
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9961,6 +9998,45 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..c9914519fc
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,96 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
+ <para>
+  An OAuth validator module must consist of at least an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared
+   library, using the value of <xref linkend="guc-oauth-validator-library"/>
+   as the library base name. The normal library search path is used to locate
+   the library. To provide the validator callbacks and to indicate that the
+   library is an OAuth validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must remain valid for the lifetime of the backend, which is
+   typically achieved by defining it as a <literal>static const</literal>
+   variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
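+
+  <para>
+   As an illustrative sketch (all names other than the required entry point
+   and the callback types are hypothetical), a minimal validator module could
+   look like this:
+<programlisting>
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *
+my_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+    ValidatorModuleResult *res = palloc0(sizeof(ValidatorModuleResult));
+
+    /* A real validator must inspect the token; this sketch rejects all. */
+    res->authorized = false;
+    res->authn_id = NULL;
+
+    return res;
+}
+
+static const OAuthValidatorCallbacks callbacks = {
+    .validate_cb = my_validate,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+    return &amp;callbacks;
+}
+</programlisting>
+  </para>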
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the user's
+   authentication request.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
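+
+   <para>
+    For example (a sketch; <structname>my_state</structname> is a
+    hypothetical module-private type), a module might allocate its private
+    state at startup:
+<programlisting>
+struct my_state
+{
+    int         ncalls;
+};
+
+static void
+my_startup(ValidatorModuleState *state)
+{
+    state->private_data = palloc0(sizeof(struct my_state));
+}
+</programlisting>
+   </para>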
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is invoked during the
+    OAUTHBEARER exchange to verify the bearer token presented by the client.
+    It receives the token and the role the client wishes to connect as, and
+    it must return a <structname>ValidatorModuleResult</structname>
+    indicating whether the connection is authorized and, optionally, the
+    authenticated user identity.
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..321d4590a3 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -264,6 +264,7 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
+  &oauth-validators;
 
  </part>
 
diff --git a/meson.build b/meson.build
index 58e67975e8..7518520ec9 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3042,6 +3071,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3714,6 +3744,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..dea973247a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,681 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("client selected an invalid SASL authentication mechanism")));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("The message is empty.")));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message length does not match input length.")));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Client did not send a kvsep response.")));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Comma expected, but found character \"%s\".",
+								   sanitize_char(*p))));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Unexpected channel-binding flag %s.",
+							   sanitize_char(cbind_flag))));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("client uses authorization identity, but it is not supported")));
+	if (*p != ',')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Unexpected attribute %s in client-first-message.",
+						   sanitize_char(*p))));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Key-value separator expected, but found character %s.",
+						   sanitize_char(*p))));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message does not contain an auth value.")));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains additional data after the final terminator.")));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an empty key name.")));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_PROTOCOL_VIOLATION),
+				 errmsg("malformed OAUTHBEARER message"),
+				 errdetail("Message contains an invalid key name.")));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains an invalid value.")));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains an unterminated key/value pair.")));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					(errcode(ERRCODE_PROTOCOL_VIOLATION),
+					 errmsg("malformed OAUTHBEARER message"),
+					 errdetail("Message contains a key without a value.")));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						(errcode(ERRCODE_PROTOCOL_VIOLATION),
+						 errmsg("malformed OAUTHBEARER message"),
+						 errdetail("Message contains multiple auth values.")));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_PROTOCOL_VIOLATION),
+			 errmsg("malformed OAUTHBEARER message"),
+			 errdetail("Message did not contain a final terminator.")));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				(errcode(ERRCODE_INTERNAL_ERROR),
+				 errmsg("OAuth is not properly configured for this user"),
+				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("oauth_validator_library is not set")));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	if (validator_init == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must define the symbol %s",
+						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	/* Only the validate callback is required; the others may be NULL. */
+	if (ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				(errmsg("%s module \"%s\" must provide a validate_cb callback",
+						"OAuth validator", OAuthValidatorLibrary)));
+
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..735fd05373 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2066,8 +2069,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2454,27 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		if (hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2c4cc8cd41..e91d211b7b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxTokenSize Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
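Tying the new HbaLine fields back to the option parsing earlier in the patch, an HBA entry using this method might look like the following sketch (addresses and issuer URL are illustrative):

```
# TYPE  DATABASE  USER  ADDRESS      METHOD  OPTIONS
host    all       all   0.0.0.0/0    oauth   issuer="https://oauth.example.org" scope="openid" trust_validator_authz=1
```

With `trust_validator_authz=1`, the pg_ident usermap is skipped and the validator's authorization decision is final; otherwise the validator's authn_id is checked against the mapping as usual.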
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 427030f31a..bd27d9279d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -700,6 +703,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..8747b0eb08
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2301 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* array field: append to its slist */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length limited comparison and not compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			/* Separate calls, so xgettext can extract both format strings. */
+			if (actx->used_basic_auth)
+				actx_error(actx, "provider rejected the oauth_client_secret");
+			else
+				actx_error(actx, "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if it differs from the requested scope -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation for this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't rely on kevent's simple errno reporting, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we use per-change receipts and a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds; negative values disable the timer. With epoll, rather than
+ * continually adding and removing the timerfd, we keep it in the set at all
+ * times and just disarm it when it's not needed. With kqueue, the timer event
+ * is added and deleted as necessary.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled on the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving detailed request information from
+		 * libcurl. The callback only takes effect while CURLOPT_VERBOSE is
+		 * enabled, so make sure both options are set.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of a single chunk
+ * of data is defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can
+ * only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our clients have no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports the Device Authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or a (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that return 403
+	 * instead, in violation of the specification. For now we stick to the
+	 * specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * error from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		/* Check before adding to avoid signed overflow, which is undefined. */
+		if (actx->authz.interval > INT_MAX - 5)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+
+		actx->authz.interval += 5;
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if the hook declines
+ * to handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+				/* FALLTHROUGH */
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No libcurl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..73718ac3b1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,666 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+
+	termPQExpBuffer(&buf);
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			/* Don't leak the old value if the server duplicates a field. */
+			free(*ctx->target_field);
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NUL, and was discarded");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
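
[Editor's note, not part of the patch] The new CONNECTION_AUTHENTICATING arm above polls conn->async_auth until the flow finishes or fails. A minimal Python model of that driving loop (all names illustrative, not libpq API; the real code returns to the caller instead of blocking in `wait`):

```python
# Model of PQconnectPoll's CONNECTION_AUTHENTICATING state: keep polling
# the async-auth callback; READING/WRITING expose an alt descriptor to
# wait on, OK re-enters the SASL exchange, FAILED aborts the connection.
FAILED, READING, WRITING, OK = "failed", "reading", "writing", "ok"

class Conn:
    def __init__(self, async_auth):
        self.async_auth = async_auth  # returns (status, altsock)
        self.altsock = None

def drive_async_auth(conn, wait):
    """'wait' stands in for the caller's select()/poll() on altsock."""
    while True:
        status, altsock = conn.async_auth(conn)
        if status == FAILED:
            return False              # error_return in the C code
        if status == OK:
            conn.altsock = None       # flow done; drop the alt descriptor
            return True               # resume CONNECTION_AWAITING_RESPONSE
        conn.altsock = altsock        # surfaced to the app via PQsocket()
        wait(altsock, status)

# A flow that asks to wait for readability once, then completes:
states = iter([(READING, 42), (OK, None)])
conn = Conn(lambda c: next(states))
waited = []
assert drive_async_auth(conn, lambda fd, st: waited.append((fd, st)))
assert waited == [(42, READING)] and conn.altsock is None
```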
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
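
[Editor's note, not part of the patch] The PQsocket() change above is what lets existing applications keep working unmodified: while external authentication is in flight, they transparently poll the alternative descriptor. A sketch of the selection logic in Python (names illustrative):

```python
# Mirror of the patched PQsocket(): prefer altsock when one is installed,
# so select()/poll() loops wait on whichever fd is actually active.
PGINVALID_SOCKET = -1

class Conn:
    def __init__(self, sock=PGINVALID_SOCKET, altsock=PGINVALID_SOCKET):
        self.sock = sock
        self.altsock = altsock

def pq_socket(conn):
    if conn is None:
        return -1
    if conn.altsock != PGINVALID_SOCKET:
        return conn.altsock
    return conn.sock if conn.sock != PGINVALID_SOCKET else -1

# Normal operation: the server socket is reported...
assert pq_socket(Conn(sock=7)) == 7
# ...but once async auth sets an altsock, that takes precedence.
assert pq_socket(Conn(sock=7, altsock=9)) == 9
```

pqSocketCheck() gets the same treatment, so internal waits also honor the alternative descriptor.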
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
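
[Editor's note, not part of the patch] The registration contract for the new hook is small: a NULL (falsy) hook reinstalls the built-in default, which handles nothing. Modeled in Python (names illustrative, not the C API):

```python
# Model of PQsetAuthDataHook()/PQgetAuthDataHook()/PQdefaultAuthDataHook()
# from fe-auth.c: passing a falsy hook restores the default.
def default_auth_data_hook(kind, conn, data):
    return 0  # handle nothing

_auth_data_hook = default_auth_data_hook

def set_auth_data_hook(hook):
    global _auth_data_hook
    _auth_data_hook = hook if hook else default_auth_data_hook

def get_auth_data_hook():
    return _auth_data_hook

# An application hook that only handles the device prompt:
def my_hook(kind, conn, data):
    return 1 if kind == "PQAUTHDATA_PROMPT_OAUTH_DEVICE" else 0

set_auth_data_hook(my_hook)
assert get_auth_data_hook() is my_hook
set_auth_data_hook(None)          # NULL reinstalls the default
assert get_auth_data_hook() is default_auth_data_hook
```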
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..3b8f057a26
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,314 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+elsif ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
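
[Editor's note, not part of the patch] The connstr() helper smuggles test instructions to the mock provider as Base64-encoded JSON inside oauth_client_id. The round trip, sketched in Python to match what oauth_server.py's do_POST() decodes:

```python
import base64
import json

def connstr_client_id(**params):
    """Encode test instructions the way the Perl connstr() helper does."""
    js = json.dumps(params)
    # encode_base64($json, "") in Perl: Base64 with no line breaks
    return base64.b64encode(js.encode()).decode()

# What the mock server recovers from the client_id POST field:
encoded = connstr_client_id(stage="token", retries=1)
decoded = json.loads(base64.b64decode(encoded))
assert decoded == {"stage": "token", "retries": 1}
```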
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
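
[Editor's note, not part of the patch] The overflow test above exercises the RFC 8628 rule that a slow_down error increases the device-flow polling interval by 5 seconds; with the server reporting an interval of ~0 (all bits set), a naive increment would overflow. A hedged sketch of the guard the client needs (constants illustrative, not libpq's actual implementation):

```python
# RFC 8628 section 3.5: on "slow_down", add 5 seconds to the interval.
# The client must refuse rather than wrap on arithmetic overflow.
INT_MAX = 2**31 - 1

def next_interval(current, error_code):
    """Return the new polling interval, or None to fail the exchange."""
    if error_code != "slow_down":
        return current
    if current > INT_MAX - 5:
        return None  # interval overflow; abort instead of wrapping
    return current + 5

assert next_interval(5, "authorization_pending") == 5
assert next_interval(5, "slow_down") == 10
assert next_interval(INT_MAX, "slow_down") is None
```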
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$common_connstr = "$common_connstr oauth_client_secret=12345";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => '12345'),
+	"oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer.
+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..fb731ed2e5
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,337 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send Authorization header")
+
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        expected_creds = f"{self.client_id}:{secret}"
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
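For reviewers poking at the mock server by hand: in parameterized mode, do_POST() above expects the per-test behavior to arrive Base64-encoded as JSON inside the client_id field. A small sketch of the encoding a test would perform (the parameter names mirror the handlers above; the values are illustrative):

```python
import base64
import json

def encode_test_params(params: dict) -> str:
    """Pack test parameters into a client_id the mock server understands.

    do_POST() Base64-decodes the client_id and parses it as JSON whenever
    the server runs in parameterized mode.
    """
    return base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")

# Ask the /token endpoint to fail twice with "authorization_pending"
# before succeeding, and to advertise a 1-second polling interval.
client_id = encode_test_params({"stage": "token", "retries": 2, "interval": 1})

# The server-side inverse, as performed in do_POST():
decoded = json.loads(base64.b64decode(client_id))
```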
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..d12c79e2a2
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,97 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* do nothing */
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
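To summarize the test module's behavior in one place: validate_token() authorizes every token, and reports either the oauth_validator.authn_id GUC (when set) or the requested role as the authenticated identity. A Python model of that decision logic, for illustration only (the authoritative code is the C above):

```python
def validate_token(token, role, configured_authn_id=None):
    """Model of the test validator: every token is authorized, and the
    reported identity is the oauth_validator.authn_id GUC when set,
    falling back to the requested role (as in validate_token() above)."""
    return {
        "authorized": True,
        "authn_id": configured_authn_id if configured_authn_id is not None else role,
    }

result = validate_token("9243959234", "alice")
```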
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 007571e948..83360b397a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
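run() above treats the first line the child writes to stdout as the advertised port, and stop() relies on SIGTERM plus closing the pipe. The same parent-side handshake can be sketched in Python (the inline child here is a stand-in for t/oauth_server.py):

```python
import subprocess
import sys

def start_and_read_port(cmd):
    """Spawn a server child and read the port it advertises on its first
    stdout line, mirroring OAuthServer::run()."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    line = proc.stdout.readline().strip()
    if not line.isdigit():
        proc.kill()
        raise RuntimeError(f"server did not advertise a valid port: {line!r}")
    return proc, int(line)

# Stand-in child: print a port, then "serve" until terminated.
child = [sys.executable, "-c", "print(5432, flush=True); import time; time.sleep(60)"]
proc, port = start_and_read_port(child)
proc.terminate()  # the Perl harness sends SIGTERM the same way
proc.wait()
```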
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 54bf29be24..f18fccafa5 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1717,6 +1721,8 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1781,6 +1787,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1941,11 +1948,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3065,6 +3075,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3460,6 +3472,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3660,6 +3674,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
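On the pgindent change in the patch above: Perl's `(?1)` recursion matches the balanced parentheses after CURL_IGNORE_DEPRECATION so the macro body can be re-wrapped in brace markers. Python's stdlib `re` has no recursion, but the same span can be found with an explicit depth counter; a sketch of what the regex captures as group $2:

```python
def extract_balanced(src: str, macro: str = "CURL_IGNORE_DEPRECATION") -> str:
    """Return the text inside the macro's balanced parentheses, i.e. the
    span the Perl recursive regex captures as group $2."""
    start = src.index(macro) + len(macro)
    assert src[start] == "(", "macro must be followed by an open paren"
    depth = 0
    for i in range(start, len(src)):
        if src[i] == "(":
            depth += 1
        elif src[i] == ")":
            depth -= 1
            if depth == 0:
                return src[start + 1:i]
    raise ValueError("unbalanced parentheses")

code = "CURL_IGNORE_DEPRECATION(curl_easy_setopt(h, CURLOPT_PUT, 1);)"
body = extract_balanced(code)
```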

Attachment: v33-0002-v30-review-comments.patch
From 472d402726dbe44861ec80e5a53aa53f5fdf4036 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Fri, 18 Oct 2024 12:40:26 +0200
Subject: [PATCH v33 2/3] v30-review-comments

---
 doc/src/sgml/client-auth.sgml                 |  98 +++++++++-
 doc/src/sgml/config.sgml                      |  17 ++
 doc/src/sgml/installation.sgml                |  29 +++
 doc/src/sgml/libpq.sgml                       |  14 +-
 doc/src/sgml/oauth-validators.sgml            |  36 +++-
 doc/src/sgml/postgres.sgml                    |   2 +-
 doc/src/sgml/regress.sgml                     |  10 +
 src/backend/libpq/auth-oauth.c                | 172 +++++++++++-------
 src/backend/libpq/hba.c                       |  26 +++
 src/interfaces/libpq/fe-auth-oauth.c          |   5 +-
 .../modules/oauth_validator/t/001_server.pl   |   9 +-
 src/test/modules/oauth_validator/validator.c  |   7 +-
 12 files changed, 343 insertions(+), 82 deletions(-)

diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index fb78b6c886..8d351c2089 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -2336,7 +2336,103 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </indexterm>
 
    <para>
-    TODO
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth support has to be enabled when <productname>PostgreSQL</productname>
+    is built, see <xref linkend="installation"/> for more information.
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients when connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after successfully authenticating
+       the resource owner and obtaining authorization.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources and
+       accepts access tokens.  The <productname>PostgreSQL</productname> cluster
+       being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authentication server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
+        must contact to receive a bearer token.  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        The OAuth scope required for the server to authenticate and/or authorize
+        the user. The value of the scope is dependent on the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more information
+        on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between identities provided by the OAuth validator
+        and database user names.  See <xref linkend="auth-username-maps"/> for
+        details.  If a map is not specified, the user name returned from the
+        OAuth validator must match the role being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped.
+        If the OAuth token is successfully validated, the user can connect
+        as the requested role.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
    </para>
   </sect1>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f089a8ff4c..73bf21c599 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1214,6 +1214,23 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library to use for validating OAuth connection tokens. If set to
+        an empty string (the default), OAuth connections will be refused. For
+        more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3a491b5989..9a76aac08b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1064,6 +1064,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-oauth">
+       <term><option>--with-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with OAuth authentication and authorization support.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2508,6 +2522,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-oauth">
+      <term><option>-Doauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with OAuth authentication and authorization support.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding.  The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ffec0431e3..86fd146af2 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2341,7 +2341,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_id</literal></term>
       <listitem>
        <para>
-        TODO
+        The client identifier as issued by the authorization server.
        </para>
       </listitem>
      </varlistentry>
@@ -2350,7 +2350,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_secret</literal></term>
       <listitem>
        <para>
-        TODO
+        The client secret issued by the authorization server, if any.
        </para>
       </listitem>
      </varlistentry>
@@ -2368,7 +2368,8 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_scope</literal></term>
       <listitem>
        <para>
-        TODO
+        The scope of the access request sent to the authorization server.
+        This parameter is optional.
        </para>
       </listitem>
      </varlistentry>
@@ -10017,6 +10018,11 @@ void PQinitSSL(int do_ssl);
 void PQsetAuthDataHook(PQauthDataHook_type hook);
 </synopsis>
       </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
      </listitem>
     </varlistentry>
 
@@ -10025,7 +10031,7 @@ void PQsetAuthDataHook(PQauthDataHook_type hook);
 
      <listitem>
       <para>
-       TODO
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
 <synopsis>
 PQauthDataHook_type PQgetAuthDataHook(void);
 </synopsis>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index c9914519fc..4615159a9f 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -1,13 +1,13 @@
 <!-- doc/src/sgml/oauth-validators.sgml -->
 
 <chapter id="oauth-validators">
- <title>Implementing OAuth Validator Modules</title>
+ <title>OAuth Validator Modules</title>
  <indexterm zone="oauth-validators">
   <primary>OAuth Validators</primary>
  </indexterm>
  <para>
   <productname>PostgreSQL</productname> provides infrastructure for creating
-  custom modules to perform server-side validation of OAuth tokens.
+  custom modules to perform server-side validation of OAuth bearer tokens.
  </para>
  <para>
   OAuth validation modules must at least consist of an initialization function
@@ -74,9 +74,41 @@ typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
   <sect2 id="oauth-validator-callback-validate">
    <title>Validate Callback</title>
    <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  The token has
+    been parsed to ensure that it is syntactically well-formed, but no semantic
+    checks have been performed.  Any state set in previous calls is available in
+    <structfield>state->private_data</structfield>.
+
 <programlisting>
 typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
 </programlisting>
+
+    <replaceable>token</replaceable> contains the bearer token to validate, and
+    <replaceable>role</replaceable> contains the role the user requests to
+    log in as.  The callback must return a <literal>ValidatorModuleResult</literal>
+    struct, which is defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    When <structfield>authorized</structfield> is set to <literal>true</literal>,
+    the bearer token is considered valid.
+    To authenticate the user, the authenticated user name shall be returned in
+    the <structfield>authn_id</structfield> field.  When authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    <structfield>authn_id</structfield> user name must exactly match the role
+    being logged in as.
+   </para>
+   <para>
+    The caller assumes ownership of the returned allocation; the validator
+    module must not access the memory in any way after it has been
+    returned.
    </para>
   </sect2>
 
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 321d4590a3..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
@@ -264,7 +265,6 @@ break is not needed in a wider output rendering.
   &bki;
   &planstats;
   &backup-manifest;
-  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server speaking HTTP.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index dea973247a..cfa5769b10 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -90,10 +90,10 @@ oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
 {
 	struct oauth_ctx *ctx;
 
-	if (strcmp(selected_mech, OAUTHBEARER_NAME))
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("client selected an invalid SASL authentication mechanism")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
 
 	ctx = palloc0(sizeof(*ctx));
 
@@ -142,14 +142,14 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	 */
 	if (inputlen == 0)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("The message is empty.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
 	if (inputlen != strlen(input))
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message length does not match input length.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
 
 	switch (ctx->state)
 	{
@@ -165,9 +165,9 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 			 */
 			if (inputlen != 1 || *input != KVSEP)
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Client did not send a kvsep response.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
 
 			/* The (failed) handshake is now complete. */
 			ctx->state = OAUTH_STATE_FINISHED;
@@ -193,9 +193,9 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	{
 		case 'p':
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
 			break;
 
 		case 'y':				/* fall through */
@@ -203,19 +203,19 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 			p++;
 			if (*p != ',')
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Comma expected, but found character \"%s\".",
-								   sanitize_char(*p))));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
 			p++;
 			break;
 
 		default:
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Unexpected channel-binding flag %s.",
-							   sanitize_char(cbind_flag))));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
 	}
 
 	/*
@@ -223,38 +223,38 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	 */
 	if (*p == 'a')
 		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("client uses authorization identity, but it is not supported")));
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
 	if (*p != ',')
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Unexpected attribute %s in client-first-message.",
-						   sanitize_char(*p))));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
 	p++;
 
 	/* All remaining fields are separated by the RFC's kvsep (\x01). */
 	if (*p != KVSEP)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Key-value separator expected, but found character %s.",
-						   sanitize_char(*p))));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
 	p++;
 
 	auth = parse_kvpairs_for_auth(&p);
 	if (!auth)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message does not contain an auth value.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
 
 	/* We should be at the end of our message. */
 	if (*p)
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains additional data after the final terminator.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
 
 	if (!validate(ctx->port, auth))
 	{
@@ -308,16 +308,16 @@ validate_kvpair(const char *key, const char *val)
 
 	if (!key[0])
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains an empty key name.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
 
 	span = strspn(key, key_allowed_set);
 	if (key[span] != '\0')
 		ereport(ERROR,
-				(errcode(ERRCODE_PROTOCOL_VIOLATION),
-				 errmsg("malformed OAUTHBEARER message"),
-				 errdetail("Message contains an invalid key name.")));
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
 
 	/*-----
 	 * From Sec 3.1:
@@ -341,9 +341,9 @@ validate_kvpair(const char *key, const char *val)
 
 			default:
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Message contains an invalid value.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
 		}
 	}
 }
@@ -386,9 +386,9 @@ parse_kvpairs_for_auth(char **input)
 		end = strchr(pos, KVSEP);
 		if (!end)
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Message contains an unterminated key/value pair.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
 		*end = '\0';
 
 		if (pos == end)
@@ -404,9 +404,9 @@ parse_kvpairs_for_auth(char **input)
 		sep = strchr(pos, '=');
 		if (!sep)
 			ereport(ERROR,
-					(errcode(ERRCODE_PROTOCOL_VIOLATION),
-					 errmsg("malformed OAUTHBEARER message"),
-					 errdetail("Message contains a key without a value.")));
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
 		*sep = '\0';
 
 		/* Both key and value are now safely terminated. */
@@ -418,9 +418,9 @@ parse_kvpairs_for_auth(char **input)
 		{
 			if (auth)
 				ereport(ERROR,
-						(errcode(ERRCODE_PROTOCOL_VIOLATION),
-						 errmsg("malformed OAUTHBEARER message"),
-						 errdetail("Message contains multiple auth values.")));
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
 
 			auth = value;
 		}
@@ -438,9 +438,9 @@ parse_kvpairs_for_auth(char **input)
 	}
 
 	ereport(ERROR,
-			(errcode(ERRCODE_PROTOCOL_VIOLATION),
-			 errmsg("malformed OAUTHBEARER message"),
-			 errdetail("Message did not contain a final terminator.")));
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
 
 	pg_unreachable();
 	return NULL;
@@ -461,9 +461,9 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	 */
 	if (!ctx->issuer || !ctx->scope)
 		ereport(FATAL,
-				(errcode(ERRCODE_INTERNAL_ERROR),
-				 errmsg("OAuth is not properly configured for this user"),
-				 errdetail_log("The issuer and scope parameters must be set in pg_hba.conf.")));
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
 
 	/*------
 	 * Build the .well-known URI based on our issuer.
@@ -603,6 +603,7 @@ validate(Port *port, const char *auth)
 	int			map_status;
 	ValidatorModuleResult *ret;
 	const char *token;
+	bool		status;
 
 	/* Ensure that we have a correct token to validate */
 	if (!(token = validate_token_format(auth)))
@@ -613,7 +614,10 @@ validate(Port *port, const char *auth)
 										  token, port->user_name);
 
 	if (!ret->authorized)
-		return false;
+	{
+		status = false;
+		goto cleanup;
+	}
 
 	if (ret->authn_id)
 		set_authn_id(port, ret->authn_id);
@@ -626,7 +630,8 @@ validate(Port *port, const char *auth)
 		 * validator implementation; all that matters is that the validator
 		 * says the user can log in with the target role.
 		 */
-		return true;
+		status = true;
+		goto cleanup;
 	}
 
 	/* Make sure the validator authenticated the user. */
@@ -642,9 +647,31 @@ validate(Port *port, const char *auth)
 	/* Finally, check the user map. */
 	map_status = check_usermap(port->hba->usermap, port->user_name,
 							   MyClientConnectionInfo.authn_id, false);
-	return (map_status == STATUS_OK);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it to avoid accidental re-use.
+	 */
+	if (ret->authn_id != NULL)
+	{
+		explicit_bzero(ret->authn_id, strlen(ret->authn_id));
+		pfree(ret->authn_id);
+	}
+	pfree(ret);
+
+	return status;
 }
 
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
 static void
 load_validator_library(void)
 {
@@ -652,20 +679,25 @@ load_validator_library(void)
 
 	if (OAuthValidatorLibrary[0] == '\0')
 		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				 errmsg("oauth_validator_library is not set")));
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("oauth_validator_library is not set"));
 
 	validator_init = (OAuthValidatorModuleInit)
 		load_external_function(OAuthValidatorLibrary,
 							   "_PG_oauth_validator_module_init", false, NULL);
 
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
 	if (validator_init == NULL)
 		ereport(ERROR,
-				(errmsg("%s module \"%s\" have to define the symbol %s",
-						"OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init")));
+				errmsg("%s module \"%s\" has to define the symbol %s",
+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
 
 	ValidatorCallbacks = (*validator_init) ();
 
+	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
 	if (ValidatorCallbacks->startup_cb != NULL)
 		ValidatorCallbacks->startup_cb(validator_module_state);
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 735fd05373..dcb1558ad3 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -2042,6 +2042,32 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping
+		 * is nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+					/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 73718ac3b1..f5fc6ebc23 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -89,8 +89,11 @@ client_initial_response(PGconn *conn, const char *token)
 
 	if (!PQExpBufferDataBroken(buf))
 		response = strdup(buf.data);
-
 	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
 	return response;
 }
 
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 3b8f057a26..3f06f5be76 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -11,11 +11,18 @@ use PostgreSQL::Test::Utils;
 use PostgreSQL::Test::OAuthServer;
 use Test::More;
 
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
 if ($ENV{with_oauth} ne 'curl')
 {
 	plan skip_all => 'client-side OAuth not supported by this build';
 }
-elsif ($ENV{with_python} ne 'yes')
+
+if ($ENV{with_python} ne 'yes')
 {
 	plan skip_all => 'OAuth tests require --with-python to run';
 }
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index d12c79e2a2..dbba326bc4 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -67,7 +67,10 @@ validator_startup(ValidatorModuleState *state)
 static void
 validator_shutdown(ValidatorModuleState *state)
 {
-	/* do nothing */
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
 }
 
 static ValidatorModuleResult *
@@ -77,7 +80,7 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
-		elog(ERROR, "oauth_validator: private state cookie changed to %p",
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
 			 state->private_data);
 
 	res = palloc(sizeof(ValidatorModuleResult));
-- 
2.34.1

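As an aside, for anyone reviewing the oauth_exchange() parsing changes above without RFC 7628 open: the client initial response being parsed is a gs2 header ("n,,"), a kvsep (\x01), one or more key=value pairs, and a double-kvsep terminator. Here's a minimal Python sketch of that framing (not part of the patch; the helper names are mine) that mirrors the server-side checks for a missing auth value, duplicate auth values, and a missing final terminator:

```python
# Sketch of the OAUTHBEARER client initial response framing from RFC 7628
# Section 3.1, as parsed by the server-side oauth_exchange() in the patch.
KVSEP = "\x01"

def build_initial_response(token: str) -> str:
    # gs2 header with no channel binding ("n") and no authzid, then one
    # auth kvpair, then the double-kvsep final terminator.
    return "n,," + KVSEP + "auth=Bearer " + token + KVSEP + KVSEP

def parse_auth(msg: str) -> str:
    header, rest = msg.split(KVSEP, 1)
    if header != "n,,":
        raise ValueError("only the no-channel-binding form is handled here")
    if not rest.endswith(KVSEP * 2):
        raise ValueError("message did not contain a final terminator")
    auth = None
    # Drop one trailing kvsep so the final empty field marks the terminator.
    for pair in rest[:-1].split(KVSEP):
        if not pair:
            continue
        key, eq, value = pair.partition("=")
        if not eq:
            raise ValueError("message contains a key without a value")
        if key == "auth":
            if auth is not None:
                raise ValueError("message contains multiple auth values")
            auth = value
    if auth is None:
        raise ValueError("message does not contain an auth value")
    return auth
```

The double kvsep at the end is easy to miss when hand-crafting test messages; the server treats its absence as a protocol violation, which is what several of the errdetail strings above report.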
Attachment: v33-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 5a3701853107dc341a9683f989f1dfc4a5084f33 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v33 3/3] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1920 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5639 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against the 32-bit libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 7518520ec9..ce9a7d28f2 100644
--- a/meson.build
+++ b/meson.build
@@ -3385,6 +3385,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3546,6 +3549,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
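An aside for reviewers: the Hi() routine above is exactly one output block of PBKDF2 with HMAC-SHA-256 (RFC 2898), so it can be cross-checked against the standard library. A stdlib-only sketch, duplicating the helper to stay self-contained:

```python
import hashlib
import hmac


def h_i(password, salt, iterations):
    """Hi(str, salt, i) from RFC 5802, Sec. 2.2: one block of PBKDF2-HMAC-SHA-256."""
    u = hmac.new(password, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    acc = u
    for _ in range(iterations - 1):
        u = hmac.new(password, u, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, u))
    return acc


# Cross-check against the standard library's PBKDF2.
for i in (1, 2, 4096):
    expected = hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", i)
    assert h_i(b"secret", b"12345", i) == expected
```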
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
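For context, a real server verifies the proof from the other direction: it stores only StoredKey (and ServerKey), recovers ClientKey by XORing the proof with ClientSignature, and checks that H(ClientKey) matches StoredKey (RFC 5802, Sec. 3). A stdlib-only sketch with illustrative values:

```python
import hashlib
import hmac


def verify_client_proof(stored_key, auth_message, proof):
    """Server-side SCRAM check (RFC 5802, Sec. 3): recover ClientKey from the
    proof and confirm that its hash matches the stored key."""
    signature = hmac.new(stored_key, auth_message, hashlib.sha256).digest()
    client_key = bytes(p ^ s for p, s in zip(proof, signature))
    return hashlib.sha256(client_key).digest() == stored_key


# Derive the keys the way a client would (all values here are illustrative).
salted = hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 4096)
client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
stored_key = hashlib.sha256(client_key).digest()

auth_message = b"n=,r=abc,r=abcdef,s=MTIzNDU=,i=4096,c=biws,r=abcdef"
signature = hmac.new(stored_key, auth_message, hashlib.sha256).digest()
proof = bytes(k ^ s for k, s in zip(client_key, signature))

assert verify_client_proof(stored_key, auth_message, proof)
assert not verify_client_proof(stored_key, auth_message, b"\x00" * 32)
```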
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..dd047423de
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1920 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
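The framing unpacked above comes from RFC 7628, Sec. 3.1: a GS2 header, then ^A-delimited key/value pairs, with an empty pair terminating the message. A minimal round-trip sketch (the token value is made up):

```python
def build_initial_response(token):
    """Assemble an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1)."""
    gs2_header = b"n,,"  # no channel binding, no authzid
    return gs2_header + b"\x01" + b"auth=Bearer " + token + b"\x01" + b"\x01"


msg = build_initial_response(b"sometoken")
assert msg == b"n,,\x01auth=Bearer sometoken\x01\x01"

# Split the same way get_auth_value() does: four fields, the last two empty.
kvpairs = msg.split(b"\x01")
assert kvpairs == [b"n,,", b"auth=Bearer sometoken", b"", b""]

key, value = kvpairs[1].split(b"=", 1)
assert key == b"auth" and value == b"Bearer sometoken"
```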
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OpenID provider thread did not stop within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
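One detail worth noting about the handler above: `urllib.parse.parse_qs` decodes percent-escapes and returns every parameter as a list, which is why the endpoint callbacks later compare against one-element lists. For example:

```python
from urllib.parse import parse_qs

# A form-encoded device-flow token request body (values are illustrative).
body = "grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&device_code=abc123"
params = parse_qs(body)

# Values always come back as lists, even when a key appears only once.
assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
assert params["device_code"] == ["abc123"]
```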
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the value returned when the test installs no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
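The credential check above expects RFC 7617 Basic authentication: base64 over `client_id:client_secret`. A small sketch of producing the header a conforming client would send (the identifiers are illustrative):

```python
import base64


def basic_auth_header(client_id, secret):
    """Build an HTTP Basic Authorization header value (RFC 7617)."""
    creds = f"{client_id}:{secret}".encode("ascii")
    return "Basic " + base64.b64encode(creds).decode("ascii")


header = basic_auth_header("f02c6361", "hunter2")
method, creds = header.split()
assert method == "Basic"
assert base64.b64decode(creds) == b"f02c6361:hunter2"
```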
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
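The interval arithmetic exercised by this test follows RFC 8628, Sec. 3.5: the client polls no faster than the advertised `interval` (defaulting to 5 seconds when the server omits it), and each `slow_down` error adds 5 seconds. A sketch of that policy (the helper name is ours):

```python
def next_poll_interval(current, error=None):
    """Device-flow polling interval per RFC 8628, Sec. 3.5."""
    interval = 5 if current is None else current  # default when the server omits it
    if error == "slow_down":
        interval += 5  # the client MUST increase its interval by 5 seconds
    return interval


assert next_poll_interval(None) == 5          # interval omitted by the server
assert next_poll_interval(1) == 1             # server-specified interval
assert next_poll_interval(1, "slow_down") == 6
assert next_poll_interval(6, "slow_down") == 11
```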
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
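
(Aside for reviewers: the timer/pipe dance in `get_token` above is the classic self-pipe wakeup pattern: a background timer writes one byte to the write end, and whoever is polling the read end — here libpq, via the descriptor returned in `p_altsock` — wakes up and drains it. A minimal standalone sketch of the same pattern; names are illustrative, not from the patch:)

```python
import os
import select
import threading


def self_pipe_wakeup(delay=0.05, timeout=5.0):
    """Demonstrate the self-pipe wakeup pattern used by the async token
    callback: a timer writes a wakeup byte, and select() on the read end
    unblocks once it arrives."""
    readfd, writefd = os.pipe()
    try:
        # Arrange for a wakeup in a little bit of time.
        threading.Timer(delay, os.write, args=(writefd, b"\0")).start()

        # Block until the wakeup byte arrives. (libpq performs the
        # equivalent poll on the altsock the callback hands it.)
        ready, _, _ = select.select([readfd], [], [], timeout)
        if not ready:
            return False  # timed out

        os.read(readfd, 1)  # drain the byte so the pipe is reusable
        return True
    finally:
        os.close(readfd)
        os.close(writefd)
```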
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one pattern. It's not very
+    efficient, but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
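
(The resulting pattern simply wraps each alternative in its own capture group. An equivalent sketch of the helper and its behavior, for illustration:)

```python
import re


def alt_patterns(*patterns):
    """Combine alternative regexes, one capture group per alternative
    (an equivalent one-liner form of the test helper)."""
    return "|".join(f"({p})" for p in patterns)


pat = alt_patterns(r"foo\d+", r"bar")
assert pat == r"(foo\d+)|(bar)"
assert re.fullmatch(pat, "foo123")
assert re.fullmatch(pat, "bar")
assert not re.fullmatch(pat, "baz")
```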
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
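
(A unique `object()` sentinel is needed here because `None` is itself one of the parametrized bad values, so `None` can't double as the "delete this field entirely" marker. Roughly, mirroring the tweak performed in the endpoint callbacks below:)

```python
Missing = object()  # sentinel: unlike None, can't collide with any JSON value


def apply_bad_value(resp, field, bad_value):
    """Illustrative mirror of the response tweak in the endpoint callbacks."""
    if bad_value is Missing:
        del resp[field]  # simulate a field that is absent altogether
    else:
        # This branch includes bad_value=None ("field present but null"),
        # which is exactly why None can't serve as the sentinel.
        resp[field] = bad_value
    return resp


assert "interval" not in apply_bad_value({"interval": 0}, "interval", Missing)
assert apply_bad_value({"interval": 0}, "interval", None) == {"interval": None}
```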
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # Equivalent to INT_MAX from C's limits.h (assuming a 32-bit int).
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
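(Aside, not part of the patch: the INT_MAX derivation used in the test above can be checked in isolation. The sketch below assumes the platform's C `unsigned int` is 32 bits, which holds on mainstream targets.)

```python
import ctypes

# c_uint(-1) wraps around to UINT_MAX; floor-dividing by two yields INT_MAX.
# (Assumes a 32-bit C unsigned int, as on all mainstream platforms.)
int_max = ctypes.c_uint(-1).value // 2

assert int_max == 2**31 - 1
```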
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG", raising=False)
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
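(Aside, not part of the patch: the version packing done by `protocol()` above can be verified with plain integer arithmetic. The constants below are the well-known v3 startup version and the SSLRequest magic number from the PostgreSQL wire protocol.)

```python
# Standalone sketch mirroring pq3.protocol(): the major version occupies the
# high 16 bits of the version word, the minor version the low 16 bits.
def protocol(major, minor):
    return (major << 16) | minor

assert protocol(3, 0) == 196608          # the v3 startup version
assert protocol(1234, 5679) == 80877103  # the SSLRequest magic number
```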
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        out = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            out.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            out.append(v)
+
+        out.append(b"")
+        return out
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
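(Aside, not part of the patch: the SASLInitialResponse wrapper above carries mechanism-specific data. For OAUTHBEARER, RFC 7628 defines the client's initial response as a GS2 header followed by ^A-delimited key/value pairs, ending in a double ^A. A sketch, with a helper name of our own choosing:)

```python
# Sketch of an OAUTHBEARER initial client response per RFC 7628: a GS2
# header ("n,,"), a ^A separator, the auth key/value pair, then two ^A's.
KVSEP = "\x01"

def oauthbearer_initial_response(token):
    return f"n,,{KVSEP}auth=Bearer {token}{KVSEP}{KVSEP}".encode("ascii")

assert (
    oauthbearer_initial_response("abcd1234")
    == b"n,,\x01auth=Bearer abcd1234\x01\x01"
)
```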
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
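(Aside, not part of the patch: the translation table built above can be exercised in isolation — every non-ASCII or unprintable byte maps to `b"."` so the hexdump's text column stays readable.)

```python
# Standalone sketch of _hexdump_translation_map(): collect every byte that is
# non-ASCII or unprintable, and map each of them to a literal ".".
unprintable = bytes(i for i in range(256) if i >= 128 or not chr(i).isprintable())
table = bytes.maketrans(unprintable, b"." * len(unprintable))

assert b"abc\x00\xff".translate(table) == b"abc.."
assert b"hello, world".translate(table) == b"hello, world"
```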
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request TLS by sending an SSLRequest packet (protocol 1234,5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
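(Commentary, not part of the patch.) The `connect` fixture above funnels every socket and pq3 wrapper through a single `contextlib.ExitStack`, so teardown happens in one place no matter how many connections a test opens. A minimal standalone sketch of that pattern; the `Resource` class is purely illustrative:

```python
import contextlib

log = []

class Resource:
    """Stand-in for a socket or pq3 wrapper (illustrative only)."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        log.append(f"closed {self.name}")

with contextlib.ExitStack() as stack:
    def factory(name):
        # Mirrors conn_factory(): create, register for cleanup, hand back.
        return stack.enter_context(Resource(name))

    factory("sock")
    factory("conn")

# ExitStack unwinds in LIFO order, like nested with-blocks.
assert log == ["closed conn", "closed sock"]
```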
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
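(Commentary, not part of the patch.) For reference, `test_validate()` above reduces to a small decision table; here it is paraphrased in Python, with keyword names mirroring the module's `oauthtest.*` GUCs:

```python
# Hypothetical Python paraphrase of test_validate(); not part of the patch.
def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    if reflect_role:
        # Ignore the token entirely; echo the requested role as the authn_id.
        return {"authorized": True, "authn_id": role}
    return {
        # Authorized only if a non-empty expected token matches exactly.
        "authorized": bool(expected_bearer) and token == expected_bearer,
        "authn_id": authn_id if set_authn_id else None,
    }

assert validate("t0k3n", "alice", expected_bearer="t0k3n")["authorized"]
assert not validate("wrong", "alice", expected_bearer="t0k3n")["authorized"]
assert not validate("anything", "alice")["authorized"]  # empty expected_bearer
assert validate("x", "alice", reflect_role=True) == \
    {"authorized": True, "authn_id": "alice"}
```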
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..ea31ad4f87
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The size of the
+    generated token, in bytes, may be specified; if unset, a small 16-byte
+    token will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
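(Commentary, not part of the patch.) The `size // 4 * 3` arithmetic above works because `secrets.token_urlsafe(n)` base64url-encodes `n` random bytes into an unpadded string of `ceil(4n/3)` characters, i.e. 4 output characters per 3 input bytes, which lines up exactly when the requested size is a multiple of 4:

```python
import secrets

# 3 random bytes -> 4 base64url characters, so size // 4 * 3 input bytes
# yield exactly `size` output characters for any size divisible by 4.
for size in (16, 1024, 4096):
    nbytes = size // 4 * 3
    assert len(secrets.token_urlsafe(nbytes)) == size
```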
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
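(Commentary, not part of the patch.) The initial response built above follows the RFC 7628 layout: a GS2 header (`n,,` meaning no channel binding and no authorization identity), then `\x01`-separated key/value pairs, closed by a double-`\x01` terminator. A sketch of the wire format in isolation:

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator

def oauthbearer_initial(token: bytes) -> bytes:
    # GS2 header "n,,", the auth key/value pair, and the double-kvsep
    # terminator that ends the key/value list.
    return b"n,," + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP

assert oauthbearer_initial(b"abcd1234") == b"n,,\x01auth=Bearer abcd1234\x01\x01"
```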
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
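(Commentary, not part of the patch.) On failure, RFC 7628 has the server reply with a JSON status document inside the SASLContinue "challenge", which is the shape `expect_handshake_failure()` checks above. A sketch of that document, with illustrative issuer/scope values (the real ones come from the HBA line):

```python
import json

# Illustrative values; the real issuer and scope come from the HBA entry.
challenge = {
    "status": "invalid_token",
    "scope": "openid 1234abcd",
    "openid-configuration":
        "https://example.com/1234abcd/.well-known/openid-configuration",
}
wire = json.dumps(challenge).encode("ascii")
assert json.loads(wire)["status"] == "invalid_token"
```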
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
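For contrast with the malformed messages exercised above, here is a minimal sketch (illustrative only, not part of the patch; the helper name is hypothetical) of a well-formed OAUTHBEARER initial client response per RFC 7628: a GS2 header, then a \x01-separated key/value list carrying the auth value, closed by a final \x01:

```python
def build_initial_response(token: str, authzid: str = "") -> bytes:
    # Assemble a well-formed OAUTHBEARER initial response (RFC 7628).
    gs2 = "n," + authzid + ","  # "n" = client does not support channel binding
    kvpairs = "\x01auth=Bearer " + token + "\x01"  # each pair ends with \x01
    return (gs2 + kvpairs + "\x01").encode("ascii")  # extra \x01 ends the list
```

The failure cases above each break exactly one of these pieces: the comma after the channel-binding flag, the authzid terminator, the key/value separators, or the final list terminator.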
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
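The implied-length cases above follow from the v3 startup framing: a 4-byte big-endian length that counts itself, a 4-byte protocol code, then NUL-terminated key/value parameters closed by one extra NUL. A hypothetical standalone builder (sketch only; `pq3.Startup` is the real construct) makes the arithmetic explicit:

```python
import struct

def build_startup(params: dict, proto: int = 0x30000) -> bytes:
    # "key\0value\0" pairs plus a closing NUL; the length prefix includes itself.
    payload = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in params.items()
    ) + b"\x00"
    return struct.pack("!ii", 8 + len(payload), proto) + payload
```

For the `user=jsmith, database=postgres` case this yields the 0x27-byte message asserted above.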
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
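The DataRow framing these tests exercise is: an int16 column count, then per column an int32 length followed by that many bytes, with -1 marking a NULL column that carries no bytes at all. A minimal sketch (hypothetical helper, mirroring what `pq3.Pq3.build` does for the payload):

```python
import struct

def build_datarow(columns) -> bytes:
    out = struct.pack("!h", len(columns))        # int16 column count
    for col in columns:
        if col is None:
            out += struct.pack("!i", -1)         # NULL: length -1, no payload bytes
        else:
            out += struct.pack("!i", len(col)) + col
    return out
```

This is why `[None, None]` serializes to two bytes of count followed by eight 0xFF bytes and nothing else.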
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
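The SASLInitialResponse framing asserted above is: the mechanism name NUL-terminated, then an int32 length for the optional initial response, with -1 meaning no initial response was sent at all (distinct from a zero-length one). Sketched as a hypothetical helper:

```python
import struct

def build_sasl_initial(name: bytes, data=None) -> bytes:
    out = name + b"\x00"                         # mechanism name, NUL-terminated
    if data is None:
        return out + struct.pack("!i", -1)       # -1: no initial response at all
    return out + struct.pack("!i", len(data)) + data  # may be zero-length
```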
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#142 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Antonin Houska (#138)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Oct 17, 2024 at 10:51 PM Antonin Houska <ah@cybertec.at> wrote:

This is the 1st round, based on reading the code. I'll continue paying
attention to the project and possibly post some more comments in the future.

Thanks again for the reviews!

* Information on the new method should be added to pg_hba.conf.sample.method.

Whoops, this will be fixed in v34.

* Is it important that fe_oauth_state.token also contains the "Bearer"
keyword? I'd expect only the actual token value here. The keyword can be
added to the authentication message w/o storing it.

The same applies to the 'token' structure in fe-auth-oauth-curl.c.

Excellent question; I've waffled a bit on that myself. I think you're
probably right, but here's some background on why I originally made
that decision.

RFC 7628 defines not only OAUTHBEARER but also a generic template for
future OAuth-based SASL methods, and as part of that, the definition
of the "auth" key is incredibly vague:

auth (REQUIRED): The payload that would be in the HTTP
Authorization header if this OAuth exchange was being carried
out over HTTP.

I was worried that forcing a specific format would prevent future
extensibility if, say, the Bearer scheme were updated to add additional
auth-params. I was also wondering if maybe a future specification
would allow OAUTHBEARER to carry a different scheme altogether, such
as DPoP [1].

However:
- auth-param support for Bearer was considered at the draft stage and
explicitly removed, with the old drafts stating "If additional
parameters are needed in the future, a different scheme would need to
be defined."
- I think the intent of RFC 7628 is that a new SASL mechanism will be
named for each new scheme (even if the new scheme shares all of the
bones of the old one). So DPoP tokens wouldn't piggyback on
OAUTHBEARER, and instead something like an OAUTHDPOP mech would need
to be defined.

So: the additional complexity in the current API is probably a YAGNI
violation, and I should just hardcode the Bearer format as you
suggest. Any future OAuth SASL mechanisms we support will have to go
through a different PQAUTHDATA type, e.g. PQAUTHDATA_OAUTH_DPOP_TOKEN.
And I'll need to make sure that I'm not improperly coupling the
concepts elsewhere in the API.

* Does PQdefaultAuthDataHook() have to be declared extern and exported via
libpq/exports.txt ? Even if the user was interested in it, he can use
PQgetAuthDataHook() to get the pointer (unless he already installed his
custom hook).

I guess I don't have a strongly held opinion, but is there a good
reason not to? Exposing it means that a client application may answer
questions like "is the current hook set to the default?" and so on.
IME, hook-chain maintenance is not a lot of fun in general, and having
more visibility can be nice for third-party developers.
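The chain maintenance being described can be pictured with a small sketch (Python for illustration only; the real hooks are C function pointers, and all names here are hypothetical):

```python
# Sketch of the hook-chaining idiom: each newly installed hook remembers
# the previous one and delegates any authdata type it doesn't handle.
_current_hook = None

def get_auth_data_hook():
    return _current_hook

def set_auth_data_hook(hook):
    global _current_hook
    _current_hook = hook

def default_hook(data_type, data):
    return 0  # "not handled", in this sketch

set_auth_data_hook(default_hook)

def install_custom_hook(handled_type, handler):
    prev = get_auth_data_hook() or default_hook

    def chained(data_type, data):
        if data_type == handled_type:
            return handler(data)        # > 0 on success, < 0 on error
        return prev(data_type, data)    # delegate everything else

    set_auth_data_hook(chained)
```

Exposing the default hook lets a client compare `get_auth_data_hook()` against it to answer "is the chain still in its original state?"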

* I wonder if the hooks (PQauthDataHook) can be implemented in a separate
diff. Couldn't the first version of the feature be commitable without these
hooks?

I am more than happy to split things up as needed! But in the end, I
think this is a question that can only be answered by the first brave
committer to take a bite. :)

(The original patchset didn't have these hooks; they were added as a
compromise, to prevent the builtin implementation from having to be
all things for all people.)

* Instead of allocating an instance of PQoauthBearerRequest, assigning it to
fe_oauth_state.async_ctx, and eventually having to call its cleanup()
function, wouldn't it be simpler to embed PQoauthBearerRequest as a member
in fe_oauth_state ?

Hmm, that would maybe be simpler. But you'd still have to call
cleanup() and set the async_ctx, right? The primary gain would be in
reducing the number of malloc calls.

* oauth_validator_library is defined as PGC_SIGHUP - is that intentional?

Yes, I think it's going to be important to let DBAs migrate their
authentication modules without a full restart. That probably deserves
more explicit testing, now that you mention it. Is there a specific
concern that you have with that?

And regardless, the library appears to be loaded by every backend during
authentication. Why isn't it loaded by postmaster like libraries listed in
shared_preload_libraries? fork() would then ensure that the backends do have
the library in their address space.

It _can_ be, if you want -- there's nothing that I know of preventing
the validator from also being preloaded with its own _PG_init(), is
there? But I don't think it's a good idea to force that, for the same
reason we want to allow SIGHUP.

* pg_fe_run_oauth_flow()

When first time here
case OAUTH_STEP_TOKEN_REQUEST:
if (!handle_token_response(actx, &state->token))
goto error_return;

the user hasn't been prompted yet so ISTM that the first token request must
always fail. It seems more logical if the prompt is shown to the user before
sending the token request to the server. (Although the user probably won't
be that fast to make the first request succeed, so consider this just a
hint.)

That's also intentional -- if the first token response fails for a
reason _other_ than "we're waiting for the user", then we want to
immediately fail hard instead of making them dig out their phone and
go on a two-minute trip, because they're going to come back and find
that it was all for nothing.

There's a comment immediately below the part you quoted that mentions
this briefly; maybe I should move it up a bit?
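The failure policy described above matches the RFC 8628 polling rules: only "authorization_pending" and "slow_down" keep the loop alive, while any other error aborts immediately, before the user is sent off to authenticate. A rough sketch (`request_token` is a hypothetical callable returning a `(token, error)` pair; not the patch's actual code):

```python
# Sketch of RFC 8628 device-flow polling: the first token request is
# expected to fail with authorization_pending; any other error fails
# hard so the user isn't prompted for nothing.
import time

def poll_for_token(request_token, interval=5, max_attempts=24):
    for _ in range(max_attempts):
        token, error = request_token()
        if token is not None:
            return token
        if error == "authorization_pending":
            time.sleep(interval)   # user hasn't finished the prompt yet
        elif error == "slow_down":
            interval += 5          # RFC 8628: increase interval by 5s
            time.sleep(interval)
        else:
            raise RuntimeError(f"token request failed: {error}")
    raise TimeoutError("device authorization timed out")
```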

* As long as I understand, the following comment would make sense:

diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f943a31cc08..97259fb5654 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -518,6 +518,7 @@ oauth_exchange(void *opaq, bool final,
switch (state->state)
{
case FE_OAUTH_INIT:
+                       /* Initial Client Response */
Assert(inputlen == -1);

if (!derive_discovery_uri(conn))

There are multiple "initial client response" cases, though. What
questions are you hoping to clarify with the comment? Maybe we can
find a more direct answer.

Or, doesn't the FE_OAUTH_INIT branch of the switch statement actually fit
better into oauth_init()?

oauth_init() is the mechanism initialization for the SASL framework
itself, which is shared with SCRAM. In the current architecture, the
init callback doesn't take the initial client response into
consideration at all.

Generating the client response is up to the exchange callback -- and
even if we moved the SASL_ASYNC processing elsewhere, I don't think we
can get rid of its added complexity. Something has to signal upwards
that it's time to transfer control to an async engine. And we can't
make the asynchronicity a static attribute of the mechanism itself,
because we can skip the flow if something gives us a cached token.

* Finally, the user documentation is almost missing. I say that just for the
sake of completeness, you obviously know it. (On the other hand, I think
that the lack of user information might discourage some people from running
the code and testing it.)

Yeah, the catch-22 of writing huge features... By the way, if anyone's
reading along and dissuaded by the lack of docs, please say so!
(Daniel has been helping me out so much with the docs; thanks again,
Daniel.)

--Jacob

[1]: https://datatracker.ietf.org/doc/html/rfc9449

#143 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#139)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Oct 18, 2024 at 4:38 AM Daniel Gustafsson <daniel@yesql.se> wrote:

In validate() it seems to me we should clear out ret->authn_id on failure to
pair belts with suspenders. Fixed by calling explicit_bzero on it in the error
path.

The new hunk says:

cleanup:
/*
* Clear and free the validation result from the validator module once
* we're done with it to avoid accidental re-use.
*/
if (ret->authn_id != NULL)
{
explicit_bzero(ret->authn_id, strlen(ret->authn_id));
pfree(ret->authn_id);
}
pfree(ret);

But I'm not clear on what's being protected against. Which code would
reuse this result?

Thanks,
--Jacob

#144 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#142)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Oct 23, 2024 at 3:40 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

* Information on the new method should be added to pg_hba.conf.sample.method.

Whoops, this will be fixed in v34.

...which is now attached. This should also fix the build failure for
the docs themselves.

I have combed almost all of Daniel's feedback backwards into the main
patch (just the new bzero code remains, with the open question
upthread), and I've made further edits to flesh out more of the
documentation. A diff is provided so you don't have to go looking for
the doc changes. Feedback on the wording and level of detail is very
welcome!

Next up is, hopefully, url-encoding. I hadn't realized what an
absolute mess that would be [1].

Thanks,
--Jacob

[1]: https://github.com/oauth-wg/oauth-v2-1/issues/128#issuecomment-1879632883

Attachments:

since-v33.diff.txt (text/plain; charset=US-ASCII)
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 8d351c2089..c5d1a1fe69 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -2341,7 +2341,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
     to enable third-party applications to obtain limited access to a protected
     resource.
 
-    OAuth support has to be enabled when <productname>PostgreSQL</productname>
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
     is built, see <xref linkend="installation"/> for more information.
 
     <itemizedlist>
@@ -2353,23 +2353,23 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
      </listitem>
      <listitem>
       <para>
-       Client: The system which accesses to the protected resources using access
+       Client: The system which accesses the protected resources using access
        tokens.  Applications using libpq are the clients in connecting to a
        <productname>PostgreSQL</productname> cluster.
       </para>
      </listitem>
      <listitem>
       <para>
-       Authentication server: The system which recieves requests from, and
-       issues access tokens to, the client upon successful authentication by
-       the resource owner.
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after the authenticated resource
+       owner has given approval.
       </para>
      </listitem>
 
      <listitem>
       <para>
-       Resource server: The system which owns the protected resources and can
-       grant access to them. The <productname>PostgreSQL</productname> cluster
+       Resource server: The system which hosts the protected resources which are
+       accessed by the client. The <productname>PostgreSQL</productname> cluster
        being connected to is the resource server.
       </para>
      </listitem>
@@ -2379,7 +2379,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
 
    <para>
     <productname>PostgreSQL</productname> supports bearer tokens, defined in
-    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
     which are a type of access token used with OAuth 2.0 where the token is an
     opaque string.  The format of the access token is implementation specific
     and is chosen by each authentication server.
@@ -2402,10 +2402,11 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
       <term><literal>scope</literal></term>
       <listitem>
        <para>
-        The OAuth scope required for the server to authenticate and/or authorize
-        the user. The value of the scope is dependent on the OAuth validation
-        module used (see <xref linkend="oauth-validators" /> for more information
-        on validators).  This parameter is required.
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
        </para>
       </listitem>
      </varlistentry>
@@ -2416,8 +2417,9 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
        <para>
         Allows for mapping between OAuth identity provider and database user
         names.  See <xref linkend="auth-username-maps"/> for details.  If a
-        map is not specified the user name returned from the OAuth validator
-        must match the role being requested.  This parameter is optional.
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
        </para>
       </listitem>
      </varlistentry>
@@ -2426,10 +2428,29 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
       <term><literal>trust_validator_authz</literal></term>
       <listitem>
        <para>
-        When set to <literal>1</literal> standard user mapping is skipped. If
-        the OAuth token is validated the user can connect under its desired
-        role.
+        An advanced option which is not intended for common use.
        </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped, and
+        the OAuth validator takes full responsibility for mapping end user
+        identities to database roles.  If the validator authorizes the token,
+        the server trusts that the user is allowed to connect under the
+        requested role, and the connection is allowed to proceed regardless of
+        the authentication status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>trust_validator_authz</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
       </listitem>
      </varlistentry>
     </variablelist>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 73bf21c599..7cb3afb7af 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1202,19 +1202,6 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>
 
-     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
-      <term><varname>oauth_validator_library</varname> (<type>string</type>)
-      <indexterm>
-       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
-      </indexterm>
-      </term>
-      <listitem>
-       <para>
-	    TODO
-       </para>
-      </listitem>
-     </varlistentry>
-
      <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
       <term><varname>oauth_validator_library</varname> (<type>string</type>)
       <indexterm>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 86fd146af2..62b8ae3b42 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2341,7 +2341,12 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_id</literal></term>
       <listitem>
        <para>
-        The client identifier as issued by the authorization server.
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
        </para>
       </listitem>
      </varlistentry>
@@ -2350,7 +2355,10 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_client_secret</literal></term>
       <listitem>
        <para>
-        The client password.
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
        </para>
       </listitem>
      </varlistentry>
@@ -2359,7 +2367,27 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_issuer</literal></term>
       <listitem>
        <para>
-        TODO
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is optional and intended for
+        advanced usage; see also <xref linkend="libpq-connect-oauth-scope"/>.
+       </para>
+       <para>
+        If no <literal>oauth_issuer</literal> is provided, the client will ask
+        the <productname>PostgreSQL</productname> server to provide an
+        acceptable issuer URL (as configured in its
+        <link linkend="auth-oauth">HBA settings</link>). This is convenient, but
+        it requires two separate network connections to the server per attempt.
+       </para>
+       <para>
+        Providing an explicit <literal>oauth_issuer</literal> (and, typically,
+        an accompanying <literal>oauth_scope</literal>) skips this initial
+        "discovery" phase, which may speed up certain custom OAuth flows.
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        This parameter may also be set defensively, to prevent the backend
+        server from directing the client to arbitrary URLs.
+        <emphasis>However:</emphasis> if the client's issuer setting differs
+        from the server's expected issuer, the server is likely to reject the
+        issued token, and the connection will fail.
        </para>
       </listitem>
      </varlistentry>
@@ -2368,8 +2396,26 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
       <term><literal>oauth_scope</literal></term>
       <listitem>
        <para>
-        The scope of the access request sent to the authorization server.
-        This parameter is optional.
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        If neither <xref linkend="libpq-connect-oauth-issuer"/> nor
+        <literal>oauth_scope</literal> is specified, the client will obtain
+        appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. Otherwise, the value of
+        this parameter will be used. Similarly to
+        <literal>oauth_issuer</literal>, if the client's scope setting does not
+        contain the server's required scopes, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
        </para>
       </listitem>
      </varlistentry>
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index cfa5769b10..90e68dbc93 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -669,7 +669,7 @@ cleanup:
  *
  * Load the configured validator library in order to perform token validation.
  * There is no built-in fallback since validation is implementation specific. If
- * no validator library is configured, or of it fails to load, then error out
+ * no validator library is configured, or if it fails to load, then error out
  * since token validation won't be possible.
  */
 static void
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index dcb1558ad3..c623b8463d 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -2482,20 +2482,17 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 	}
 	else if (strcmp(name, "issuer") == 0)
 	{
-		if (hbaline->auth_method != uaOAuth)
-			INVALID_AUTH_OPTION("issuer", gettext_noop("oauth"));
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
 		hbaline->oauth_issuer = pstrdup(val);
 	}
 	else if (strcmp(name, "scope") == 0)
 	{
-		if (hbaline->auth_method != uaOAuth)
-			INVALID_AUTH_OPTION("scope", gettext_noop("oauth"));
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
 		hbaline->oauth_scope = pstrdup(val);
 	}
 	else if (strcmp(name, "trust_validator_authz") == 0)
 	{
-		if (hbaline->auth_method != uaOAuth)
-			INVALID_AUTH_OPTION("trust_validator_authz", gettext_noop("oauth"));
+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
 		if (strcmp(val, "1") == 0)
 			hbaline->oauth_skip_usermap = true;
 		else
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
v34-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From f2ba496e26f121c5082163fa7e4dc00882408a38 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v34 1/3] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This does several things that
you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt
  (printed to standard error) when using the builtin device
  authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.
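A hedged sketch of the dispatch pattern described above, using stand-in
typedefs (the type names and hook signature are assumed from this message;
the PoC's libpq headers are authoritative and may differ):

```c
#include <stdio.h>

/* Stand-ins for the PoC's libpq types; names assumed from this message. */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;

typedef int (*PQauthDataHook_type) (PGauthData type, void *conn, void *data);

/* Stands in for the hook previously returned by PQgetAuthDataHook(). */
static int
default_hook(PGauthData type, void *conn, void *data)
{
	return 1;					/* builtin handling succeeded */
}

static PQauthDataHook_type prev_hook = default_hook;

/* Handle only the device prompt; delegate everything else down the chain. */
static int
my_hook(PGauthData type, void *conn, void *data)
{
	if (type != PQAUTHDATA_PROMPT_OAUTH_DEVICE)
		return prev_hook(type, conn, data); /* not ours: delegate */

	/* display the verification URL and user code however we like here */
	fprintf(stderr, "please log in via your browser\n");
	return 1;					/* > 0: handled successfully */
}
```

In a real client, my_hook would be registered with PQsetAuthDataHook()
after saving the result of PQgetAuthDataHook() into prev_hook.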

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module must:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the module sets the authorized
   member of the ValidatorModuleResult struct to indicate failure.
   Further authentication/authorization is pointless if the bearer
   token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, set the authorized member, in combination
      with the HBA option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module
      may return the authenticated ID while setting the authorized
      member to false. (This makes it easier to see what's going on in
      the Postgres logs.)
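Putting those responsibilities together, a toy validator decision might
look like the sketch below. This is illustrative only: the struct
definition stands in for the server's ValidatorModuleResult (member names
taken from this message), the trust check is a hardcoded placeholder
rather than real cryptographic validation, and the identities are made up.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the patch's ValidatorModuleResult struct. */
typedef struct ValidatorModuleResult
{
	bool		authorized;
	char	   *authn_id;
} ValidatorModuleResult;

/*
 * Sketch of a validator decision.  A real module must cryptographically
 * verify the token, or present it to another trusted party for validation.
 */
static void
validate_token(const char *token, const char *role,
			   ValidatorModuleResult *res)
{
	res->authorized = false;
	res->authn_id = NULL;

	/* 1. Placeholder trust check: accept a single hardcoded token. */
	if (strcmp(token, "trusted-demo-token") != 0)
		return;					/* untrusted token: fail immediately */

	/* 2a. Authenticate: derive a trusted identifier from the token. */
	res->authn_id = strdup("alice@example.org");

	/* 2b/2c. Authorize separately from authentication, so the logs can
	 * show who was authenticated even when authorization fails. */
	res->authorized = (strcmp(role, "alice") == 0);
}
```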

= OAuth HBA Method =

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
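For illustration, pg_hba.conf entries exercising these options might look
like this (the issuer and scope values are borrowed from the examples
above; adjust them for your deployment):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS

# Authentication: the validator's user ID must match the requested role.
host    all       all   samehost  oauth   issuer="https://accounts.google.com" scope="openid email"

# Pseudonymous authorization: the validator decides which roles are allowed.
host    all       all   samehost  oauth   issuer="https://accounts.google.com" scope="openid email" trust_validator_authz=1
```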

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   19 +
 configure                                     |  149 ++
 configure.ac                                  |   33 +
 doc/src/sgml/client-auth.sgml                 |  145 ++
 doc/src/sgml/config.sgml                      |   17 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   29 +
 doc/src/sgml/libpq.sgml                       |  128 +
 doc/src/sgml/oauth-validators.sgml            |   96 +
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   31 +
 meson_options.txt                             |    4 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  694 +++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   54 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    6 +-
 src/include/libpq/oauth.h                     |   49 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   14 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2301 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          |  669 +++++
 src/interfaces/libpq/fe-auth-oauth.h          |   42 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  105 +-
 src/interfaces/libpq/fe-auth.h                |    9 +-
 src/interfaces/libpq/fe-connect.c             |   86 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   75 +
 src/interfaces/libpq/libpq-int.h              |   15 +
 src/interfaces/libpq/meson.build              |    7 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   33 +
 src/test/modules/oauth_validator/meson.build  |   33 +
 .../modules/oauth_validator/t/001_server.pl   |  321 +++
 .../modules/oauth_validator/t/oauth_server.py |  337 +++
 src/test/modules/oauth_validator/validator.c  |  100 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |   65 +
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   15 +
 54 files changed, 5808 insertions(+), 57 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 3a577e463b..c13ba42941 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,57 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12922,6 +12976,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13947,6 +14085,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 55f6c46d33..a4e22e2dde 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,30 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1395,6 +1419,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1586,6 +1615,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..c5d1a1fe69 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,135 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients in connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after the authenticated resource
+       owner has given approval.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources which are
+       accessed by the client. The <productname>PostgreSQL</productname> cluster
+       being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
+        must contact to receive a bearer token.  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped, and
+        the OAuth validator takes full responsibility for mapping end user
+        identities to database roles.  If the validator authorizes the token,
+        the server trusts that the user is allowed to connect under the
+        requested role, and the connection is allowed to proceed regardless of
+        the authentication status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>trust_validator_authz</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 934ef5e469..7cb3afb7af 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,23 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library to use for validating OAuth connection tokens. If set to
+        an empty string (the default), OAuth connections will be refused. For
+        more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3a491b5989..9a76aac08b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1064,6 +1064,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-oauth">
+       <term><option>--with-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with OAuth authentication and authorization support.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2508,6 +2522,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-oauth">
+      <term><option>-Doauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with OAuth authentication and authorization support.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..62b8ae3b42 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,90 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is optional and intended for
+        advanced usage; see also <xref linkend="libpq-connect-oauth-scope"/>.
+       </para>
+       <para>
+        If no <literal>oauth_issuer</literal> is provided, the client will ask
+        the <productname>PostgreSQL</productname> server to provide an
+        acceptable issuer URL (as configured in its
+        <link linkend="auth-oauth">HBA settings</link>). This is convenient, but
+        it requires two separate network connections to the server per attempt.
+       </para>
+       <para>
+        Providing an explicit <literal>oauth_issuer</literal> (and, typically,
+        an accompanying <literal>oauth_scope</literal>) skips this initial
+        "discovery" phase, which may speed up certain custom OAuth flows.
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        This parameter may also be set defensively, to prevent the backend
+        server from directing the client to arbitrary URLs.
+        <emphasis>However:</emphasis> if the client's issuer setting differs
+        from the server's expected issuer, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        If neither <xref linkend="libpq-connect-oauth-issuer"/> nor
+        <literal>oauth_scope</literal> is specified, the client will obtain
+        appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. Otherwise, the value of
+        this parameter will be used. As with
+        <literal>oauth_issuer</literal>, if the client's scope setting does not
+        contain the server's required scopes, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9961,6 +10045,50 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..c9914519fc
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,96 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>Implementing OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth tokens.
+ </para>
+ <para>
+  An OAuth validator module must provide at least an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared library
+   whose base name is the value of <xref linkend="guc-oauth-validator-library"/>.
+   The normal library search path is used to locate the library. To provide the
+   validator callbacks and to indicate that the library is an OAuth validator
+   module, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. This function must
+   return a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must have server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the user's
+   authentication request.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during
+    authentication to determine whether the presented bearer token is valid,
+    and which user it authorizes. It receives the token and the role the
+    client wishes to connect as, and returns a
+    <structname>ValidatorModuleResult</structname> carrying the authorization
+    decision and, optionally, the authenticated user identity.
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 58e67975e8..7518520ec9 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3042,6 +3071,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3714,6 +3744,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..05d86bb46a
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,694 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	ctx->state = OAUTH_STATE_FINISHED;
+	return PG_SASL_EXCHANGE_SUCCESS;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+
+	if (!ret->authorized)
+		return false;
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		return true;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+		return false;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	return (map_status == STATUS_OK);
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("oauth_validator_library is not set"));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..c623b8463d 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2042,32 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping
+		 * is nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+					/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2095,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2480,24 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
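To illustrate the options wired up in parse_hba_line()/parse_hba_auth_opt() above, a pg_hba.conf entry using the new method might look like this (the issuer URL and scope values are hypothetical; `map` and `trust_validator_authz` remain mutually exclusive, as enforced above):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samehost  oauth   issuer="https://oauth.example.org" scope="openid"
host    all       all   samenet   oauth   issuer="https://oauth.example.org" scope="openid" trust_validator_authz=1
```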
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 2c4cc8cd41..e91d211b7b 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4783,6 +4784,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
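The corresponding server configuration would then point the new GUC at a validator shared library, e.g. in postgresql.conf (the library name here is hypothetical; the backend resolves it via load_external_function() as shown earlier in the patch):

```
oauth_validator_library = 'my_oauth_validator'	# must export _PG_oauth_validator_module_init
```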
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 427030f31a..bd27d9279d 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -700,6 +703,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..8747b0eb08
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2301 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length limited comparison and not compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
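The length-limited match plus parameter handling above is subtle enough to merit a standalone illustration. This sketch (not part of the patch; it uses POSIX strncasecmp() in place of pg_strncasecmp(), and content_type_matches is a hypothetical name) shows the accepted shapes: an exact match, or the expected type followed by optional whitespace and a `;` parameter section:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>		/* strncasecmp */

/*
 * Hypothetical standalone version of the check: accept "type" exactly,
 * or "type" followed by optional HTTP whitespace and a ';' marking the
 * start of media-type parameters.
 */
static bool
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	for (size_t i = type_len; content_type[i]; ++i)
	{
		if (content_type[i] == ';')
			return true;		/* parameters begin; prefix matched */
		if (content_type[i] != ' ' && content_type[i] != '\t')
			return false;		/* junk directly after the media type */
	}

	return true;				/* exact match */
}
```

Note that a too-short header fails the strncasecmp() step naturally, since the NUL terminator mismatches before type_len bytes are consumed.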
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
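The embedded-NUL defense above relies on a simple invariant: if strlen() of the buffer disagrees with the byte count that libcurl reported, a NUL byte is hiding inside the response and the JSON lexer would silently see a truncated document. A minimal sketch (has_embedded_nul is a hypothetical helper, not in the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Returns true if the buffer contains a NUL byte before its known end,
 * i.e. strlen() stops short of the recorded length.
 */
static bool
has_embedded_nul(const char *data, size_t len)
{
	return strlen(data) != len;
}
```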
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
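The round-up-and-clamp behavior can be shown in isolation. This sketch (clamp_interval is a hypothetical name; it does a manual ceiling to stay libc-only, and checks the INT_MAX bound before any integer conversion to avoid overflow):

```c
#include <assert.h>
#include <limits.h>

/*
 * Round a fractional interval up to the next whole second and clamp to
 * [1, INT_MAX]; debug mode relaxes the lower bound to zero, mirroring
 * the PGOAUTHDEBUG behavior described above.
 */
static int
clamp_interval(double parsed, int debugging)
{
	long long	whole;

	if (parsed >= (double) INT_MAX)
		return INT_MAX;
	if (parsed < 1)
		return debugging ? 0 : 1;

	/* manual ceil() for positive in-range values */
	whole = (long long) parsed;
	if (parsed > (double) whole)
		whole++;

	return (int) whole;
}
```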
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
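The millisecond-to-itimerspec split in the epoll branch is pure arithmetic and easy to get wrong by a factor of 1000, so here it is in isolation (split_timeout and timeout_spec are hypothetical names; the real code writes into a struct itimerspec, which is Linux-specific):

```c
#include <assert.h>

struct timeout_spec
{
	long		sec;
	long		nsec;
};

/*
 * Mirror of the timer arithmetic: negative disarms (all zeroes), zero
 * becomes the shortest nonzero delay timerfd accepts, and positive
 * values split into whole seconds plus leftover nanoseconds.
 */
static struct timeout_spec
split_timeout(long timeout_ms)
{
	struct timeout_spec spec = {0, 0};

	if (timeout_ms == 0)
		spec.nsec = 1;			/* "call back immediately" */
	else if (timeout_ms > 0)
	{
		spec.sec = timeout_ms / 1000;
		spec.nsec = (timeout_ms % 1000) * 1000000L;
	}
	/* negative: leave zeroed, which disarms the timer */

	return spec;
}
```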
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
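The memchr()-based line splitting above emits one prefixed fprintf() per line, supplying a trailing newline only when the buffer lacks one. A standalone sketch of the same loop that just counts the chunks (count_debug_lines is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Walk a (possibly non-newline-terminated) buffer and count how many
 * line chunks the debug callback would print for it.
 */
static int
count_debug_lines(const char *data, size_t size)
{
	const char *end = data + size;
	int			lines = 0;

	while (data < end)
	{
		size_t		len = end - data;
		const char *eol = memchr(data, '\n', len);

		if (eol)
			len = eol - data + 1;	/* include the newline */

		lines++;				/* one fprintf() per chunk */
		data += len;
	}

	return lines;
}
```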
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only produces output while CURLOPT_VERBOSE is enabled, so
+		 * set the two options together.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of a single chunk
+ * is defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
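The abort convention used above is a libcurl write-callback idiom: the callback receives size * nmemb bytes, and returning any count short of that aborts the transfer. The size guard reduces to one comparison, sketched here (would_exceed_cap is a hypothetical helper, and the 256kB cap is an assumed stand-in for MAX_OAUTH_RESPONSE_SIZE, whose real value isn't shown in this hunk):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Assumed stand-in for MAX_OAUTH_RESPONSE_SIZE */
#define RESPONSE_CAP (256 * 1024)

/*
 * True if accepting this write-callback chunk would push the
 * accumulated response body over the cap.
 */
static bool
would_exceed_cap(size_t current_len, size_t size, size_t nmemb)
{
	size_t		len = size * nmemb;

	return (current_len + len) > RESPONSE_CAP;
}
```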
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
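The grant-type scan is a plain linked-list walk over the discovery document's grant_types_supported. This sketch shows the same loop without linking libcurl, using a minimal stand-in for struct curl_slist (the struct and node names here are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Minimal stand-in for curl_slist */
struct slist
{
	const char *data;
	struct slist *next;
};

#define DEVICE_CODE_GRANT "urn:ietf:params:oauth:grant-type:device_code"

/* Walk the list looking for the device_code grant type. */
static bool
supports_device_grant(const struct slist *grants)
{
	for (; grants; grants = grants->next)
	{
		if (strcmp(grants->data, DEVICE_CODE_GRANT) == 0)
			return true;
	}
	return false;
}

/* Sample lists: one with the device grant, one with only the defaults. */
static struct slist node_device = {DEVICE_CODE_GRANT, NULL};
static struct slist node_implicit = {"implicit", &node_device};
static struct slist node_auth = {"authorization_code", &node_implicit};

static struct slist only_implicit = {"implicit", NULL};
static struct slist only_auth = {"authorization_code", &only_implicit};
```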
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	if (conn->oauth_scope)
+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	return start_request(actx);
+}
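On the "TODO: url-encode" above: RFC 3986 percent-encoding of the form-body values would look roughly like the sketch below (percent_encode is a hypothetical helper with a static, non-reentrant buffer; the patch could equally lean on curl_easy_escape() rather than hand-rolling this). Unreserved characters (ALPHA / DIGIT / "-" / "." / "_" / "~") pass through; everything else becomes %XX:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Percent-encode src per RFC 3986. Returns a pointer to a static
 * buffer (overwritten on each call; illustration only, not
 * production-safe for long or concurrent inputs).
 */
static const char *
percent_encode(const char *src)
{
	static char buf[512];
	char	   *dst = buf;

	for (; *src; src++)
	{
		unsigned char c = (unsigned char) *src;

		if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
			(c >= '0' && c <= '9') || strchr("-._~", c))
			*dst++ = (char) c;
		else
			dst += sprintf(dst, "%%%02X", c);	/* e.g. ' ' -> "%20" */
	}
	*dst = '\0';

	return buf;
}
```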
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. TODO: url-encode */
+	resetPQExpBuffer(work_buffer);
+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
+	/* TODO check for broken buffer */
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	if (conn->oauth_client_secret)
+	{
+		/*----
+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * TODO: should we omit client_id from the body in this case?
+		 * TODO: url-encode...?
+		 */
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
+		actx->used_basic_auth = false;
+	}
+
+	resetPQExpBuffer(work_buffer);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5.1, a successful token response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that return 403
+	 * instead, which would violate the specification. For now we stick to
+	 * the spec, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which will have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* FALLTHROUGH */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f5fc6ebc23
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,669 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * Without a discovery URI we can't request a token ourselves,
+				 * so we send an empty token to ask the server to provide one
+				 * in its error response. This doesn't require any
+				 * asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible when we're returning from an async
+		 * authentication loop; disconnect the handler now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,9 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 256799f520..150dc1d908 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index d8fe059d23..60efa07b42 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..3f06f5be76
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,321 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$common_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$common_connstr = "$common_connstr oauth_client_secret=12345";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => '12345'),
+	"oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer.
+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..fb731ed2e5
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,337 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send Authorization header")
+
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        expected_creds = f"{self.client_id}:{secret}"
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body. self._parse_params() must
+        have been called first.
+        """
+        return self._params["client_id"][0]
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..dbba326bc4
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,100 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 007571e948..83360b397a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 54bf29be24..f18fccafa5 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1717,6 +1721,8 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1781,6 +1787,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1941,11 +1948,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3065,6 +3075,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3460,6 +3472,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3660,6 +3674,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

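(Reviewer note, not part of the patch.) The mock server above smuggles per-test behavior to the `/param/` issuer as Base64-encoded JSON inside the `client_id` field, which the Perl side assembles via `connstr()`. A minimal standalone sketch of that round trip, with illustrative helper names:

```python
import base64
import json


def encode_test_params(params: dict) -> str:
    # What the Perl connstr() helper effectively produces for
    # oauth_client_id when targeting the /param/ issuer.
    return base64.b64encode(json.dumps(params).encode()).decode("ascii")


def decode_test_params(client_id: str) -> dict:
    # Mirrors OAuthHandler.do_POST(): Base64-decode, then parse JSON.
    return json.loads(base64.b64decode(client_id))


if __name__ == "__main__":
    cid = encode_test_params({"stage": "token", "error_code": "invalid_grant"})
    print(decode_test_params(cid))
```

This keeps the HTTP surface of the mock server standard; all test knobs travel over an existing OAuth parameter instead of a side channel.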
v34-0002-v30-review-comments.patchapplication/octet-stream; name=v34-0002-v30-review-comments.patchDownload
From 55068dfb46af757af615735a98df9395db5e4f48 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Fri, 18 Oct 2024 12:40:26 +0200
Subject: [PATCH v34 2/3] v30-review-comments

---
 doc/src/sgml/oauth-validators.sgml | 36 ++++++++++++++++++++++++++++--
 src/backend/libpq/auth-oauth.c     | 25 ++++++++++++++++++---
 2 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index c9914519fc..4615159a9f 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -1,13 +1,13 @@
 <!-- doc/src/sgml/oauth-validators.sgml -->
 
 <chapter id="oauth-validators">
- <title>Implementing OAuth Validator Modules</title>
+ <title>OAuth Validator Modules</title>
  <indexterm zone="oauth-validators">
   <primary>OAuth Validators</primary>
  </indexterm>
  <para>
   <productname>PostgreSQL</productname> provides infrastructure for creating
-  custom modules to perform server-side validation of OAuth tokens.
+  custom modules to perform server-side validation of OAuth bearer tokens.
  </para>
  <para>
   OAuth validation modules must at least consist of an initialization function
@@ -74,9 +74,41 @@ typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
   <sect2 id="oauth-validator-callback-validate">
    <title>Validate Callback</title>
    <para>
+    The <function>validate_cb</function> callback is executed during the
+    OAuth exchange when a user attempts to authenticate. The token has been
+    checked for well-formed syntax, but no semantic validation has been
+    performed. Any state set in previous callbacks is available in
+    <structfield>state->private_data</structfield>.
+
 <programlisting>
 typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
 </programlisting>
+
+    <replaceable>token</replaceable> contains the bearer token to validate,
+    and <replaceable>role</replaceable> contains the role the user requests
+    to log in as. The callback must return a <literal>ValidatorModuleResult</literal>
+    struct, defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    If <structfield>authorized</structfield> is set to <literal>true</literal>,
+    the bearer token is considered valid.
+    To authenticate the user, the authenticated user name shall be returned
+    in the <structfield>authn_id</structfield> field. When authenticating
+    against an HBA rule with <literal>trust_validator_authz</literal> turned
+    on, the <structfield>authn_id</structfield> user name must exactly match
+    the role the user is logging in as.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has
+    been returned.
    </para>
   </sect2>
 
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 05d86bb46a..90e68dbc93 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -603,6 +603,7 @@ validate(Port *port, const char *auth)
 	int			map_status;
 	ValidatorModuleResult *ret;
 	const char *token;
+	bool		status;
 
 	/* Ensure that we have a correct token to validate */
 	if (!(token = validate_token_format(auth)))
@@ -613,7 +614,10 @@ validate(Port *port, const char *auth)
 										  token, port->user_name);
 
 	if (!ret->authorized)
-		return false;
+	{
+		status = false;
+		goto cleanup;
+	}
 
 	if (ret->authn_id)
 		set_authn_id(port, ret->authn_id);
@@ -626,7 +630,8 @@ validate(Port *port, const char *auth)
 		 * validator implementation; all that matters is that the validator
 		 * says the user can log in with the target role.
 		 */
-		return true;
+		status = true;
+		goto cleanup;
 	}
 
 	/* Make sure the validator authenticated the user. */
@@ -642,7 +647,21 @@ validate(Port *port, const char *auth)
 	/* Finally, check the user map. */
 	map_status = check_usermap(port->hba->usermap, port->user_name,
 							   MyClientConnectionInfo.authn_id, false);
-	return (map_status == STATUS_OK);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it to avoid accidental re-use.
+	 */
+	if (ret->authn_id != NULL)
+	{
+		explicit_bzero(ret->authn_id, strlen(ret->authn_id));
+		pfree(ret->authn_id);
+	}
+	pfree(ret);
+
+	return status;
 }
 
 /*
-- 
2.34.1

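(Reviewer note, not part of the patch.) The goto-based cleanup above reshapes `validate()` without changing its decision logic. A behavioral sketch of that logic in Python, assuming `delegate` corresponds to the HBA's `trust_validator_authz` setting (names here are illustrative, not the C API):

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Result:
    # Mirrors ValidatorModuleResult from the docs above.
    authorized: bool
    authn_id: Optional[str] = None


def validate(res: Result, role: str, delegate: bool,
             usermap_ok: Callable[[str, str], bool]) -> bool:
    """Return True if the connection may proceed."""
    if not res.authorized:
        return False
    if delegate:
        # trust_validator_authz: the validator's decision is final and
        # pg_ident mapping is bypassed.
        return True
    if res.authn_id is None:
        # Without delegation, the validator must identify the user.
        return False
    # Finally, consult the user map (check_usermap in the C code).
    return usermap_ok(res.authn_id, role)
```

The actual C function additionally records `authn_id` for audit logging and, after this patch, zeroes and frees the result before returning.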
v34-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchapplication/octet-stream; name=v34-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchDownload
From c01dbdd6cb64e10c2fb052d555de5bb4f99b46e4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v34 3/3] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 1920 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 ++++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5639 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 7518520ec9..ce9a7d28f2 100644
--- a/meson.build
+++ b/meson.build
@@ -3385,6 +3385,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3546,6 +3549,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in as a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..dd047423de
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,1920 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            params = urllib.parse.parse_qs(body)
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb:
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # returned if the test hasn't installed an impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if not secret:
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
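The interval bookkeeping this test asserts follows RFC 8628, Section 3.5: the client polls at the advertised `interval` (defaulting to 5 seconds when the field is omitted), and each `slow_down` error adds 5 seconds to the current interval. A small sketch of that rule (illustrative only, not the libpq implementation):

```python
def next_interval(current, error=None):
    """Device-flow polling interval bookkeeping per RFC 8628, Sec. 3.5."""
    if current is None:
        current = 5  # default when the server omits "interval"
    if error == "slow_down":
        current += 5  # each slow_down grows the interval by five seconds
    return current

assert next_interval(None) == 5
assert next_interval(1, "slow_down") == 6
```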
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
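For context, the self-pipe trick this fixture enables is the classic way to wake a poll/select loop from another thread: the waiter selects on the read end while the waker writes a byte. A minimal standalone sketch of the pattern (assumed behavior, not tied to the libpq altsock machinery):

```python
import os
import select
import threading

readfd, writefd = os.pipe()
try:
    # A "waker" thread writes a byte shortly after the waiter starts waiting.
    threading.Timer(0.05, os.write, args=(writefd, b"\0")).start()

    # The waiter blocks until the read end becomes readable, then drains it.
    readable, _, _ = select.select([readfd], [], [], 5)
    assert readable == [readfd]
    assert os.read(readfd, 1) == b"\0"
finally:
    os.close(readfd)
    os.close(writefd)
```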
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one. Not particularly
+    efficient, but easier to read and maintain than a hand-merged pattern.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
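For what it's worth, the helper boils down to a one-line join; a usage sketch showing the combined pattern in action:

```python
import re

# Equivalent to the alt_patterns() helper: wrap each alternative in a group.
pat = "|".join(f"({p})" for p in [r"foo\d+", r"bar"])

assert pat == r"(foo\d+)|(bar)"
assert re.search(pat, "xfoo12") is not None
assert re.search(pat, "baz") is None
```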
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
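For readers following the SASL traffic in these tests: the client-first OAUTHBEARER message (RFC 7628, Sec. 3.1) is a gs2 header followed by key=value pairs delimited by ^A (0x01) bytes, with a trailing ^A. A stdlib-only sketch of that framing, for illustration; the helper names are mine and only mirror the `get_auth_value` used above:

```python
# Sketch of the OAUTHBEARER client-first message framing (RFC 7628, Sec. 3.1):
# a gs2 header, then key=value pairs, each terminated by ^A (0x01), with a
# final ^A closing the message. An empty auth value requests discovery, as
# asserted in test_oauth_discovery above.

def build_initial_response(token=None):
    """Build the client-first message; token=None sends an empty auth value."""
    auth = f"Bearer {token}" if token is not None else ""
    return b"n,,\x01auth=" + auth.encode("ascii") + b"\x01\x01"

def parse_auth_value(msg):
    """Extract the auth kvpair's value; returns None if no auth pair exists."""
    for pair in msg.split(b"\x01"):
        if pair.startswith(b"auth="):
            return pair[len(b"auth="):]
    return None
```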
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one that asks the
+    server tests to create a temporary Postgres instance.
+
+    Per the pytest documentation, this hook must live in the top-level test
+    directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    this is an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
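The Startup construct above mirrors the v3 startup packet layout: a signed int32 total length (which counts itself), the int32 protocol version (protocol(3, 0) == 196608), then NUL-terminated key/value strings closed by an empty string. A minimal stdlib sketch of the same framing, for illustration only:

```python
import struct

def build_startup(params):
    """Serialize a v3 startup packet: int32 length, int32 version, kv strings."""
    proto = (3 << 16) | 0  # protocol(3, 0) == 196608
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00"
        payload += v.encode("utf-8") + b"\x00"
    payload += b"\x00"  # empty string terminates the key/value list
    # The length field counts itself and the version word, hence the +8.
    return struct.pack("!ii", len(payload) + 8, proto) + payload
```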
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
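As the comment above notes, this message shares the PasswordMessage type byte on the wire. Its payload is the SASL mechanism name (NUL-terminated), a signed int32 data length (-1 when no initial data is sent), then the data itself. A stdlib-only encoder sketching that layout (illustrative, not part of the patch):

```python
import struct

def build_sasl_initial(mechanism, data=None):
    """Encode a SASLInitialResponse payload: name, int32 length, optional data."""
    payload = mechanism + b"\x00"
    if data is None:
        # A length of -1 means "no initial response", matching len == -1 above.
        return payload + struct.pack("!i", -1)
    return payload + struct.pack("!i", len(data)) + data
```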
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds the translation map used for hexdumps: every unprintable or
+    non-ASCII byte is replaced with '.'.
+    """
+    unprintable = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            unprintable += bytes([i])
+
+    unprintable += bytes(range(128, 256))
+
+    return bytes.maketrans(bytes(unprintable), b"." * len(unprintable))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
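For readers skimming the patch, it may help to restate the framing that the `Pq3` construct above implements. As a minimal standalone sketch (the `frame()` helper is ours, not part of the patch): each regular v3 packet is a one-byte type code followed by a big-endian int32 length that counts itself and the payload, but not the type byte.

```python
import struct


def frame(packet_type: bytes, payload: bytes = b"") -> bytes:
    # The length field covers itself (4 bytes) plus the payload,
    # but not the leading type byte.
    return packet_type + struct.pack("!I", len(payload) + 4) + payload


# A Terminate ('X') packet has no payload: just the type byte and len=4.
assert frame(b"X") == b"X\x00\x00\x00\x04"

# A Query ('Q') packet carries a NUL-terminated query string.
assert frame(b"Q", b"SELECT 1;\x00") == b"Q\x00\x00\x00\x0e" + b"SELECT 1;\x00"
```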
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
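The connect fixture above leans on contextlib.ExitStack so that every socket handed out by the factory is cleaned up at fixture teardown, in LIFO order. A minimal sketch of that pattern, with a stand-in `Resource` class instead of real sockets:

```python
import contextlib

log = []


class Resource:
    """A stand-in for a socket; records its name when closed."""

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        log.append(self.name)


with contextlib.ExitStack() as stack:

    def factory(name):
        # Each resource is registered with the stack as it is created, just
        # as the connect() fixture registers every socket it hands out.
        return stack.enter_context(Resource(name))

    factory("first")
    factory("second")

# Resources are closed in LIFO order when the stack exits.
assert log == ["second", "first"]
```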
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..ea31ad4f87
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
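As a sanity check on the sizing math in bearer_token(): secrets.token_urlsafe(n) base64url-encodes n random bytes into ceil(4n/3) characters with the padding stripped, so `size // 4 * 3` input bytes yield exactly `size` characters whenever `size` is a multiple of 4. For example:

```python
import secrets

for size in (16, 1024, 4096):
    nbytes = size // 4 * 3  # base64url maps 3 input bytes to 4 characters
    token = secrets.token_urlsafe(nbytes)
    assert len(token) == size
```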
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Starts a connection as the given user (authz_user by default) and checks
+    that the server responds with a SASL request advertising OAUTHBEARER as
+    the only mechanism.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
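The initial-response bytes built above follow the OAUTHBEARER shape from RFC 7628: a GS2 header ("n,," meaning no channel binding and no authzid), then key/value pairs delimited by 0x01, closed by a double 0x01. A standalone sketch (the helper name is ours, not part of the patch):

```python
def oauthbearer_initial_response(token: str) -> bytes:
    # GS2 header: no channel binding ("n"), no authzid (",,").
    gs2_header = b"n,,"

    # Key/value pairs are each delimited by 0x01; the message ends with an
    # extra 0x01 after the final pair.
    kvpairs = b"\x01auth=Bearer " + token.encode("ascii") + b"\x01"

    return gs2_header + kvpairs + b"\x01"


assert oauthbearer_initial_response("abcd") == b"n,,\x01auth=Bearer abcd\x01\x01"
```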
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError:
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#145Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#144)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 25 Oct 2024, at 20:22, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

I have combed almost all of Daniel's feedback backwards into the main
patch (just the new bzero code remains, with the open question
upthread),

Re-reading I can't see a vector there, I guess I am just scarred from what
seemed to be harmless leaks in auth codepaths and treat every bit as
potentially important. Feel free to drop from the patchset for now.

Next up is, hopefully, url-encoding. I hadn't realized what an
absolute mess that would be [1].

Everything and anything involving urls is a hot mess =/

Looking more at the patchset I think we need to apply conditional compilation
of the backend for oauth like how we do with other opt-in schemes in configure
and meson. The attached .txt has a diff for making --with-oauth a requirement
for compiling support into backend libpq.

--
Daniel Gustafsson

Attachments:

backend_with_oauth.txt (text/plain)
diff --git a/configure b/configure
index 39fe5a0542..b40cd836f1 100755
--- a/configure
+++ b/configure
@@ -8439,9 +8439,6 @@ fi
 
 if test x"$with_oauth" = x"curl"; then
 
-$as_echo "#define USE_OAUTH 1" >>confdefs.h
-
-
 $as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
 
   # OAuth requires python for testing
diff --git a/configure.ac b/configure.ac
index f7e1400b6e..82217e652e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -927,8 +927,7 @@ if test x"$with_oauth" = x"" ; then
 fi
 
 if test x"$with_oauth" = x"curl"; then
-  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
-  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth=curl)])
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests requires --with-python to run])
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 98eb2a8242..ba502b0442 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,7 +15,6 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
-	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
@@ -30,6 +29,10 @@ OBJS = \
 	pqmq.o \
 	pqsignal.o
 
+ifneq ($(with_oauth),)
+OBJS += auth-oauth.o
+endif
+
 ifeq ($(with_ssl),openssl)
 OBJS += be-secure-openssl.o
 endif
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 0cf3e31c9f..57638f922e 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,7 +29,6 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
-#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -201,6 +200,14 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
+/*----------------------------------------------------------------
+ * OAuth Authentication
+ *----------------------------------------------------------------
+ */
+#ifdef USE_OAUTH
+#include "libpq/oauth.h"
+#endif
+
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -614,8 +621,13 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+
 		case uaOAuth:
+#ifdef USE_OAUTH
 			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+#else
+			Assert(false);
+#endif
 			break;
 	}
 
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index c623b8463d..719c7a881e 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -1749,7 +1749,11 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
 	else if (strcmp(token->string, "oauth") == 0)
+#ifdef USE_OAUTH
 		parsedline->auth_method = uaOAuth;
+#else
+		unsupauth = "oauth";
+#endif
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index c85527fb01..1c76dd80cc 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,7 +1,6 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
-  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
@@ -17,6 +16,10 @@ backend_sources += files(
   'pqsignal.c',
 )
 
+if oauth.found()
+  backend_sources += files('auth-oauth.c')
+endif
+
 if ssl.found()
   backend_sources += files('be-secure-openssl.c')
 endif
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 2f4e3a8f63..a60df328bb 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -703,10 +703,7 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
-/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
-#undef USE_OAUTH
-
-/* Define to 1 to use libcurl for OAuth support. */
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth=curl) */
 #undef USE_OAUTH_CURL
 
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h
index e49eb13e43..9bd655e306 100644
--- a/src/include/pg_config_manual.h
+++ b/src/include/pg_config_manual.h
@@ -172,6 +172,14 @@
 #define USE_SSL
 #endif
 
+/*
+ * USE_OAUTH code should be compiled only when compiling with OAuth support
+ * enabled.
+ */
+#ifdef USE_OAUTH_CURL
+#define USE_OAUTH
+#endif
+
 /*
  * This is the default directory in which AF_UNIX socket files are
  * placed.  Caution: changing this risks breaking your existing client
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 3f06f5be76..46684a56e0 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -19,7 +19,7 @@ if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
 
 if ($ENV{with_oauth} ne 'curl')
 {
-	plan skip_all => 'client-side OAuth not supported by this build';
+	plan skip_all => 'OAuth not supported by this build';
 }
 
 if ($ENV{with_python} ne 'yes')
#146Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#145)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Oct 28, 2024 at 6:24 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 25 Oct 2024, at 20:22, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

I have combed almost all of Daniel's feedback backwards into the main
patch (just the new bzero code remains, with the open question
upthread),

Re-reading I can't see a vector there, I guess I am just scarred from what
seemed to be harmless leaks in auth codepaths and treat every bit as
potentially important. Feel free to drop from the patchset for now.

Okay. For authn_id specifically, which isn't secret and doesn't have
any power unless it's somehow copied into the ClientConnectionInfo,
I'm not sure that the bzero() gives us much. But I do see value in
clearing out, say, the Bearer token once we're finished with it.

Also in this validate() code path, I'm taking a look at the added
memory management with the pfree():
1. Should we add any more ceremony to the returned struct, to try to
ensure that the ABI matches? Or is it good enough to declare that
modules need to be compiled against a specific server version?
2. Should we split off a separate memory context to contain
allocations made by the validator?

Looking more at the patchset I think we need to apply conditional compilation
of the backend for oauth like how we do with other opt-in schemes in configure
and meson. The attached .txt has a diff for making --with-oauth a requirement
for compiling support into backend libpq.

Do we get the flexibility we need with that approach? With other
opt-in schemes, the backend and the frontend both need some sort of
third-party dependency, but that's not true for OAuth. I could see
some people wanting to support an offline token validator on the
server side but not wanting to build the HTTP dependency into their
clients.

I was considering going in the opposite direction: With the client
hooks, a user could plug in their own implementation without ever
having to touch the built-in flow, and I'm wondering if --with-oauth
should really just be --with-builtin-oauth or similar. Then if the
server sends OAUTHBEARER, the client only complains if it doesn't have
a flow available to use, rather than checking USE_OAUTH. This kind of
ties into the other big open question of "what do we do about users
that don't want the additional overhead of something they're not
using?"

--Jacob

#147 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#146)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 28 Oct 2024, at 17:09, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
On Mon, Oct 28, 2024 at 6:24 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Looking more at the patchset I think we need to apply conditional compilation
of the backend for oauth like how we do with other opt-in schemes in configure
and meson. The attached .txt has a diff for making --with-oauth a requirement
for compiling support into backend libpq.

Do we get the flexibility we need with that approach? With other
opt-in schemes, the backend and the frontend both need some sort of
third-party dependency, but that's not true for OAuth. I could see
some people wanting to support an offline token validator on the
server side but not wanting to build the HTTP dependency into their
clients.

Currently we don't support any conditional compilation which only affects
backend or frontend; all --without-XXX flags turn it off for both. Maybe this
is something which should change, but I'm not sure that property should be
altered as part of a patch rather than discussed on its own merits.

I was considering going in the opposite direction: With the client
hooks, a user could plug in their own implementation without ever
having to touch the built-in flow, and I'm wondering if --with-oauth
should really just be --with-builtin-oauth or similar. Then if the
server sends OAUTHBEARER, the client only complains if it doesn't have
a flow available to use, rather than checking USE_OAUTH. This kind of
ties into the other big open question of "what do we do about users
that don't want the additional overhead of something they're not
using?"

We already know that GSS causes a measurable performance impact on connections
even when compiled but not in use [0], so I think we should be careful about
piling on more.

--
Daniel Gustafsson

[0]: 20240610181212.auytluwmbfl7lb5n@awork3.anarazel.de

#148 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#147)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Oct 29, 2024 at 3:52 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Currently we don't support any conditional compilation which only affects
backend or frontend; all --without-XXX flags turn it off for both.

I don't think that's strictly true; see --with-pam, which affects only
server-side code since the hard part is in the server. Similarly,
--with-oauth currently affects only client-side code.

But in any case, that confusion is why I'm proposing a change to the
option name. I chose --with-oauth way before the architecture
solidified, and it doesn't reflect reality anymore. OAuth support on
the server side doesn't require Curl, and likely never will. So if you
want to support that on a Windows server, it's going to be strange if
we also force you to build the client with a libcurl dependency that
we won't even make use of on that platform.

We already know that GSS causes a measurable performance impact on connections
even when compiled but not in use [0], so I think we should be careful about
piling on more.

I agree, but if the server asks for OAUTHBEARER, that's the end of it.
Either the client supports OAuth and initiates a token flow, or it
doesn't and the connection fails. That's very different from the
client-initiated transport negotiation.

On the other hand, if we're concerned about the link-time overhead
(time and/or RAM) of the new dependency, I think that's going to need
something different from a build-time switch. My guess is that
maintainers are only going to want to ship one libpq.

Thanks,
--Jacob

#149 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#148)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 29 Oct 2024, at 17:40, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Tue, Oct 29, 2024 at 3:52 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Currently we don't support any conditional compilation which only affects
backend or frontend; all --without-XXX flags turn it off for both.

I don't think that's strictly true; see --with-pam which affects only
server-side code, since the hard part is in the server. Similarly,
--with-oauth currently affects only client-side code.

Fair, maybe it's an unwarranted concern. The question is, though: if we added
PAM today, would we have done the same?

But in any case, that confusion is why I'm proposing a change to the
option name.

+1

--
Daniel Gustafsson

#150 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#144)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Oct 25, 2024 at 11:22 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Next up is, hopefully, url-encoding. I hadn't realized what an
absolute mess that would be [1].

Here is v35, which attempts to perform URL encoding by almost entirely
deferring to Curl, in the naive hope that provider incompatibilities
with libcurl will be taken more seriously than incompatibilities with
a brand-new Postgres feature. I'm not thrilled that the IETF chose to
defer this part of the spec to WHATWG.
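To make the distinction concrete, here is a rough sketch of the query-flavored
encoding using Python's urllib (standing in for libcurl here; form_urlencode
is just an illustrative name, not anything in the patch):

```python
from urllib.parse import quote, quote_plus

def form_urlencode(s: str) -> str:
    """Query-flavored application/x-www-form-urlencoded: '+' for spaces,
    percent-escapes for everything else outside the unreserved set."""
    return quote_plus(s)

# Plain percent-encoding (what curl_easy_escape produces) uses "%20":
print(quote("a b", safe=""))    # a%20b
# The form/query flavor swaps that for '+':
print(form_urlencode("a b"))    # a+b
print(form_urlencode("1+1=2"))  # 1%2B1%3D2
```

The patch gets the same effect by calling curl_easy_escape() and then
search-and-replacing each "%20" with '+', following the lead of the curl
command-line tool.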

Additionally,
- the rest of the feedback patch has been incorporated, with
modifications to the bzero portion (which now focuses on clearing
`token` rather than `authn_id`)
- documentation for the validate_cb callback has been updated to
match, plus additional expansion
- markPQExpBufferBroken() has been promoted to the pqexpbuffer.h API,
because it happens to be useful for the encoding patch
- some duplication of the Authorization code has been refactored away
- "empty" (which is to say, default) scopes are now explicitly tested
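As a sketch of the refactored client identification (per RFC 6749 Sec. 2.3.1
and Appendix B, each credential is form-encoded before going through HTTP
Basic's base64 step; basic_auth_header is a hypothetical helper written in
Python here, not the patch's C code):

```python
import base64
from urllib.parse import quote_plus

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Build an HTTP Basic Authorization value for an OAuth client.
    Each credential is application/x-www-form-urlencoded first
    (RFC 6749, Appendix B), so a ':' inside the secret stays unambiguous."""
    creds = f"{quote_plus(client_id)}:{quote_plus(client_secret)}"
    return "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")

# A secret containing ':' or spaces survives the round trip:
print(basic_auth_header("my id", "p:w"))
```

This mirrors the check on the oauth_server.py side, which quote_plus-encodes
the expected client_id and secret before comparing. Zero-length secrets still
take this path, and client_id is omitted from the request body when Basic auth
is used.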

Next up will be Antonin's suggested change to the Bearer handling, as
well as previously-discussed changes to the --with-oauth build option.

Thanks,
--Jacob

Attachments:

since-v34.diff.txt (text/plain; charset=US-ASCII)
1:  f2ba496e26 ! 1:  dc4f869365 Add OAUTHBEARER SASL mechanism
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +<!-- doc/src/sgml/oauth-validators.sgml -->
     +
     +<chapter id="oauth-validators">
    -+ <title>Implementing OAuth Validator Modules</title>
    ++ <title>OAuth Validator Modules</title>
     + <indexterm zone="oauth-validators">
     +  <primary>OAuth Validators</primary>
     + </indexterm>
     + <para>
     +  <productname>PostgreSQL</productname> provides infrastructure for creating
    -+  custom modules to perform server-side validation of OAuth tokens.
    ++  custom modules to perform server-side validation of OAuth bearer tokens.
     + </para>
     + <para>
     +  OAuth validation modules must at least consist of an initialization function
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +  <sect2 id="oauth-validator-callback-validate">
     +   <title>Validate Callback</title>
     +   <para>
    ++    The <function>validate_cb</function> callback is executed during the OAuth
    ++    exchange when a user attempts to authenticate using OAuth.  Any state set in
    ++    previous calls will be available in <structfield>state->private_data</structfield>.
    ++
     +<programlisting>
     +typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
     +</programlisting>
    ++
    ++    <replaceable>token</replaceable> will contain the bearer token to validate.
    ++    The server has ensured that the token is well-formed syntactically, but no
    ++    other validation has been performed.  <replaceable>role</replaceable> will
    ++    contain the role the user has requested to log in as.  The callback must
    ++    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
    ++    defined as below:
    ++
    ++<programlisting>
    ++typedef struct ValidatorModuleResult
    ++{
    ++    bool        authorized;
    ++    char       *authn_id;
    ++} ValidatorModuleResult;
    ++</programlisting>
    ++
    ++    The connection will only proceed if the module sets
    ++    <structfield>authorized</structfield> to <literal>true</literal>.  To
    ++    authenticate the user, the authenticated user name (as determined using the
    ++    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
    ++    field.  Alternatively, <structfield>authn_id</structfield> may be set to
    ++    NULL if the token is valid but the associated user identity cannot be
    ++    determined.
    ++   </para>
    ++   <para>
    ++    The caller assumes ownership of the returned memory allocation, the
    ++    validator module should not in any way access the memory after it has been
    ++    returned.  A validator may instead return NULL to signal an internal
    ++    error.
    ++   </para>
    ++   <para>
    ++    The behavior after <function>validate_cb</function> returns depends on the
    ++    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
    ++    name must exactly match the role that the user is logging in as.  (This
    ++    behavior may be modified with a usermap.)  But when authenticating against
    ++    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
    ++    server will not perform any checks on the value of
    ++    <structfield>authn_id</structfield> at all; in this case it is up to the
    ++    validator to ensure that the token carries enough privileges for the user to
    ++    log in under the indicated <replaceable>role</replaceable>.
     +   </para>
     +  </sect2>
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +oauth_exchange(void *opaq, const char *input, int inputlen,
     +			   char **output, int *outputlen, const char **logdetail)
     +{
    ++	char	   *input_copy;
     +	char	   *p;
     +	char		cbind_flag;
     +	char	   *auth;
    ++	int			status;
     +
     +	struct oauth_ctx *ctx = opaq;
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +	}
     +
     +	/* Handle the client's initial message. */
    -+	p = pstrdup(input);
    ++	p = input_copy = pstrdup(input);
     +
     +	/*
     +	 * OAUTHBEARER does not currently define a channel binding (so there is no
    @@ src/backend/libpq/auth-oauth.c (new)
     +		generate_error_response(ctx, output, outputlen);
     +
     +		ctx->state = OAUTH_STATE_ERROR;
    -+		return PG_SASL_EXCHANGE_CONTINUE;
    ++		status = PG_SASL_EXCHANGE_CONTINUE;
    ++	}
    ++	else
    ++	{
    ++		ctx->state = OAUTH_STATE_FINISHED;
    ++		status = PG_SASL_EXCHANGE_SUCCESS;
     +	}
     +
    -+	ctx->state = OAUTH_STATE_FINISHED;
    -+	return PG_SASL_EXCHANGE_SUCCESS;
    ++	/* Don't let extra copies of the bearer token hang around. */
    ++	explicit_bzero(input_copy, inputlen);
    ++
    ++	return status;
     +}
     +
     +/*
    @@ src/backend/libpq/auth-oauth.c (new)
     +	int			map_status;
     +	ValidatorModuleResult *ret;
     +	const char *token;
    ++	bool		status;
     +
     +	/* Ensure that we have a correct token to validate */
     +	if (!(token = validate_token_format(auth)))
    @@ src/backend/libpq/auth-oauth.c (new)
     +	/* Call the validation function from the validator module */
     +	ret = ValidatorCallbacks->validate_cb(validator_module_state,
     +										  token, port->user_name);
    ++	if (ret == NULL)
    ++	{
    ++		ereport(LOG, errmsg("Internal error in OAuth validator module"));
    ++		return false;
    ++	}
     +
     +	if (!ret->authorized)
    -+		return false;
    ++	{
    ++		status = false;
    ++		goto cleanup;
    ++	}
     +
     +	if (ret->authn_id)
     +		set_authn_id(port, ret->authn_id);
    @@ src/backend/libpq/auth-oauth.c (new)
     +		 * validator implementation; all that matters is that the validator
     +		 * says the user can log in with the target role.
     +		 */
    -+		return true;
    ++		status = true;
    ++		goto cleanup;
     +	}
     +
     +	/* Make sure the validator authenticated the user. */
    @@ src/backend/libpq/auth-oauth.c (new)
     +				errmsg("OAuth bearer authentication failed for user \"%s\"",
     +					   port->user_name),
     +				errdetail_log("Validator provided no identity."));
    -+		return false;
    ++
    ++		status = false;
    ++		goto cleanup;
     +	}
     +
     +	/* Finally, check the user map. */
     +	map_status = check_usermap(port->hba->usermap, port->user_name,
     +							   MyClientConnectionInfo.authn_id, false);
    -+	return (map_status == STATUS_OK);
    ++	status = (map_status == STATUS_OK);
    ++
    ++cleanup:
    ++
    ++	/*
    ++	 * Clear and free the validation result from the validator module once
    ++	 * we're done with it.
    ++	 */
    ++	if (ret->authn_id != NULL)
    ++		pfree(ret->authn_id);
    ++	pfree(ret);
    ++
    ++	return status;
     +}
     +
     +/*
    @@ src/backend/libpq/hba.c: parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
     +		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
     +
     +		/*
    -+		 * Supplying a usermap combined with the option to skip usermapping
    -+		 * is nonsensical and indicates a configuration error.
    ++		 * Supplying a usermap combined with the option to skip usermapping is
    ++		 * nonsensical and indicates a configuration error.
     +		 */
     +		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
     +		{
     +			ereport(elevel,
     +					errcode(ERRCODE_CONFIG_FILE_ERROR),
    -+					/* translator: strings are replaced with hba options */
    ++			/* translator: strings are replaced with hba options */
     +					errmsg("%s cannot be used in combination with %s",
     +						   "map", "trust_validator_authz"),
     +					errcontext("line %d of configuration file \"%s\"",
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * URL-Encoding Helpers
    ++ */
    ++
    ++/*
    ++ * Encodes a string using the application/x-www-form-urlencoded format, and
    ++ * appends it to the given buffer.
    ++ */
    ++static void
    ++append_urlencoded(PQExpBuffer buf, const char *s)
    ++{
    ++	char	   *escaped;
    ++	char	   *haystack;
    ++	char	   *match;
    ++
    ++	escaped = curl_easy_escape(NULL, s, 0);
    ++	if (!escaped)
    ++	{
    ++		markPQExpBufferBroken(buf);
    ++		return;
    ++	}
    ++
    ++	/*
    ++	 * curl_easy_escape() almost does what we want, but we need the
    ++	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
    ++	 * Curl command-line tool does this with a simple search-and-replace, so
    ++	 * follow its lead.
    ++	 */
    ++	haystack = escaped;
    ++
    ++	while ((match = strstr(haystack, "%20")) != NULL)
    ++	{
    ++		/* Append the unmatched portion, followed by the plus sign. */
    ++		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
    ++		appendPQExpBufferChar(buf, '+');
    ++
    ++		/* Keep searching after the match. */
    ++		haystack = match + 3 /* strlen("%20") */ ;
    ++	}
    ++
    ++	/* Push the remainder of the string onto the buffer. */
    ++	appendPQExpBufferStr(buf, haystack);
    ++
    ++	curl_free(escaped);
    ++}
    ++
    ++/*
    ++ * Convenience wrapper for encoding a single string. Returns NULL on allocation
    ++ * failure.
    ++ */
    ++static char *
    ++urlencode(const char *s)
    ++{
    ++	PQExpBufferData buf;
    ++
    ++	initPQExpBuffer(&buf);
    ++	append_urlencoded(&buf, s);
    ++
    ++	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
    ++}
    ++
    ++/*
    ++ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
    ++ * list.
    ++ */
    ++static void
    ++build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
    ++{
    ++	if (buf->len)
    ++		appendPQExpBufferChar(buf, '&');
    ++
    ++	append_urlencoded(buf, key);
    ++	appendPQExpBufferChar(buf, '=');
    ++	append_urlencoded(buf, value);
    ++}
    ++
    ++/*
     + * Specific HTTP Request Handlers
     + *
     + * This is finally the beginning of the actual application logic. Generally
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * Adds the client ID (and secret, if provided) to the current request, using
    ++ * either HTTP headers or the request body.
    ++ */
    ++static bool
    ++add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
    ++{
    ++	bool		success = false;
    ++	char	   *username = NULL;
    ++	char	   *password = NULL;
    ++
    ++	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
    ++	{
    ++		/*----
    ++		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
    ++		 * Sec. 2.3.1,
    ++		 *
    ++		 *   Including the client credentials in the request-body using the
    ++		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
    ++		 *   clients unable to directly utilize the HTTP Basic authentication
    ++		 *   scheme (or other password-based HTTP authentication schemes).
    ++		 *
    ++		 * Additionally:
    ++		 *
    ++		 *   The client identifier is encoded using the
    ++		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
    ++		 *   B, and the encoded value is used as the username; the client
    ++		 *   password is encoded using the same algorithm and used as the
    ++		 *   password.
    ++		 *
    ++		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
    ++		 * an initial UTF-8 encoding step. Since the client ID and secret must
    ++		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
    ++		 * that in this function.)
    ++		 *
    ++		 * client_id is not added to the request body in this case. Not only
    ++		 * would it be redundant, but some providers in the wild (e.g. Okta)
    ++		 * refuse to accept it.
    ++		 */
    ++		username = urlencode(conn->oauth_client_id);
    ++		password = urlencode(conn->oauth_client_secret);
    ++
    ++		if (!username || !password)
    ++		{
    ++			actx_error(actx, "out of memory");
    ++			goto cleanup;
    ++		}
    ++
    ++		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
    ++		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
    ++		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
    ++
    ++		actx->used_basic_auth = true;
    ++	}
    ++	else
    ++	{
    ++		/*
    ++		 * If we're not otherwise authenticating, client_id is REQUIRED in the
    ++		 * request body.
    ++		 */
    ++		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
    ++
    ++		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
    ++		actx->used_basic_auth = false;
    ++	}
    ++
    ++	success = true;
    ++
    ++cleanup:
    ++	free(username);
    ++	free(password);
    ++
    ++	return success;
    ++}
    ++
    ++/*
     + * Queue a Device Authorization Request:
     + *
     + *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
     +	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
     +
    -+	/* Construct our request body. TODO: url-encode */
    ++	/* Construct our request body. */
     +	resetPQExpBuffer(work_buffer);
    -+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
    -+	if (conn->oauth_scope)
    -+		appendPQExpBuffer(work_buffer, "&scope=%s", conn->oauth_scope);
    ++	if (conn->oauth_scope && conn->oauth_scope[0])
    ++		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
    ++
    ++	if (!add_client_identification(actx, work_buffer, conn))
    ++		return false;
     +
     +	if (PQExpBufferBroken(work_buffer))
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
     +	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
     +
    -+	if (conn->oauth_client_secret)
    -+	{
    -+		/*----
    -+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
    -+		 *
    -+		 *   Including the client credentials in the request-body using the
    -+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
    -+		 *   clients unable to directly utilize the HTTP Basic authentication
    -+		 *   scheme (or other password-based HTTP authentication schemes).
    -+		 *
    -+		 * TODO: should we omit client_id from the body in this case?
    -+		 * TODO: url-encode...?
    -+		 */
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
    -+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
    -+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
    -+
    -+		actx->used_basic_auth = true;
    -+	}
    -+	else
    -+	{
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
    -+		actx->used_basic_auth = false;
    -+	}
    -+
     +	return start_request(actx);
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	Assert(token_uri);			/* ensured by get_discovery_document() */
     +	Assert(device_code);		/* ensured by run_device_authz() */
     +
    -+	/* Construct our request body. TODO: url-encode */
    ++	/* Construct our request body. */
     +	resetPQExpBuffer(work_buffer);
    -+	appendPQExpBuffer(work_buffer, "client_id=%s", conn->oauth_client_id);
    -+	appendPQExpBuffer(work_buffer, "&device_code=%s", device_code);
    -+	appendPQExpBuffer(work_buffer, "&grant_type=%s",
    -+					  OAUTH_GRANT_TYPE_DEVICE_CODE);
    -+	/* TODO check for broken buffer */
    -+
    -+	/* Make our request. */
    -+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
    -+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
    ++	build_urlencoded(work_buffer, "device_code", device_code);
    ++	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
     +
    -+	if (conn->oauth_client_secret)
    -+	{
    -+		/*----
    -+		 * Use HTTP Basic auth to send the password. Per RFC 6749, Sec. 2.3.1,
    -+		 *
    -+		 *   Including the client credentials in the request-body using the
    -+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
    -+		 *   clients unable to directly utilize the HTTP Basic authentication
    -+		 *   scheme (or other password-based HTTP authentication schemes).
    -+		 *
    -+		 * TODO: should we omit client_id from the body in this case?
    -+		 * TODO: url-encode...?
    -+		 */
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, return false);
    -+		CHECK_SETOPT(actx, CURLOPT_USERNAME, conn->oauth_client_id, return false);
    -+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, conn->oauth_client_secret, return false);
    ++	if (!add_client_identification(actx, work_buffer, conn))
    ++		return false;
     +
    -+		actx->used_basic_auth = true;
    -+	}
    -+	else
    ++	if (PQExpBufferBroken(work_buffer))
     +	{
    -+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, return false);
    -+		actx->used_basic_auth = false;
    ++		actx_error(actx, "out of memory");
    ++		return false;
     +	}
     +
    -+	resetPQExpBuffer(work_buffer);
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
    -+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
    ++	/* Make our request. */
    ++	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
    ++	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
     +
     +	return start_request(actx);
     +}
    @@ src/interfaces/libpq/meson.build: if gssapi.found()
        kwargs: gen_export_kwargs,
      )
     
    + ## src/interfaces/libpq/pqexpbuffer.c ##
    +@@ src/interfaces/libpq/pqexpbuffer.c: static const char *const oom_buffer_ptr = oom_buffer;
    +  *
    +  * Put a PQExpBuffer in "broken" state if it isn't already.
    +  */
    +-static void
    ++void
    + markPQExpBufferBroken(PQExpBuffer str)
    + {
    + 	if (str->data != oom_buffer)
    +
    + ## src/interfaces/libpq/pqexpbuffer.h ##
    +@@ src/interfaces/libpq/pqexpbuffer.h: extern void initPQExpBuffer(PQExpBuffer str);
    + extern void destroyPQExpBuffer(PQExpBuffer str);
    + extern void termPQExpBuffer(PQExpBuffer str);
    + 
    ++/*------------------------
    ++ * markPQExpBufferBroken
    ++ *		Put a PQExpBuffer in "broken" state if it isn't already.
    ++ */
    ++extern void markPQExpBufferBroken(PQExpBuffer str);
    ++
    + /*------------------------
    +  * resetPQExpBuffer
    +  *		Reset a PQExpBuffer to empty
    +
      ## src/makefiles/meson.build ##
     @@ src/makefiles/meson.build: pgxs_deps = {
        'llvm': llvm,
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	$log_start = $log_end;
     +}
     +
    ++# Make sure the client_id and secret are correctly encoded. $vschars contains
    ++# every allowed character for a client_id/_secret (the "VSCHAR" class).
    ++# $vschars_esc is additionally backslash-escaped for inclusion in a
    ++# single-quoted connection string.
    ++my $vschars =
    ++  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
    ++my $vschars_esc =
    ++  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
    ++
    ++$node->connect_ok(
    ++	"user=$user dbname=postgres oauth_client_id='$vschars_esc'",
    ++	"escapable characters: client_id",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    ++$node->connect_ok(
    ++	"user=$user dbname=postgres oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
    ++	"escapable characters: client_id and secret",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    ++
     +#
     +# Further tests rely on support for specific behaviors in oauth_server.py. To
     +# trigger these behaviors, we ask for the special issuer .../param (which is set
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +#
     +
     +my $common_connstr = "user=testparam dbname=postgres ";
    ++my $base_connstr = $common_connstr;
     +
     +sub connstr
     +{
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	my $json = encode_json(\%params);
     +	my $encoded = encode_base64($json, "");
     +
    -+	return "$common_connstr oauth_client_id=$encoded";
    ++	return "$base_connstr oauth_client_id=$encoded";
     +}
     +
     +# Make sure the param system works end-to-end first.
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	expected_stderr => qr/bearer authentication failed/);
     +
     +# Test behavior of the oauth_client_secret.
    -+$common_connstr = "$common_connstr oauth_client_secret=12345";
    ++$base_connstr = "$common_connstr oauth_client_secret=''";
    ++
    ++$node->connect_ok(
    ++	connstr(stage => 'all', expected_secret => ''),
    ++	"empty oauth_client_secret",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
    ++
    ++$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
     +
     +$node->connect_ok(
    -+	connstr(stage => 'all', expected_secret => '12345'),
    -+	"oauth_client_secret",
    ++	connstr(stage => 'all', expected_secret => $vschars),
    ++	"nonempty oauth_client_secret",
     +	expected_stderr =>
     +	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +#
     +
     +# Searching the logs is easier if OAuth parameter discovery isn't cluttering
    -+# things up; hardcode the issuer.
    -+$common_connstr = "user=test dbname=postgres oauth_issuer=$issuer";
    ++# things up; hardcode the issuer. (Scope is hardcoded to empty to cover that
    ++# case as well.)
    ++$common_connstr =
    ++  "user=test dbname=postgres oauth_issuer=$issuer oauth_scope=''";
     +
     +$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
     +$node->reload;
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        if secret is None:
     +            return
     +
    -+        if "Authorization" not in self.headers:
    -+            raise RuntimeError("client did not send Authorization header")
    -+
    ++        assert "Authorization" in self.headers
     +        method, creds = self.headers["Authorization"].split()
     +
     +        if method != "Basic":
     +            raise RuntimeError(f"client used {method} auth; expected Basic")
     +
    -+        expected_creds = f"{self.client_id}:{secret}"
    ++        username = urllib.parse.quote_plus(self.client_id)
    ++        password = urllib.parse.quote_plus(secret)
    ++        expected_creds = f"{username}:{password}"
    ++
     +        if creds.encode() != base64.b64encode(expected_creds.encode()):
     +            raise RuntimeError(
     +                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        form = self.rfile.read(size)
     +
     +        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
    -+        return urllib.parse.parse_qs(form.decode("utf-8"), strict_parsing=True)
    ++        return urllib.parse.parse_qs(
    ++            form.decode("utf-8"),
    ++            strict_parsing=True,
    ++            keep_blank_values=True,
    ++            encoding="utf-8",
    ++            errors="strict",
    ++        )
     +
     +    @property
     +    def client_id(self) -> str:
     +        """
    -+        Returns the client_id sent in the POST body. self._parse_params() must
    -+        have been called first.
    ++        Returns the client_id sent in the POST body or the Authorization header.
    ++        self._parse_params() must have been called first.
     +        """
    -+        return self._params["client_id"][0]
    ++        if "client_id" in self._params:
    ++            return self._params["client_id"][0]
    ++
    ++        if "Authorization" not in self.headers:
    ++            raise RuntimeError("client did not send any client_id")
    ++
    ++        _, creds = self.headers["Authorization"].split()
    ++
    ++        decoded = base64.b64decode(creds).decode("utf-8")
    ++        username, _ = decoded.split(":", 1)
    ++
    ++        return urllib.parse.unquote_plus(username)
     +
     +    def do_POST(self):
     +        self._response_code = 200
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        else:
     +            self._token_state.min_delay = 5  # default
     +
    ++        # Check the scope.
    ++        if "scope" in self._params:
    ++            assert self._params["scope"][0], "empty scopes should be omitted"
    ++
     +        return resp
     +
     +    def token(self) -> JsonObject:
2:  55068dfb46 < -:  ---------- v30-review-comments
3:  c01dbdd6cb ! 2:  cca5de6726 DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
     +
     +            body = self._request_body()
    -+            params = urllib.parse.parse_qs(body)
    ++            if body:
    ++                # parse_qs() is understandably fairly lax when it comes to
    ++                # acceptable characters, but we're stricter. Spaces must be
    ++                # encoded, and they must use the '+' encoding rather than "%20".
    ++                assert " " not in body
    ++                assert "%20" not in body
    ++
    ++                params = urllib.parse.parse_qs(
    ++                    body,
    ++                    keep_blank_values=True,
    ++                    strict_parsing=True,
    ++                    encoding="utf-8",
    ++                    errors="strict",
    ++                )
    ++            else:
    ++                params = {}
     +
     +            self._handle(params=params)
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    access_token = secrets.token_urlsafe()
     +
     +    def check_client_authn(headers, params):
    -+        if not secret:
    ++        if secret is None:
    ++            assert "Authorization" not in headers
     +            assert params["client_id"] == [client_id]
     +            return
     +
     +        # Require the client to use Basic authn; request-body credentials are
     +        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
     +        assert "Authorization" in headers
    ++        assert "client_id" not in params
     +
     +        method, creds = headers["Authorization"].split()
     +        assert method == "Basic"
    @@ src/test/python/client/test_oauth.py (new)
     +        client.check_completed()
     +
     +
    ++# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
    ++# class definitions.
    ++all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
    ++all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
    ++
    ++
    ++@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
    ++@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
    ++@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
    ++@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
    ++def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.issuer,
    ++        oauth_client_id=client_id,
    ++        oauth_client_secret=secret,
    ++        oauth_scope=scope,
    ++    )
    ++
    ++    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
    ++    verification_url = "https://example.com/device"
    ++
    ++    access_token = secrets.token_urlsafe()
    ++
    ++    def check_client_authn(headers, params):
    ++        if secret is None:
    ++            assert "Authorization" not in headers
    ++            assert params["client_id"] == [client_id]
    ++            return
    ++
    ++        # Require the client to use Basic authn; request-body credentials are
    ++        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
    ++        assert "Authorization" in headers
    ++        assert "client_id" not in params
    ++
    ++        method, creds = headers["Authorization"].split()
    ++        assert method == "Basic"
    ++
    ++        decoded = base64.b64decode(creds).decode("utf-8")
    ++        username, password = decoded.split(":", 1)
    ++
    ++        expected_username = urllib.parse.quote_plus(client_id)
    ++        expected_password = urllib.parse.quote_plus(secret)
    ++
    ++        assert [username, password] == [expected_username, expected_password]
    ++
    ++    # Set up our provider callbacks.
    ++    # NOTE that these callbacks will be called on a background thread. Don't do
    ++    # any unprotected state mutation here.
    ++
    ++    def authorization_endpoint(headers, params):
    ++        check_client_authn(headers, params)
    ++
    ++        if scope:
    ++            assert params["scope"] == [scope]
    ++        else:
    ++            assert "scope" not in params
    ++
    ++        resp = {
    ++            "device_code": device_code,
    ++            "user_code": user_code,
    ++            "interval": 0,
    ++            "verification_url": verification_url,
    ++            "expires_in": 5,
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
    ++    )
    ++
    ++    def token_endpoint(headers, params):
    ++        check_client_authn(headers, params)
    ++
    ++        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
    ++        assert params["device_code"] == [device_code]
    ++
    ++        # Successfully finish the request by sending the access bearer token.
    ++        resp = {
    ++            "access_token": access_token,
    ++            "token_type": "bearer",
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "token_endpoint", "POST", "/token", token_endpoint
    ++    )
    ++
    ++    with sock:
    ++        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++            # Initiate a handshake, which should result in the above endpoints
    ++            # being called.
    ++            initial = start_oauth_handshake(conn)
    ++
    ++            # Validate and accept the token.
    ++            auth = get_auth_value(initial)
    ++            assert auth == f"Bearer {access_token}".encode("ascii")
    ++
    ++            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
    ++            finish_handshake(conn)
    ++
    ++
     +@pytest.mark.slow
     +@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
     +@pytest.mark.parametrize("retries", [1, 2])
Attachment: v35-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From dc4f869365c2b7d376648d330fd9e3770342347a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v35 1/2] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client
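
For instance, a local test run might be started like this (reusing the
connection string from the example above):

    $ PGOAUTHDEBUG=UNSAFE psql 'host=example.org oauth_client_id=f02c6361-0635-...'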

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
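
As a sketch of the chaining contract described above: the enum and
hook-pointer definitions below are stand-ins for illustration only (the
real declarations come from libpq-fe.h with this patch, and
PQsetAuthDataHook()/PQgetAuthDataHook() manage the chain), and the
behavior when no previous hook exists is an assumption of this sketch.

```c
#include <assert.h>
#include <stdio.h>

/* Stand-in definitions; the real ones live in libpq-fe.h with this patch. */
typedef enum
{
    PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* replace the default device prompt */
    PQAUTHDATA_OAUTH_BEARER_TOKEN   /* replace the entire OAuth flow */
} PGauthData;

typedef int (*PQauthDataHook_type) (PGauthData type, void *data);

/* Previous hook in the chain, as would be returned by PQgetAuthDataHook(). */
static PQauthDataHook_type prev_hook = NULL;

/*
 * Handles only the device prompt; every other authdata type is delegated
 * to the previous hook, or (an assumption here) falls back to the default
 * behavior by returning 0.
 */
static int
my_auth_data_hook(PGauthData type, void *data)
{
    if (type != PQAUTHDATA_PROMPT_OAUTH_DEVICE)
    {
        if (prev_hook)
            return prev_hook(type, data);
        return 0;               /* not handled; use default behavior */
    }

    /* Display the verification URL and user code however we prefer. */
    printf("please visit: %s\n", (const char *) data);
    return 1;                   /* handled successfully (> 0) */
}
```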

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module must:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, use the bearer token to decide whether the
      client may assume its requested role, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise for the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

= OAuth HBA Method =

The oauth method supports the following HBA options (note that the
first two are required, since there are no sensible defaults to choose
from):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
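
Put together, a pg_hba.conf entry for this method might look like the
following (the issuer and scope values are examples only and must match
your deployment):

    # TYPE  DATABASE  USER  ADDRESS   METHOD
    host    all       all   samenet   oauth issuer="https://accounts.google.com" scope="openid email"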

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   19 +
 configure                                     |  149 +
 configure.ac                                  |   33 +
 doc/src/sgml/client-auth.sgml                 |  145 +
 doc/src/sgml/config.sgml                      |   17 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   29 +
 doc/src/sgml/libpq.sgml                       |  128 +
 doc/src/sgml/oauth-validators.sgml            |  140 +
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   31 +
 meson_options.txt                             |    4 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  727 +++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   54 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    6 +-
 src/include/libpq/oauth.h                     |   49 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   14 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2406 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          |  669 +++++
 src/interfaces/libpq/fe-auth-oauth.h          |   42 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  105 +-
 src/interfaces/libpq/fe-auth.h                |    9 +-
 src/interfaces/libpq/fe-connect.c             |   86 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   75 +
 src/interfaces/libpq/libpq-int.h              |   15 +
 src/interfaces/libpq/meson.build              |    7 +
 src/interfaces/libpq/pqexpbuffer.c            |    2 +-
 src/interfaces/libpq/pqexpbuffer.h            |    6 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   33 +
 src/test/modules/oauth_validator/meson.build  |   33 +
 .../modules/oauth_validator/t/001_server.pl   |  352 +++
 .../modules/oauth_validator/t/oauth_server.py |  359 +++
 src/test/modules/oauth_validator/validator.c  |  100 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |   65 +
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   15 +
 56 files changed, 6050 insertions(+), 58 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 90cb95c868..302cf0487b 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -176,6 +176,7 @@ task:
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+        -Doauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -224,6 +225,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -236,6 +238,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Doauth=curl
   -Duuid=e2fs
 
 
@@ -313,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -691,8 +696,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 268ac94ae6..39fe5a0542 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -860,6 +861,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1568,6 +1570,7 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8404,6 +8407,57 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
+$as_echo_n "checking whether to build with OAuth support... " >&6; }
+
+
+
+# Check whether --with-oauth was given.
+if test "${with_oauth+set}" = set; then :
+  withval=$with_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+
+$as_echo "#define USE_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+elif test x"$with_oauth" != x"no"; then
+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
+$as_echo "$with_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12918,6 +12972,90 @@ fi
 
 
 
+if test "$with_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
@@ -13943,6 +14081,17 @@ fi
 
 done
 
+fi
+
+if test "$with_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 3c89b54bf1..f7e1400b6e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -917,6 +917,30 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with OAuth support])
+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
+if test x"$with_oauth" = x"" ; then
+  with_oauth=no
+fi
+
+if test x"$with_oauth" = x"curl"; then
+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+elif test x"$with_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_oauth])
+AC_SUBST(with_oauth)
+
+
 #
 # Bonjour
 #
@@ -1391,6 +1415,11 @@ fi
 AC_SUBST(LDAP_LIBS_FE)
 AC_SUBST(LDAP_LIBS_BE)
 
+if test "$with_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 # for contrib/sepgsql
 if test "$with_selinux" = yes; then
   AC_CHECK_LIB(selinux, security_compute_create_name, [],
@@ -1582,6 +1611,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..c5d1a1fe69 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,135 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients when connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after the authenticated resource
+       owner has given approval.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources which are
+       accessed by the client. The <productname>PostgreSQL</productname> cluster
+       being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation-specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
+        must contact to receive a bearer token.  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped, and
+        the OAuth validator takes full responsibility for mapping end user
+        identities to database roles.  If the validator authorizes the token,
+        the server trusts that the user is allowed to connect under the
+        requested role, and the connection is allowed to proceed regardless of
+        the authentication status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>trust_validator_authz</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
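As an illustration, an HBA entry using these options might look like the following. (This is a hypothetical sketch: the method name <literal>oauth</literal>, the issuer URL, and the scope value here are placeholders; the exact accepted syntax is defined by the rest of this patch.)

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samenet   oauth   issuer="https://oauth.example.org" scope="openid"
```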
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d54f904956..d8e3e153c3 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,23 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library to use for validating OAuth connection tokens. If set to
+        an empty string (the default), OAuth connections will be refused. For
+        more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 8449c20f79..f89f8af3c4 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1072,6 +1072,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-oauth">
+       <term><option>--with-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with OAuth authentication and authorization support.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2516,6 +2530,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-oauth">
+      <term><option>-Doauth={ auto | <replaceable>LIBRARY</replaceable> | none }</option></term>
+      <listitem>
+       <para>
+        Build with OAuth authentication and authorization support.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding.  The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index afc9346757..62b8ae3b42 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,90 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is optional and intended for
+        advanced usage; see also <xref linkend="libpq-connect-oauth-scope"/>.
+       </para>
+       <para>
+        If no <literal>oauth_issuer</literal> is provided, the client will ask
+        the <productname>PostgreSQL</productname> server to provide an
+        acceptable issuer URL (as configured in its
+        <link linkend="auth-oauth">HBA settings</link>). This is convenient, but
+        it requires two separate network connections to the server per attempt.
+       </para>
+       <para>
+        Providing an explicit <literal>oauth_issuer</literal> (and, typically,
+        an accompanying <literal>oauth_scope</literal>) skips this initial
+        "discovery" phase, which may speed up certain custom OAuth flows.
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        This parameter may also be set defensively, to prevent the backend
+        server from directing the client to arbitrary URLs.
+        <emphasis>However:</emphasis> if the client's issuer setting differs
+        from the server's expected issuer, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        If neither <xref linkend="libpq-connect-oauth-issuer"/> nor
+        <literal>oauth_scope</literal> is specified, the client will obtain
+        appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. Otherwise, the value of
+        this parameter will be used. Similarly to
+        <literal>oauth_issuer</literal>, if the client's scope setting does not
+        contain the server's required scopes, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
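Putting these parameters together, a client connection that supplies a client identifier along with an explicit issuer and scope might be started as below. (All values are hypothetical, and the device-code prompt shown is just one possible flow an issuer may use.)

```
$ psql 'host=example.org dbname=postgres oauth_client_id=my-client-id oauth_issuer=https://oauth.example.org oauth_scope=openid'
Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
```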
@@ -9961,6 +10045,50 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..38dc3c82ef
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,140 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+ </para>
+ <para>
+  OAuth validator modules must consist of at least an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared
+   library, using the value of <xref linkend="guc-oauth-validator-library"/>
+   as the library base name. The normal library search path is used to locate
+   the library. To provide the validator callbacks and to indicate that the
+   library is an OAuth validator module, the library must define a function
+   named <function>_PG_oauth_validator_module_init</function>. This function
+   must return a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must have server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <structname>ValidatorModuleResult</structname> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index bb9d7f5a8e..9478e59f56 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: oauth
+###############################################################
+
+oauth = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
+
+  if oauth.found()
+    oauth_library = 'curl'
+    cdata.set('USE_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3038,6 +3067,7 @@ libpq_deps += [
   gssapi,
   ldap_r,
   libintl,
+  oauth,
   ssl,
 ]
 
@@ -3710,6 +3740,7 @@ if meson.version().version_compare('>=0.57')
       'llvm': llvm,
       'lz4': lz4,
       'nls': libintl,
+      'oauth': oauth,
       'openssl': ssl,
       'pam': pam,
       'plperl': perl_dep,
diff --git a/meson_options.txt b/meson_options.txt
index b942155760..ffdfd57751 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -118,6 +118,10 @@ option('lz4', type: 'feature', value: 'auto',
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for OAuth 2.0 support (curl)')
+
 option('pam', type: 'feature', value: 'auto',
   description: 'PAM support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 42f50b4976..9b81b6fd58 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth	= @with_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..e7c3a721db
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,727 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	if (!ret->authorized)
+	{
+		status = false;
+		goto cleanup;
+	}
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("oauth_validator_library is not set"));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..17032fd812 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2042,32 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2095,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2480,24 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
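
With the HBA changes above, an oauth entry might look like the following (the issuer and scope values are placeholders, not defaults; there is deliberately no default issuer):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth issuer="https://oauth.example.org" scope="openid postgres"

# Or let the validator make the authorization decision, bypassing pg_ident:
host    all       all   samehost  oauth issuer="https://oauth.example.org" scope="openid postgres" trust_validator_authz=1
```

Note that combining map with trust_validator_authz is rejected by parse_hba_line() as a configuration error.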
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8a67f01200..d4aad280ea 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4794,6 +4795,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
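
Since the new GUC is PGC_SIGHUP and marked GUC_NOT_IN_SAMPLE, the admin sets it in postgresql.conf; the library name below is a placeholder:

```
# postgresql.conf
oauth_validator_library = 'my_oauth_validator'   # resolved like any other dynamic library
```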
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index cdd9a6e935..2f4e3a8f63 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -700,6 +703,12 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
+#undef USE_OAUTH
+
+/* Define to 1 to use libcurl for OAuth support. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..477c834b40 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -63,6 +63,14 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifneq ($(with_oauth),no)
+OBJS += fe-auth-oauth.o
+
+ifeq ($(with_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +89,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +118,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..6867119883
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2406 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
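+
+/*
+ * For illustration only: a hypothetical invocation such as
+ *
+ *     CHECK_SETOPT(actx, CURLOPT_URL, url, return false);
+ *
+ * appends "failed to set CURLOPT_URL on OAuth connection: ..." to
+ * actx->errbuf on failure, and then executes the FAILACTION argument.
+ */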
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->type == JSON_TOKEN_ARRAY_START */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Compare only the prefix, not the whole string: the header may carry
+	 * media type parameters after the type itself, which are handled below.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
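+
+/*
+ * For illustration only, with type = "application/json" the checks above
+ * behave as follows:
+ *
+ *     "application/json"                 accepted
+ *     "application/json; charset=utf-8"  accepted (parameters are ignored)
+ *     "application/jsonp"                rejected
+ */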
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
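+
+/*
+ * For illustration only, applying the rules above:
+ *
+ *     "interval": 7.2    ->  8        (fractions round up)
+ *     "interval": 0      ->  1        (clamped; 0 when actx->debugging)
+ *     "interval": 1e10   ->  INT_MAX
+ */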
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either; both entries share a backing pointer, so filling
+		 * in either one satisfies the REQUIRED check.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
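+
+/*
+ * For reference, a device authorization response that the parser above
+ * accepts looks roughly like this (adapted from RFC 8628, Section 3.2):
+ *
+ *     {
+ *       "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
+ *       "user_code": "WDJB-MJHT",
+ *       "verification_uri": "https://example.com/device",
+ *       "expires_in": 1800,
+ *       "interval": 5
+ *     }
+ */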
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available for this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for receiving debug information from libcurl. The
+		 * callback takes effect only while CURLOPT_VERBOSE is set, so install
+		 * it before enabling verbose output.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE which by default is 16kb (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		markPQExpBufferBroken(buf);
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
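As a standalone illustration of the "%20"-to-'+' substitution above (outside the patch, with plain stdlib string handling; form_plus_encode is a made-up name, not part of libpq):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Replace every "%20" in an already-percent-encoded string with '+', as the
 * query flavor of application/x-www-form-urlencoded requires for spaces.
 * Returns a malloc'd copy; the caller frees it. (Illustrative sketch only.)
 */
static char *
form_plus_encode(const char *escaped)
{
	/* The result is never longer than the input: '+' replaces three bytes. */
	char	   *out = malloc(strlen(escaped) + 1);
	char	   *dst = out;
	const char *match;

	if (!out)
		return NULL;

	while ((match = strstr(escaped, "%20")) != NULL)
	{
		memcpy(dst, escaped, match - escaped);
		dst += match - escaped;
		*dst++ = '+';
		escaped = match + 3;	/* skip past "%20" */
	}
	strcpy(dst, escaped);

	return out;
}
```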
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
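The pair-joining done by build_urlencoded() boils down to inserting '&' between pairs. A fixed-buffer sketch (form_append is a hypothetical helper, and real code would also track truncation):

```c
#include <stdio.h>
#include <string.h>

/*
 * Append "key=value" to a form body, preceded by '&' when the body already
 * has content -- the same shape build_urlencoded() produces. Both inputs
 * are assumed to be URL-encoded already. (Fixed-size sketch for clarity.)
 */
static void
form_append(char *body, size_t bodysize, const char *key, const char *value)
{
	size_t		len = strlen(body);

	snprintf(body + len, bodysize - len, "%s%s=%s",
			 len > 0 ? "&" : "", key, value);
}
```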
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = NULL; /* grant_types_supported is NULL here */
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides a device authorization
+ * endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by run_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that return 403
+	 * instead, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		/* Construct our Bearer token. */
+		resetPQExpBuffer(&actx->work_data);
+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
+
+		if (PQExpBufferDataBroken(actx->work_data))
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		*token = strdup(actx->work_data.data);
+		if (!*token)
+		{
+			actx_error(actx, "out of memory");
+			goto token_cleanup;
+		}
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
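The slow_down handling above can be restated as a tiny standalone check. (apply_slow_down is a hypothetical name; note that testing against INT_MAX before adding avoids the signed-overflow undefined behavior that a post-hoc "interval < prev_interval" comparison technically relies on.)

```c
#include <limits.h>
#include <stdbool.h>

/*
 * RFC 8628, Sec. 3.5: a slow_down error permanently increases the polling
 * interval by five seconds. Fail on overflow instead of wrapping.
 * (Standalone restatement of the patch's logic, not part of it.)
 */
static bool
apply_slow_down(int *interval)
{
	if (*interval > INT_MAX - 5)
		return false;			/* would overflow */

	*interval += 5;
	return true;
}
```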
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+				/* FALLTHROUGH */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..f5fc6ebc23
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,669 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+static char *
+client_initial_response(PGconn *conn, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	char	   *response = NULL;
+
+	if (!token)
+	{
+		/*
+		 * Either programmer error, or something went badly wrong during the
+		 * asynchronous fetch.
+		 *
+		 * TODO: users shouldn't see this; what action should they take if
+		 * they do?
+		 */
+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+		return NULL;
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
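For reference, the message client_initial_response() builds is the RFC 7628 initial client response: a GS2 header ("n,,") followed by key/value pairs delimited by 0x01 bytes and a double-separator terminator. A self-contained sketch (build_initial_response is a made-up name):

```c
#include <stdio.h>

#define KVSEP "\x01"			/* RFC 7628 key/value separator */

/*
 * Format the OAUTHBEARER initial client response: GS2 header, then the
 * "auth" key carrying the Bearer token, then the terminating separators.
 * Returns the would-be length, as snprintf does.
 */
static int
build_initial_response(char *buf, size_t size, const char *bearer_token)
{
	return snprintf(buf, size, "n,," KVSEP "auth=%s" KVSEP KVSEP,
					bearer_token);
}
```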
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		PQExpBufferData token;
+
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		initPQExpBuffer(&token);
+		appendPQExpBuffer(&token, "Bearer %s", request->token);
+
+		if (PQExpBufferDataBroken(token))
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = token.data;
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			PQExpBufferData token;
+
+			initPQExpBuffer(&token);
+			appendPQExpBuffer(&token, "Bearer %s", request.token);
+
+			if (PQExpBufferDataBroken(token))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			state->token = token.data;
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+		/*
+		 * Use our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly with an empty
+				 * token. This doesn't require any asynchronous work.
+				 */
+				state->token = strdup("");
+				if (!state->token)
+				{
+					libpq_append_conn_error(conn, "out of memory");
+					return SASL_FAILED;
+				}
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..5d311d4107 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,15 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+#ifdef USE_OAUTH
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
+#endif
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +588,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +674,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +704,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1026,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1195,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1212,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1542,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..b53f8eae9b 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,17 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+#ifdef USE_OAUTH
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+#endif
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3913,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3960,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4676,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4794,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7321,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..a38c571107 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +105,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +728,74 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										int *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,9 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..0181e5cc03 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -40,6 +40,13 @@ if gssapi.found()
   )
 endif
 
+if oauth.found()
+  libpq_sources += files('fe-auth-oauth.c')
+  if oauth_library == 'curl'
+    libpq_sources += files('fe-auth-oauth-curl.c')
+  endif
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/interfaces/libpq/pqexpbuffer.c b/src/interfaces/libpq/pqexpbuffer.c
index 037875c523..9473ed6749 100644
--- a/src/interfaces/libpq/pqexpbuffer.c
+++ b/src/interfaces/libpq/pqexpbuffer.c
@@ -46,7 +46,7 @@ static const char *const oom_buffer_ptr = oom_buffer;
  *
  * Put a PQExpBuffer in "broken" state if it isn't already.
  */
-static void
+void
 markPQExpBufferBroken(PQExpBuffer str)
 {
 	if (str->data != oom_buffer)
diff --git a/src/interfaces/libpq/pqexpbuffer.h b/src/interfaces/libpq/pqexpbuffer.h
index d05010066b..9956829a88 100644
--- a/src/interfaces/libpq/pqexpbuffer.h
+++ b/src/interfaces/libpq/pqexpbuffer.h
@@ -121,6 +121,12 @@ extern void initPQExpBuffer(PQExpBuffer str);
 extern void destroyPQExpBuffer(PQExpBuffer str);
 extern void termPQExpBuffer(PQExpBuffer str);
 
+/*------------------------
+ * markPQExpBufferBroken
+ *		Put a PQExpBuffer in "broken" state if it isn't already.
+ */
+extern void markPQExpBufferBroken(PQExpBuffer str);
+
 /*------------------------
  * resetPQExpBuffer
  *		Reset a PQExpBuffer to empty
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 850e927584..dec7f0d029 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -235,6 +235,7 @@ pgxs_deps = {
   'llvm': llvm,
   'lz4': lz4,
   'nls': libintl,
+  'oauth': oauth,
   'pam': pam,
   'perl': perl_dep,
   'python': python3_dep,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14..bdfd5f1f8d 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index c829b61953..bd13e4afbd 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..402369504d
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,33 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..86813e8b6a
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,33 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..5696207155
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,352 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
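The $vschars_esc value above is just $vschars with backslash-escaping applied for inclusion in a single-quoted connection-string literal (backslash and single quote are the two characters that need it). The transformation itself, sketched in Python for clarity (illustrative only, not part of the patch):

```python
def quote_connstr_value(val: str) -> str:
    """Escape a value for a single-quoted libpq connection-string
    literal: backslashes and single quotes are backslash-escaped."""
    return "'" + val.replace("\\", "\\\\").replace("'", "\\'") + "'"
```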
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
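The connstr() helper smuggles test parameters to the mock server as Base64-encoded JSON stuffed into oauth_client_id; oauth_server.py reverses the encoding in do_POST(). The same round trip in Python, for reference (helper names here are illustrative):

```python
import base64
import json

def connstr_params(**params) -> str:
    """Encode test parameters the way the Perl connstr() helper does."""
    encoded = base64.b64encode(json.dumps(params).encode()).decode("ascii")
    return f"oauth_client_id={encoded}"

def decode_client_id(client_id: str) -> dict:
    """The mock server's side of the round trip."""
    return json.loads(base64.b64decode(client_id))
```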
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer. (Scope is hardcoded to empty to cover that
+# case as well.)
+$common_connstr =
+  "user=test dbname=postgres oauth_issuer=$issuer oauth_scope=''";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..35ba8abb61
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,359 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
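_check_authn() above expects the client-credentials encoding from RFC 6749 §2.3.1: client_id and client_secret are each form-urlencoded before being joined with a colon and Base64-encoded into the Basic header. The client side of that, as a standalone sketch (the helper name is invented):

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, secret: str) -> str:
    """Build the RFC 6749 sec. 2.3.1 Basic credentials: urlencode each
    half, join with ':', then Base64-encode the pair."""
    user = urllib.parse.quote_plus(client_id)
    pw = urllib.parse.quote_plus(secret)
    creds = base64.b64encode(f"{user}:{pw}".encode()).decode("ascii")
    return f"Basic {creds}"
```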
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
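The retry tracking above exists to exercise the client's RFC 8628 §3.5 polling behavior: keep requesting the token endpoint at the advertised interval, and back off by five seconds on slow_down. From the client's perspective that loop looks roughly like this (hypothetical helper, not libpq code):

```python
import time

def poll_for_token(request_token, interval=5, max_tries=10):
    """Sketch of RFC 8628 sec. 3.5 polling: retry on
    authorization_pending, grow the interval on slow_down."""
    for _ in range(max_tries):
        resp = request_token()
        if "access_token" in resp:
            return resp
        err = resp.get("error")
        if err == "slow_down":
            interval += 5  # mandatory five-second increase
        elif err != "authorization_pending":
            raise RuntimeError(resp.get("error_description", err))
        time.sleep(interval)
    raise TimeoutError("device authorization timed out")
```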
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..dbba326bc4
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,100 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
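
The test module above approves every token unconditionally, which is what makes it useful for exercising the callback plumbing. As a point of comparison (not part of the patch, and with hypothetical helper names), a real validator for JWT-style bearer tokens would start by parsing the token's claims, and must additionally verify the token against the issuer's keys before trusting any of them:

```python
import base64
import json


def peek_jwt_claims(token):
    """Decodes the claims segment of a JWT-style bearer token WITHOUT
    verifying its signature -- illustration only. A production validator
    must verify the token first (e.g. against the issuer's JWKS, or via
    an RFC 7662 introspection endpoint) before trusting these claims."""
    _header, claims_b64, _sig = token.split(".")
    # Base64url segments are unpadded; restore padding before decoding.
    padded = claims_b64 + "=" * (-len(claims_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```
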
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 007571e948..83360b397a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
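
For reference, the child process that run() launches (t/oauth_server.py, not shown in this excerpt) has to pick its own port and report it back over the pipe before the Perl side can proceed. A hedged sketch of that side of the handshake, with a hypothetical function name:

```python
import socket


def advertise_port():
    """Hypothetical child side of the OAuthServer.pm handshake: bind an
    ephemeral port, report it on stdout, then start serving. (The real
    t/oauth_server.py is not shown in this excerpt.)"""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))  # port 0: the kernel picks a free port
    s.listen()
    _, port = s.getsockname()
    print(port, flush=True)  # flush, or the parent's read() will hang
    return s, port
```
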
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 171a7dd5d2..deaab159ba 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1718,6 +1722,8 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1782,6 +1788,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1942,11 +1949,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3067,6 +3077,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3463,6 +3475,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3663,6 +3677,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

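Not part of the patch, but for readers unfamiliar with OAUTHBEARER: the initial client response that the test suite in the next attachment builds and picks apart has a simple framing, defined by RFC 7628. A sketch in a few lines of Python (the helper names are made up for illustration):

```python
def build_initial_response(token):
    """Builds an OAUTHBEARER initial response: GS2 header ("n,,"),
    ^A-separated key/value pairs, and a double-^A terminator."""
    return b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"


def parse_auth_value(initial):
    """Extracts the auth value (e.g. b"Bearer ...") from a response."""
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"        # no channel binding or authzid
    assert kvpairs[-2:] == [b"", b""]  # double-^A terminator
    key, _, value = kvpairs[1].partition(b"=")
    assert key == b"auth"
    return value
```
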
Attachment: v35-0002-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From cca5de67267b78eaa3fcf990971bb462766e993d Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v35 2/2] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    7 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 2040 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 +++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 +++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5759 insertions(+), 2 deletions(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 302cf0487b..175b2eff79 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
 
 
 # What files to preserve in case tests fail
@@ -320,6 +320,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -375,6 +376,8 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
@@ -385,7 +388,7 @@ task:
             -Dllvm=disabled \
             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
             -DPERL=perl5.36-i386-linux-gnu \
-            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
             build-32
         EOF
 
diff --git a/meson.build b/meson.build
index 9478e59f56..35056a24d5 100644
--- a/meson.build
+++ b/meson.build
@@ -3381,6 +3381,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3542,6 +3545,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
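
Not part of the patch, but worth noting for reviewers of the h_i() helper above: RFC 5802's Hi() is exactly PBKDF2 with HMAC-SHA-256 as the PRF and dkLen equal to the hash output length, so the pure-Python implementation can be cross-checked against the standard library:

```python
import hashlib
import hmac


def hmac_256(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def h_i(data, salt, i):
    # Hi(str, salt, i) from RFC 5802 Section 2.2:
    #   U1 = HMAC(str, salt || INT(1)); Uk = HMAC(str, U(k-1))
    #   Hi = U1 XOR U2 XOR ... XOR Ui
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc


# RFC 5802 notes Hi() is PBKDF2 [RFC 2898] with HMAC as the PRF, so:
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```
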
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..e5b54fd937
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2040 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_oauth") == "none",
+    reason="OAuth client tests require --with-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert len(kvpairs) == 4
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+
+    # Split on the first '=' only; the value itself may contain '='.
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
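+
+
+def make_initial_response(token):
+    """
+    Illustrative helper (not used by the tests; the real data comes from
+    libpq): constructs the OAUTHBEARER initial client response that
+    get_auth_value() parses. Per RFC 7628, it consists of a GS2 header
+    ("n,," -- no channel binding or authzid), a ^A separator, the auth
+    kvpair, and a double-^A terminator.
+    """
+    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"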
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC 7628, Sec. 3.2.3, the client must reply with a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("OpenID provider thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the return value if the test doesn't install an impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Per RFC 8628, Sec. 3.2, clients must default to a 5-second poll interval.
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one. It's not very efficient,
+    but IMO it's easier to read and maintain.
+    """
+    return "|".join(f"({p})" for p in patterns)
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A (kvsep) response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
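As an aside, the `int_max` computation used above can be checked in isolation. This standalone sketch (not part of the patch) derives INT_MAX from the width of the platform's C int rather than assuming 32 bits:

```python
import ctypes

# c_uint(-1) wraps around to the all-ones bit pattern; floor-dividing by
# two clears what would be the sign bit, yielding the platform's INT_MAX.
int_max = ctypes.c_uint(-1).value // 2
bits = 8 * ctypes.sizeof(ctypes.c_uint)
assert int_max == 2 ** (bits - 1) - 1
```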
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    this is an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..045526a43a
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
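As a quick sanity check (a standalone sketch, not part of the patch), the packed version numbers this helper produces can be verified with plain integers:

```python
def protocol(major, minor):
    # Same packing as the pq3 helper: major version in the high 16 bits.
    return (major << 16) | minor

assert protocol(3, 0) == 0x30000         # protocol version 3.0
assert protocol(1234, 5679) == 80877103  # the SSLRequest magic code
```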
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
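The wire form this adapter produces can be sketched without construct; `encode_keyvalues` below is a hypothetical stand-in for the adapter's `_encode` step plus serialization, shown only to illustrate the layout:

```python
def encode_keyvalues(params):
    """Flatten a dict into the startup packet's key/value wire form:
    alternating NUL-terminated keys and values, closed by an empty string."""
    out = b""
    for k, v in params.items():
        out += k.encode("utf-8") + b"\x00"
        out += v.encode("utf-8") + b"\x00"
    return out + b"\x00"  # list terminator

assert encode_keyvalues({"user": "alice"}) == b"user\x00alice\x00\x00"
```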
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
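Outside of construct, the framing that Pq3 describes is just a type byte plus a self-inclusive length field. A minimal sketch (the `frame` helper name is invented for illustration):

```python
import struct

def frame(msg_type, payload):
    # The length field counts itself (4 bytes) plus the payload,
    # but not the leading type byte.
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

# A Terminate ('X') message has an empty payload: just the type and length.
assert frame(b"X", b"") == b"X\x00\x00\x00\x04"
```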
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
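For illustration, here is a condensed sketch (not part of the patch) of the same translation table and its effect on a mixed buffer:

```python
# Map every non-ASCII or unprintable byte to '.', as the hexdumps expect.
unprintable = bytes(i for i in range(256) if i >= 128 or not chr(i).isprintable())
table = bytes.maketrans(unprintable, b"." * len(unprintable))

assert b"OK\x00\xff!".translate(table) == b"OK..!"
```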
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
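The recv() graft above relies on the descriptor protocol: calling `__get__` binds a plain function to a particular instance. A self-contained sketch of the same trick using a local socketpair:

```python
import socket

a, b = socket.socketpair()
with a, b, a.makefile("rwb", buffering=0) as sfile:
    def recv(self, *args):
        return a.recv(*args)

    # Bind the function to this particular SocketIO instance.
    sfile.recv = recv.__get__(sfile)

    b.sendall(b"ping")
    received = sfile.recv(4)

assert received == b"ping"
```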
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types member
+    that should be assigned to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (the special 1234.5679 protocol version).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
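For reference, the SSLRequest that tls_handshake() sends is only eight bytes on the wire; a standalone sketch (not part of the patch):

```python
import struct

# Length (8, counting itself) followed by the 1234.5679 magic version.
ssl_request = struct.pack("!ii", 8, (1234 << 16) | 5679)
assert ssl_request == b"\x00\x00\x00\x08\x04\xd2\x16\x2f"
```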
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
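For readers following along without building the module, the decision logic in test_validate() above can be modeled in a few lines of Python. This is a hypothetical sketch mirroring the C callback under the same GUC names (expected_bearer, set_authn_id, authn_id, reflect_role); it is not part of the patch:

```python
# Sketch of the test validator's decision logic (hypothetical, mirrors the C
# test_validate() callback above; the field names follow ValidatorModuleResult).
from dataclasses import dataclass
from typing import Optional


@dataclass
class ValidatorResult:
    authorized: bool = False
    authn_id: Optional[str] = None


def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    res = ValidatorResult()

    if reflect_role:
        # Ignore the token entirely; reflect the requested role as identity.
        res.authorized = True
        res.authn_id = role
    else:
        # Authorize only an exact match against the configured bearer token.
        if expected_bearer and token == expected_bearer:
            res.authorized = True
        if set_authn_id:
            res.authn_id = authn_id

    return res
```

The two independent knobs (authorized vs. authn_id) are what let the test suite exercise the authentication-only, authorization-only, and combined flows described at the top of the thread.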
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..ea31ad4f87
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
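As a standalone illustration of the prepend-and-restore pattern above (a hypothetical demo, not part of the patch): overrides are written first, matching pg_hba.conf's first-match-wins semantics, and the original file comes back atomically on exit.

```python
# Hypothetical demo of the prepend_file() pattern: temporarily prepend an
# override line to a config file, then restore the original on context exit.
import contextlib
import os
import shutil
import tempfile


@contextlib.contextmanager
def prepend_file(path, lines, *, suffix=".bak"):
    bak = path + suffix
    shutil.copy2(path, bak)  # keep a pristine backup
    try:
        with open(path, "w") as new, open(bak) as orig:
            new.writelines(lines)  # overrides go first: first match wins
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)  # atomic restore


def demo():
    with tempfile.TemporaryDirectory() as d:
        hba = os.path.join(d, "pg_hba.conf")
        original = "host all all 127.0.0.1/32 trust\n"
        with open(hba, "w") as f:
            f.write(original)

        with prepend_file(hba, ["host mydb me samehost oauth\n"]):
            with open(hba) as f:
                inside = f.read()

        with open(hba) as f:
            after = f.read()

    return inside, after
```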
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    length in ASCII characters may be specified; if unset, a small 16-character
+    token is generated. The length must be a multiple of 4.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
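The size arithmetic above works because token_urlsafe() emits unpadded base64url, where every 3 input bytes become exactly 4 output characters; requesting `size // 4 * 3` bytes therefore yields exactly `size` characters when `size` is a multiple of 4. A quick standalone check (stdlib only, nothing patch-specific):

```python
# Verify the base64url length arithmetic that bearer_token() relies on.
import math
import secrets


def check_token_lengths():
    # token_urlsafe(n) emits unpadded base64url: ceil(n * 4 / 3) characters.
    for nbytes in range(1, 32):
        assert len(secrets.token_urlsafe(nbytes)) == math.ceil(nbytes * 4 / 3)

    # For a multiple-of-4 target length, n = size // 4 * 3 bytes is exact.
    for size in (16, 1024, 4096):
        assert len(secrets.token_urlsafe(size // 4 * 3)) == size

    return True


check_token_lengths()
```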
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. As an alternative to a bearer token, the initial response's
+    auth field may be specified explicitly to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
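The wire format built above comes from RFC 7628: a GS2 header (`n,,` means no channel binding and no authzid), then `\x01`-separated key=value pairs, closed by a double `\x01`. A rough standalone parser sketch for the happy path (hypothetical helper, not part of the patch, and far less strict than the server-side checks exercised later in this file):

```python
# Rough parser for an OAUTHBEARER client-first message (RFC 7628), covering
# only the well-formed case: gs2-header \x01 key=value pairs \x01\x01.
def parse_oauthbearer_initial(data: bytes):
    gs2, _, rest = data.partition(b"\x01")
    if not rest.endswith(b"\x01\x01"):
        raise ValueError("message did not contain a final terminator")

    kvs = {}
    for pair in rest[:-2].split(b"\x01"):
        if not pair:
            continue
        key, sep, value = pair.partition(b"=")
        if not sep:
            raise ValueError("key without a value")
        kvs[key] = value

    return gs2, kvs
```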
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
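The discovery "challenge" checked above is the RFC 7628 failure path: on a bad token, the server sends a JSON status document pointing the client at the issuer's OpenID Connect discovery endpoint before failing the exchange. A minimal sketch of such a challenge body (hypothetical helper, assuming only the three fields the test asserts on):

```python
# Build the kind of error-challenge JSON body that expect_handshake_failure()
# validates: status, scope, and the issuer's well-known discovery URL.
import json


def error_challenge(issuer, scope):
    return json.dumps(
        {
            "status": "invalid_token",
            "scope": scope,
            "openid-configuration": issuer + "/.well-known/openid-configuration",
        }
    )
```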
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator with the expected
+    behavior. Any modified settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError:
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Fails an assertion unless exactly
+        one such field is found.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional data after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
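
For reviewers following along: the raw strings in the parametrized cases above all follow the OAUTHBEARER client-first layout from RFC 7628: a GS2 header, then ^A (0x01)-separated key=value pairs, closed by a double ^A terminator. A minimal sketch of a well-formed builder (the helper name is mine, not something in the patch):

```python
KVSEP = b"\x01"  # RFC 7628 uses ^A (0x01) as the key/value pair separator

def build_client_first(token):
    """Build a well-formed OAUTHBEARER initial client response.

    Layout: GS2 header, KVSEP, one auth key/value pair, then a second
    KVSEP acting as the final terminator.
    """
    gs2 = b"n,,"  # no channel binding, no authorization identity
    auth = b"auth=Bearer " + token.encode("ascii")
    return gs2 + KVSEP + auth + KVSEP + KVSEP

# e.g. build_client_first("abcd") == b"n,,\x01auth=Bearer abcd\x01\x01",
# matching the valid message shape used by test_oauth_empty_initial_response.
```

Each malformed case above drops or duplicates exactly one of these pieces (the auth pair, a value terminator, or the final double-KVSEP).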
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
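
The expected strings in these _DebugStream tests all encode one fixed hexdump row format: a direction marker, a four-hex-digit offset, hex bytes space-padded to a full 16-byte row (47 characters), then a printable-ASCII rendering. A rough standalone sketch of that formatting, as I read it from the expected strings (my own helper, not the patch's implementation):

```python
def hexdump_line(offset, chunk, direction="<", indent=""):
    """Format one row in the _DebugStream style: direction marker, 4-digit
    hex offset, hex bytes padded to a full 16-byte row (47 chars), then
    printable ASCII with '.' substituted for everything else."""
    hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(47)
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{indent}{direction} {offset:04x}:\t{hexpart}\t{text}"
```

This reproduces, e.g., the `"< 0010:\t71 72 73 74 75 ... \tqrstu"` row from test_DebugStream_read.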
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
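
All of the Pq3 test vectors above share the same v3 framing that pq3.Pq3 implements: a one-byte type code, a four-byte big-endian length counting itself plus the payload (but not the type byte), then the payload. A minimal illustration of that rule (the helper name is mine):

```python
import struct

def frame_v3(msg_type: bytes, payload: bytes = b"") -> bytes:
    """Frame a protocol-v3 message: 1-byte type code, then a 4-byte
    big-endian length covering itself and the payload (not the type)."""
    return msg_type + struct.pack("!i", len(payload) + 4) + payload

# Reproduces the raw bytes in the parse/build cases above, e.g.:
#   frame_v3(b"Q", b"!\x00")  -> b"Q\x00\x00\x00\x06!\x00"   (Query)
#   frame_v3(b"X")            -> b"X\x00\x00\x00\x04"        (Terminate)
```

The "overridden len" build cases are exactly the packets where the tests deliberately break this invariant.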
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#151Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#149)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Oct 29, 2024 at 10:41 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Question is though, if we added PAM
today would we have done the same?

I assume so; the client can't tell PAM apart from LDAP or any other
plaintext method. (In the same vein, the server can't tell if the
client uses libcurl to grab a token, or something entirely different.)

--Jacob

#152Antonin Houska
ah@cybertec.at
In reply to: Jacob Champion (#142)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Thu, Oct 17, 2024 at 10:51 PM Antonin Houska <ah@cybertec.at> wrote:

* oauth_validator_library is defined as PGC_SIGHUP - is that intentional?

Yes, I think it's going to be important to let DBAs migrate their
authentication modules without a full restart. That probably deserves
more explicit testing, now that you mention it. Is there a specific
concern that you have with that?

No concern. I was just trying to imagine when the module needs to be changed.

And regardless, the library appears to be loaded by every backend during
authentication. Why isn't it loaded by postmaster like libraries listed in
shared_preload_libraries? fork() would then ensure that the backends do have
the library in their address space.

It _can_ be, if you want -- there's nothing that I know of preventing
the validator from also being preloaded with its own _PG_init(), is
there? But I don't think it's a good idea to force that, for the same
reason we want to allow SIGHUP.

Loading the library by postmaster does not prevent the backends from reloading
it on SIGHUP later. I was simply concerned about performance. (I proposed
loading the library at another stage of backend initialization rather than
adding _PG_init() to it.)

* pg_fe_run_oauth_flow()

When first time here
case OAUTH_STEP_TOKEN_REQUEST:
if (!handle_token_response(actx, &state->token))
goto error_return;

the user hasn't been prompted yet so ISTM that the first token request must
always fail. It seems more logical if the prompt is set to the user before
sending the token request to the server. (Although the user probably won't
be that fast to make the first request succeed, so consider this just a
hint.)

That's also intentional -- if the first token response fails for a
reason _other_ than "we're waiting for the user", then we want to
immediately fail hard instead of making them dig out their phone and
go on a two-minute trip, because they're going to come back and find
that it was all for nothing.

There's a comment immediately below the part you quoted that mentions
this briefly; maybe I should move it up a bit?

That's fine, I understand now.

* As long as I understand, the following comment would make sense:

diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f943a31cc08..97259fb5654 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -518,6 +518,7 @@ oauth_exchange(void *opaq, bool final,
switch (state->state)
{
case FE_OAUTH_INIT:
+                       /* Initial Client Response */
Assert(inputlen == -1);

if (!derive_discovery_uri(conn))

There are multiple "initial client response" cases, though. What
questions are you hoping to clarify with the comment? Maybe we can
find a more direct answer.

Easiness of reading is the only "question" here :-) It might not always be
obvious why a variable should have some particular value. In general, the
Assert() statements are almost always preceded with a comment in the PG
source.

Or, doesn't the FE_OAUTH_INIT branch of the switch statement actually fit
better into oauth_init()?

oauth_init() is the mechanism initialization for the SASL framework
itself, which is shared with SCRAM. In the current architecture, the
init callback doesn't take the initial client response into
consideration at all.

Sure. The FE_OAUTH_INIT branch in oauth_exchange() (FE) also does not generate
the initial client response.

Based on reading the SCRAM implementation, I concluded that the init()
callback can do authentication method specific things, but unlike exchange()
it does not generate any output.

Generating the client response is up to the exchange callback -- and
even if we moved the SASL_ASYNC processing elsewhere, I don't think we
can get rid of its added complexity. Something has to signal upwards
that it's time to transfer control to an async engine. And we can't
make the asynchronicity a static attribute of the mechanism itself,
because we can skip the flow if something gives us a cached token.

I didn't want to skip the flow. I thought that the init() callback could be
made responsible for getting the token, but forgot that it still needs some
way to signal to the caller that the async flow is needed.

Anyway, are you sure that pg_SASL_continue() can also receive the SASL_ASYNC
value from oauth_exchange()? My understanding is that pg_SASL_init() receives
it if there is no token, but after that, oauth_exchange() is not called util
the token is available, and thus it should not return SASL_ASYNC anymore.

--
Antonin Houska
Web: https://www.cybertec-postgresql.com

#153Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Antonin Houska (#152)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Oct 31, 2024 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:

And regardless, the library appears to be loaded by every backend during
authentication. Why isn't it loaded by postmaster like libraries listed in
shared_preload_libraries? fork() would then ensure that the backends do have
the library in their address space.

It _can_ be, if you want -- there's nothing that I know of preventing
the validator from also being preloaded with its own _PG_init(), is
there? But I don't think it's a good idea to force that, for the same
reason we want to allow SIGHUP.

Loading the library by postmaster does not prevent the backends from reloading
it on SIGHUP later. I was simply concerned about performance. (I proposed
loading the library at another stage of backend initialization rather than
adding _PG_init() to it.)

Okay. I think this is going to be one of the slower authentication
methods by necessity: the builtin flow in libpq requires a human in
the loop, and an online validator is going to be making several HTTP
calls from the backend. So if it turns out later that we need to
optimize the backend logic, I'd prefer to have a case study in hand;
otherwise I think we're likely to optimize the wrong things.

Easiness of reading is the only "question" here :-) It might not always be
obvious why a variable should have some particular value. In general, the
Assert() statements are almost always preceded with a comment in the PG
source.

Oh, an assertion label! I can absolutely add one; I originally thought
you were proposing a label for the case itself.

Or, doesn't the FE_OAUTH_INIT branch of the switch statement actually fit
better into oauth_init()?

oauth_init() is the mechanism initialization for the SASL framework
itself, which is shared with SCRAM. In the current architecture, the
init callback doesn't take the initial client response into
consideration at all.

Sure. The FE_OAUTH_INIT branch in oauth_exchange() (FE) also does not generate
the initial client response.

It might, if it ends up falling through to FE_OAUTH_REQUESTING_TOKEN.
There are two paths that can do that: the case where we have no
discovery URI, and the case where a custom user flow returns a token
synchronously (it was probably cached).

Anyway, are you sure that pg_SASL_continue() can also receive the SASL_ASYNC
value from oauth_exchange()? My understanding is that pg_SASL_init() receives
it if there is no token, but after that, oauth_exchange() is not called until
the token is available, and thus it should not return SASL_ASYNC anymore.

Correct -- the only way for the current implementation of the
OAUTHBEARER mechanism to return SASL_ASYNC is during the very first
call. That's not an assumption I want to put into the higher levels,
though; I think Michael will be unhappy with me if I introduce
additional SASL coupling after the decoupling work that's been done
over the last few releases. :D

Thanks again,
--Jacob
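Jacob's decoupling argument can be sketched generically: the upper layer should be able to handle an async hand-off at any exchange step, not just the first one. Below is a minimal, hypothetical Python model of such a driver (all names are invented for illustration; the real implementation is C inside libpq):

```python
# Hypothetical SASL status codes; the real values live in libpq's C headers.
SASL_CONTINUE, SASL_ASYNC, SASL_COMPLETE = "continue", "async", "complete"


def drive_sasl(exchange, run_async_flow, send_to_server):
    """Drive a SASL exchange while tolerating SASL_ASYNC at any step.

    Nothing here assumes the async hand-off can only happen on the first
    call, so the mechanism remains free to change that behavior later.
    """
    data = None
    while True:
        status, output = exchange(data)
        if status == SASL_ASYNC:
            data = run_async_flow()  # e.g. run a device flow to get a token
        elif status == SASL_COMPLETE:
            return output
        else:  # SASL_CONTINUE: round-trip the output to the server
            data = send_to_server(output)


# A toy mechanism that needs the async flow exactly once, like OAUTHBEARER
# today: the first call yields SASL_ASYNC, the second call completes.
def fake_exchange(data):
    if data is None:
        return SASL_ASYNC, None
    return SASL_COMPLETE, "auth=Bearer " + data
```

A mechanism that returned SASL_ASYNC on some later step would work with this driver unchanged, which is the point of keeping the assumption out of the higher levels.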

#154jian he
jian.universality@gmail.com
In reply to: Jacob Champion (#153)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi there.
Zero knowledge of OAuth, just reading through v35-0001.
Forgive me if my comments are naive.

+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+ double parsed;
+ int cnt;
+
+ /*
+ * The JSON lexer has already validated the number, which is stricter than
+ * the %f format, so we should be good to use sscanf().
+ */
+ cnt = sscanf(interval_str, "%lf", &parsed);
+
+ if (cnt != 1)
+ {
+ /*
+ * Either the lexer screwed up or our assumption above isn't true, and
+ * either way a developer needs to take a look.
+ */
+ Assert(cnt == 1);
+ return 1; /* don't fall through in release builds */
+ }
+
+ parsed = ceil(parsed);
+
+ if (parsed < 1)
+ return actx->debugging ? 0 : 1;
+
+ else if (INT_MAX <= parsed)
+ return INT_MAX;
+
+ return parsed;
+}
The above Assert looks very wrong to me.

We can also use PG_INT32_MAX instead of INT_MAX
(generally I think PG_INT32_MAX looks more intuitive to me).

+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+ char   *device_code;
+ char   *user_code;
+ char   *verification_uri;
+ char   *interval_str;
+
+ /* Fields below are parsed from the corresponding string above. */
+ int interval;
+};

Clicking through the link https://www.rfc-editor.org/rfc/rfc8628#section-3.2,
it says:
"
expires_in
REQUIRED. The lifetime in seconds of the "device_code" and
"user_code".
interval
OPTIONAL. The minimum amount of time in seconds that the client
SHOULD wait between polling requests to the token endpoint. If no
value is provided, clients MUST use 5 as the default.
"
these two fields seem to differ from struct device_authz.
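For concreteness, here is a small Python sketch of a Device Authorization response modeled on the example in RFC 8628, section 3.2 (values taken from the RFC's own example), showing how a spec-conforming client would treat the two fields above:

```python
import json

# Device Authorization response, modeled on the RFC 8628 section 3.2 example.
response = json.loads("""
{
  "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
  "user_code": "WDJB-MJHT",
  "verification_uri": "https://example.com/device",
  "expires_in": 1800,
  "interval": 5
}
""")

expires_in = int(response["expires_in"])     # REQUIRED by the RFC
interval = int(response.get("interval", 5))  # OPTIONAL; default MUST be 5
```

Per the RFC, a response missing "interval" must still yield a 5-second polling interval, while a response missing "expires_in" is noncompliant.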

#155Daniel Gustafsson
daniel@yesql.se
In reply to: jian he (#154)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 4 Nov 2024, at 06:00, jian he <jian.universality@gmail.com> wrote:

+ if (cnt != 1)
+ {
+ /*
+ * Either the lexer screwed up or our assumption above isn't true, and
+ * either way a developer needs to take a look.
+ */
+ Assert(cnt == 1);
+ return 1; /* don't fall through in release builds */
+ }

The above Assert looks very wrong to me.

I think the point is to fail hard in development builds to ensure whatever
caused the disconnect between the json lexer and sscanf parsing is looked at.
It should probably be changed to Assert(false), which is the common pattern for
erroring out like this.

--
Daniel Gustafsson

#156Jacob Champion
jacob.champion@enterprisedb.com
In reply to: jian he (#154)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, Nov 3, 2024 at 9:00 PM jian he <jian.universality@gmail.com> wrote:

The above Assert looks very wrong to me.

I can switch to Assert(false) if that's preferred, but it makes part
of the libc assert() report useless. (I wish we had more fluent ways
to say "this shouldn't happen, but if it does, we still need to get
out safely.")

We can also use PG_INT32_MAX instead of INT_MAX
(generally I think PG_INT32_MAX looks more intuitive to me).

That's a fixed-width max; we want the maximum for the `int` type here.

expires_in
REQUIRED. The lifetime in seconds of the "device_code" and
"user_code".
interval
OPTIONAL. The minimum amount of time in seconds that the client
SHOULD wait between polling requests to the token endpoint. If no
value is provided, clients MUST use 5 as the default.
"
these two fields seem to differ from struct device_authz.

Yeah, Daniel and I had talked about being stricter about REQUIRED
fields that are not currently used. There's a comment making note of
this in parse_device_authz(). The v1 code will need to make expires_in
REQUIRED, so that future developers can develop features that depend
on it without worrying about breaking
currently-working-but-noncompliant deployments. (And if there are any
noncompliant deployments out there now, we need to know about them so
we can have that explicit discussion.)

Thanks,
--Jacob
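To summarize the semantics under discussion, here is a rough Python transcription of parse_interval()'s clamping behavior (an illustration only, not the patch's C code; INT_MAX is assumed to be 2^31 - 1, as on most platforms):

```python
import math

INT_MAX = 2**31 - 1  # assumed width of the C `int` type


def parse_interval(interval_str, debugging=False):
    """Approximate Python rendering of the patch's parse_interval().

    Ceil the value and clamp the result to [1, INT_MAX]; a zero-second
    interval is only honored in debug mode.
    """
    try:
        parsed = float(interval_str)
    except ValueError:
        # The C version Asserts here and returns 1 in release builds.
        return 1
    parsed = math.ceil(parsed)
    if parsed < 1:
        return 0 if debugging else 1
    if parsed >= INT_MAX:
        return INT_MAX
    return int(parsed)
```

For example, "4.2" rounds up to 5, negative or zero values clamp to a 1-second floor (0 in debug mode), and absurdly large values saturate at INT_MAX rather than overflowing.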

#157Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#150)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Oct 29, 2024 at 1:34 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Next up will be Antonin's suggested change to the Bearer handling, as
well as previously-discussed changes to the --with-oauth build option.

Done in v36, attached. The Bearer scheme is enforced in all cases
except initial discovery.

--with-builtin-oauth=curl is now the way to build with the
libcurl-based Device Authorization flow. If you don't want to build
with that option, you can still use the callback hooks for your own
flow. (libpq will give you a new error message if the server requires
OAUTHBEARER but you have no flows installed.) There is also a related
002_client.pl, which provides a basic framework for testing the hooks
without an authorization server. I expect to flesh that out more.

This also means that fe-auth-oauth.c is now always incorporated into
the client build. So I had to fix the SOCKET/int confusion on Windows
in libpq-fe.h: it now has a dependency on <winsock2.h>. I don't really
like that, and if anyone has a way to decouple those safely, I'm all
ears.

Thanks,
--Jacob

Attachments:

since-v35.diff.txt (text/plain; charset=UTF-8)
1:  dc4f869365 ! 1:  5730b875b8 Add OAUTHBEARER SASL mechanism
    @@ Commit message
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
     
      ## .cirrus.tasks.yml ##
    +@@ .cirrus.tasks.yml: env:
    +   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
    +   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
    +   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
    +-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
    ++  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
    + 
    + 
    + # What files to preserve in case tests fail
     @@ .cirrus.tasks.yml: task:
          chown root:postgres /tmp/cores
          sysctl kern.corefile='/tmp/cores/%N.%P.core'
    @@ .cirrus.tasks.yml: task:
        # NB: Intentionally build without -Dllvm. The freebsd image size is already
        # large enough to make VM startup slow, and even without llvm freebsd
     @@ .cirrus.tasks.yml: task:
    +         --buildtype=debug \
              -Dcassert=true -Dinjection_points=true \
              -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
    -         -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    -+        -Doauth=curl \
    ++        -Dbuiltin_oauth=curl \
              -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
              build
          EOF
    @@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        --with-libxslt
        --with-llvm
        --with-lz4
    -+  --with-oauth=curl
    ++  --with-builtin-oauth=curl
        --with-pam
        --with-perl
        --with-python
    @@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
      
      LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
        -Dllvm=enabled
    -+  -Doauth=curl
    ++  -Dbuiltin_oauth=curl
        -Duuid=e2fs
      
      
    @@ configure: with_uuid
      with_readline
      with_systemd
      with_selinux
    -+with_oauth
    ++with_builtin_oauth
      with_ldap
      with_krb_srvnam
      krb_srvtab
    @@ configure: with_krb_srvnam
      with_pam
      with_bsd_auth
      with_ldap
    -+with_oauth
    ++with_builtin_oauth
      with_bonjour
      with_selinux
      with_systemd
    @@ configure: Optional Packages:
        --with-pam              build with PAM support
        --with-bsd-auth         build with BSD Authentication support
        --with-ldap             build with LDAP support
    -+  --with-oauth=LIB        use LIB for OAuth 2.0 support (curl)
    ++  --with-builtin-oauth=LIB
    ++                          use LIB for built-in OAuth 2.0 client flows (curl)
        --with-bonjour          build with Bonjour support
        --with-selinux          build with SELinux support
        --with-systemd          build with systemd support
    @@ configure: $as_echo "$with_ldap" >&6; }
     +#
     +# OAuth 2.0
     +#
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with OAuth support" >&5
    -+$as_echo_n "checking whether to build with OAuth support... " >&6; }
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with built-in OAuth client support" >&5
    ++$as_echo_n "checking whether to build with built-in OAuth client support... " >&6; }
     +
     +
     +
    -+# Check whether --with-oauth was given.
    -+if test "${with_oauth+set}" = set; then :
    -+  withval=$with_oauth;
    ++# Check whether --with-builtin-oauth was given.
    ++if test "${with_builtin_oauth+set}" = set; then :
    ++  withval=$with_builtin_oauth;
     +  case $withval in
     +    yes)
    -+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
    ++      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
     +      ;;
     +    no)
    -+      as_fn_error $? "argument required for --with-oauth option" "$LINENO" 5
    ++      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
     +      ;;
     +    *)
     +
    @@ configure: $as_echo "$with_ldap" >&6; }
     +fi
     +
     +
    -+if test x"$with_oauth" = x"" ; then
    -+  with_oauth=no
    ++if test x"$with_builtin_oauth" = x"" ; then
    ++  with_builtin_oauth=no
     +fi
     +
    -+if test x"$with_oauth" = x"curl"; then
    ++if test x"$with_builtin_oauth" = x"curl"; then
     +
    -+$as_echo "#define USE_OAUTH 1" >>confdefs.h
    ++$as_echo "#define USE_BUILTIN_OAUTH 1" >>confdefs.h
     +
     +
     +$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
    @@ configure: $as_echo "$with_ldap" >&6; }
     +    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests requires --with-python to run" >&5
     +$as_echo "$as_me: WARNING: *** OAuth support tests requires --with-python to run" >&2;}
     +  fi
    -+elif test x"$with_oauth" != x"no"; then
    -+  as_fn_error $? "--with-oauth must specify curl" "$LINENO" 5
    ++elif test x"$with_builtin_oauth" != x"no"; then
    ++  as_fn_error $? "--with-builtin-oauth must specify curl" "$LINENO" 5
     +fi
     +
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth" >&5
    -+$as_echo "$with_oauth" >&6; }
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_builtin_oauth" >&5
    ++$as_echo "$with_builtin_oauth" >&6; }
     +
     +
     +
    @@ configure: $as_echo "$with_ldap" >&6; }
      #
     @@ configure: fi
      
    + fi
      
    - 
    -+if test "$with_oauth" = curl ; then
    ++# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
    ++# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
    ++# dependency on that platform?
    ++if test "$with_builtin_oauth" = curl ; then
     +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
     +$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
     +if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
    @@ configure: fi
     +  LIBS="-lcurl $LIBS"
     +
     +else
    -+  as_fn_error $? "library 'curl' is required for --with-oauth=curl" "$LINENO" 5
    ++  as_fn_error $? "library 'curl' is required for --with-builtin-oauth=curl" "$LINENO" 5
     +fi
     +
     +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
    @@ configure: fi
     +fi
     +fi
     +
    - # for contrib/sepgsql
    - if test "$with_selinux" = yes; then
    -   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for security_compute_create_name in -lselinux" >&5
    + if test "$with_gssapi" = yes ; then
    +   if test "$PORTNAME" != "win32"; then
    +     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
     @@ configure: fi
      
      done
      
     +fi
     +
    -+if test "$with_oauth" = curl; then
    ++if test "$with_builtin_oauth" = curl; then
     +  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
     +if test "x$ac_cv_header_curl_curl_h" = xyes; then :
     +
     +else
    -+  as_fn_error $? "header file <curl/curl.h> is required for OAuth" "$LINENO" 5
    ++  as_fn_error $? "header file <curl/curl.h> is required for --with-builtin-oauth=curl" "$LINENO" 5
     +fi
     +
     +
    @@ configure.ac: AC_MSG_RESULT([$with_ldap])
     +#
     +# OAuth 2.0
     +#
    -+AC_MSG_CHECKING([whether to build with OAuth support])
    -+PGAC_ARG_REQ(with, oauth, [LIB], [use LIB for OAuth 2.0 support (curl)])
    -+if test x"$with_oauth" = x"" ; then
    -+  with_oauth=no
    ++AC_MSG_CHECKING([whether to build with built-in OAuth client support])
    ++PGAC_ARG_REQ(with, builtin-oauth, [LIB], [use LIB for built-in OAuth 2.0 client flows (curl)])
    ++if test x"$with_builtin_oauth" = x"" ; then
    ++  with_builtin_oauth=no
     +fi
     +
    -+if test x"$with_oauth" = x"curl"; then
    -+  AC_DEFINE([USE_OAUTH], 1, [Define to 1 to build with OAuth 2.0 support. (--with-oauth)])
    -+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth support.])
    ++if test x"$with_builtin_oauth" = x"curl"; then
    ++  AC_DEFINE([USE_BUILTIN_OAUTH], 1, [Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth)])
    ++  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth client flows.])
     +  # OAuth requires python for testing
     +  if test "$with_python" != yes; then
     +    AC_MSG_WARN([*** OAuth support tests requires --with-python to run])
     +  fi
    -+elif test x"$with_oauth" != x"no"; then
    -+  AC_MSG_ERROR([--with-oauth must specify curl])
    ++elif test x"$with_builtin_oauth" != x"no"; then
    ++  AC_MSG_ERROR([--with-builtin-oauth must specify curl])
     +fi
     +
    -+AC_MSG_RESULT([$with_oauth])
    -+AC_SUBST(with_oauth)
    ++AC_MSG_RESULT([$with_builtin_oauth])
    ++AC_SUBST(with_builtin_oauth)
     +
     +
      #
      # Bonjour
      #
    -@@ configure.ac: fi
    - AC_SUBST(LDAP_LIBS_FE)
    - AC_SUBST(LDAP_LIBS_BE)
    +@@ configure.ac: failure.  It is possible the compiler isn't looking in the proper directory.
    + Use --without-zlib to disable zlib support.])])
    + fi
      
    -+if test "$with_oauth" = curl ; then
    -+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-oauth=curl])])
    ++# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
    ++# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
    ++# dependency on that platform?
    ++if test "$with_builtin_oauth" = curl ; then
    ++  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-builtin-oauth=curl])])
     +  PGAC_CHECK_LIBCURL
     +fi
     +
    - # for contrib/sepgsql
    - if test "$with_selinux" = yes; then
    -   AC_CHECK_LIB(selinux, security_compute_create_name, [],
    + if test "$with_gssapi" = yes ; then
    +   if test "$PORTNAME" != "win32"; then
    +     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
     @@ configure.ac: elif test "$with_uuid" = ossp ; then
            [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
      fi
      
    -+if test "$with_oauth" = curl; then
    -+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for OAuth])])
    ++if test "$with_builtin_oauth" = curl; then
    ++  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-builtin-oauth=curl])])
     +fi
     +
      if test "$PORTNAME" = "win32" ; then
    @@ doc/src/sgml/installation.sgml: build-postgresql:
             </listitem>
            </varlistentry>
      
    -+      <varlistentry id="configure-option-with-oauth">
    -+       <term><option>--with-oauth=<replaceable>LIBRARY</replaceable></option></term>
    ++      <varlistentry id="configure-option-with-builtin-oauth">
    ++       <term><option>--with-builtin-oauth=<replaceable>LIBRARY</replaceable></option></term>
     +       <listitem>
     +        <para>
    -+         Build with OAuth authentication and authorization support.  The only
    ++         Build with support for OAuth 2.0 client flows.  The only
     +         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
     +         This requires the <productname>curl</productname> package to be
     +         installed.  Building with this will check for the required header files
    @@ doc/src/sgml/installation.sgml: ninja install
            </listitem>
           </varlistentry>
      
    -+     <varlistentry id="configure-with-oauth">
    -+      <term><option>-Doauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
    ++     <varlistentry id="configure-with-builtin-oauth">
    ++      <term><option>-Dbuiltin_oauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
     +      <listitem>
     +       <para>
    -+        Build with OAuth authentication and authorization support.  The only
    ++        Build with support for OAuth 2.0 client flows.  The only
     +        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
     +        This requires the <productname>curl</productname> package to be
     +        installed.  Building with this will check for the required header files
    @@ meson.build: endif
      
      
     +###############################################################
    -+# Library: oauth
    ++# Library: OAuth (libcurl)
     +###############################################################
     +
    -+oauth = not_found_dep
    ++libcurl = not_found_dep
     +oauth_library = 'none'
    -+oauthopt = get_option('oauth')
    ++oauthopt = get_option('builtin_oauth')
     +
     +if oauthopt == 'auto' and auto_features.disabled()
     +  oauthopt = 'none'
    @@ meson.build: endif
     +if oauthopt in ['auto', 'curl']
     +  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
     +  # to explicitly set TLS 1.3 ciphersuites).
    -+  oauth = dependency('libcurl', version: '>= 7.61.0', required: (oauthopt == 'curl'))
    -+
    -+  if oauth.found()
    ++  libcurl = dependency('libcurl', version: '>= 7.61.0',
    ++                       required: (oauthopt == 'curl'))
    ++  if libcurl.found()
     +    oauth_library = 'curl'
    -+    cdata.set('USE_OAUTH', 1)
    ++    cdata.set('USE_BUILTIN_OAUTH', 1)
     +    cdata.set('USE_OAUTH_CURL', 1)
     +  endif
     +endif
     +
    -+if oauthopt == 'auto' and auto_features.enabled() and not oauth.found()
    ++if oauthopt == 'auto' and auto_features.enabled() and not libcurl.found()
     +  error('no OAuth implementation library found')
     +endif
     +
    @@ meson.build: endif
      # Library: Tcl (for pltcl)
      #
     @@ meson.build: libpq_deps += [
    + 
        gssapi,
        ldap_r,
    ++  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
    ++  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
    ++  # dependency on that platform?
    ++  libcurl,
        libintl,
    -+  oauth,
        ssl,
      ]
    - 
     @@ meson.build: if meson.version().version_compare('>=0.57')
    +       'gss': gssapi,
    +       'icu': icu,
    +       'ldap': ldap,
    ++      'libcurl': libcurl,
    +       'libxml': libxml,
    +       'libxslt': libxslt,
            'llvm': llvm,
    -       'lz4': lz4,
    -       'nls': libintl,
    -+      'oauth': oauth,
    -       'openssl': ssl,
    -       'pam': pam,
    -       'plperl': perl_dep,
     
      ## meson_options.txt ##
    -@@ meson_options.txt: option('lz4', type: 'feature', value: 'auto',
    - option('nls', type: 'feature', value: 'auto',
    -   description: 'Native language support')
    +@@ meson_options.txt: option('bonjour', type: 'feature', value: 'auto',
    + option('bsd_auth', type: 'feature', value: 'auto',
    +   description: 'BSD Authentication support')
      
    -+option('oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
    ++option('builtin_oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
     +  value: 'auto',
    -+  description: 'use LIB for OAuth 2.0 support (curl)')
    ++  description: 'use LIB for built-in OAuth 2.0 client flows (curl)')
     +
    - option('pam', type: 'feature', value: 'auto',
    -   description: 'PAM support')
    + option('docs', type: 'feature', value: 'auto',
    +   description: 'Documentation in HTML and man page format')
      
     
      ## src/Makefile.global.in ##
    @@ src/Makefile.global.in: with_ldap	= @with_ldap@
      with_libxml	= @with_libxml@
      with_libxslt	= @with_libxslt@
      with_llvm	= @with_llvm@
    -+with_oauth	= @with_oauth@
    ++with_builtin_oauth = @with_builtin_oauth@
      with_system_tzdata = @with_system_tzdata@
      with_uuid	= @with_uuid@
      with_zlib	= @with_zlib@
    @@ src/include/pg_config.h.in
      /* Define to 1 if you have the `ldap' library (-lldap). */
      #undef HAVE_LIBLDAP
      
    +@@
    + /* Define to 1 to build with BSD Authentication support. (--with-bsd-auth) */
    + #undef USE_BSD_AUTH
    + 
    ++/* Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth) */
    ++#undef USE_BUILTIN_OAUTH
    ++
    + /* Define to build with ICU support. (--with-icu) */
    + #undef USE_ICU
    + 
     @@
      /* Define to select named POSIX semaphores. */
      #undef USE_NAMED_POSIX_SEMAPHORES
      
    -+/* Define to 1 to build with OAuth 2.0 support. (--with-oauth) */
    -+#undef USE_OAUTH
    -+
    -+/* Define to 1 to use libcurl for OAuth support. */
    ++/* Define to 1 to use libcurl for OAuth client flows. */
     +#undef USE_OAUTH_CURL
     +
      /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
    @@ src/include/pg_config.h.in
      
     
      ## src/interfaces/libpq/Makefile ##
    +@@ src/interfaces/libpq/Makefile: endif
    + 
    + OBJS = \
    + 	$(WIN32RES) \
    ++	fe-auth-oauth.o \
    + 	fe-auth-scram.o \
    + 	fe-cancel.o \
    + 	fe-connect.o \
     @@ src/interfaces/libpq/Makefile: OBJS += \
      	fe-secure-gssapi.o
      endif
      
    -+ifneq ($(with_oauth),no)
    -+OBJS += fe-auth-oauth.o
    -+
    -+ifeq ($(with_oauth),curl)
    ++ifeq ($(with_builtin_oauth),curl)
     +OBJS += fe-auth-oauth-curl.o
     +endif
    -+endif
     +
      ifeq ($(PORTNAME), cygwin)
      override shlib = cyg$(NAME)$(DLSUFFIX)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (tok.access_token)
     +	{
    -+		/* Construct our Bearer token. */
    -+		resetPQExpBuffer(&actx->work_data);
    -+		appendPQExpBuffer(&actx->work_data, "Bearer %s", tok.access_token);
    -+
    -+		if (PQExpBufferDataBroken(actx->work_data))
    -+		{
    -+			actx_error(actx, "out of memory");
    -+			goto token_cleanup;
    -+		}
    -+
    -+		*token = strdup(actx->work_data.data);
    -+		if (!*token)
    -+		{
    -+			actx_error(actx, "out of memory");
    -+			goto token_cleanup;
    -+		}
    ++		*token = tok.access_token;
    ++		tok.access_token = NULL;
     +
     +		success = true;
     +		goto token_cleanup;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +#define kvsep "\x01"
     +
    ++/*
    ++ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
    ++ *
    ++ * If discover is true, the token pointer will be ignored and the initial
    ++ * response will instead contain a request for the server's required OAuth
    ++ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
    ++ *
    ++ * Returns the response as a null-terminated string, or NULL on error.
    ++ */
     +static char *
    -+client_initial_response(PGconn *conn, const char *token)
    ++client_initial_response(PGconn *conn, bool discover, const char *token)
     +{
    -+	static const char *const resp_format = "n,," kvsep "auth=%s" kvsep kvsep;
    ++	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
     +
     +	PQExpBufferData buf;
    ++	const char *authn_scheme;
     +	char	   *response = NULL;
     +
    -+	if (!token)
    ++	if (discover)
    ++	{
    ++		/* Parameter discovery uses a completely empty auth value. */
    ++		authn_scheme = token = "";
    ++	}
    ++	else
     +	{
     +		/*
    -+		 * Either programmer error, or something went badly wrong during the
    -+		 * asynchronous fetch.
    -+		 *
    -+		 * TODO: users shouldn't see this; what action should they take if
    -+		 * they do?
    ++		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
    ++		 * space is used as a separator.
     +		 */
    -+		libpq_append_conn_error(conn, "no OAuth token was set for the connection");
    -+		return NULL;
    ++		authn_scheme = "Bearer ";
    ++
    ++		/* We must have a token. */
    ++		if (!token)
    ++		{
    ++			/*
    ++			 * Either programmer error, or something went badly wrong during
    ++			 * the asynchronous fetch.
    ++			 *
    ++			 * TODO: users shouldn't see this; what action should they take if
    ++			 * they do?
    ++			 */
    ++			libpq_append_conn_error(conn, "no OAuth token was set for the connection");
    ++			return NULL;
    ++		}
     +	}
     +
     +	initPQExpBuffer(&buf);
    -+	appendPQExpBuffer(&buf, resp_format, token);
    ++	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
     +
     +	if (!PQExpBufferDataBroken(buf))
     +		response = strdup(buf.data);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		 * onto the original string, since it may not be safe for us to free()
     +		 * it.)
     +		 */
    -+		PQExpBufferData token;
    -+
     +		if (!request->token)
     +		{
     +			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
     +			return PGRES_POLLING_FAILED;
     +		}
     +
    -+		initPQExpBuffer(&token);
    -+		appendPQExpBuffer(&token, "Bearer %s", request->token);
    -+
    -+		if (PQExpBufferDataBroken(token))
    ++		state->token = strdup(request->token);
    ++		if (!state->token)
     +		{
     +			libpq_append_conn_error(conn, "out of memory");
     +			return PGRES_POLLING_FAILED;
     +		}
     +
    -+		state->token = token.data;
     +		return PGRES_POLLING_OK;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			 * hold onto the original string, since it may not be safe for us
     +			 * to free() it.)
     +			 */
    -+			PQExpBufferData token;
    -+
    -+			initPQExpBuffer(&token);
    -+			appendPQExpBuffer(&token, "Bearer %s", request.token);
    -+
    -+			if (PQExpBufferDataBroken(token))
    ++			state->token = strdup(request.token);
    ++			if (!state->token)
     +			{
     +				libpq_append_conn_error(conn, "out of memory");
     +				goto fail;
     +			}
     +
    -+			state->token = token.data;
    -+
     +			/* short-circuit */
     +			if (request.cleanup)
     +				request.cleanup(conn, &request);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +	else
     +	{
    ++#if USE_BUILTIN_OAUTH
     +		/*
    -+		 * Use our built-in OAuth flow.
    ++		 * Hand off to our built-in OAuth flow.
     +		 *
     +		 * Only allow one try per connection, since we're not performing any
     +		 * caching at the moment. (Custom flows might be more sophisticated.)
     +		 */
     +		conn->async_auth = pg_fe_run_oauth_flow;
     +		conn->oauth_want_retry = PG_BOOL_NO;
    ++
    ++#else
    ++		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth");
    ++		goto fail;
    ++
    ++#endif
     +	}
     +
     +	return true;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	fe_oauth_state *state = opaq;
     +	PGconn	   *conn = state->conn;
    ++	bool		discover = false;
     +
     +	*output = NULL;
     +	*outputlen = 0;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	switch (state->state)
     +	{
     +		case FE_OAUTH_INIT:
    ++			/* We begin in the initial response phase. */
     +			Assert(inputlen == -1);
     +
     +			if (!derive_discovery_uri(conn))
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			{
     +				/*
     +				 * If we don't have a discovery URI to be able to request a
    -+				 * token, we ask the server for one explicitly with an empty
    -+				 * token. This doesn't require any asynchronous work.
    ++				 * token, we ask the server for one explicitly. This doesn't
    ++				 * require any asynchronous work.
     +				 */
    -+				state->token = strdup("");
    -+				if (!state->token)
    -+				{
    -+					libpq_append_conn_error(conn, "out of memory");
    -+					return SASL_FAILED;
    -+				}
    ++				discover = true;
     +			}
     +
     +			/* fall through */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			/* We should still be in the initial response phase. */
     +			Assert(inputlen == -1);
     +
    -+			*output = client_initial_response(conn, state->token);
    ++			*output = client_initial_response(conn, discover, state->token);
     +			if (!*output)
     +				return SASL_FAILED;
     +
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      			conn->sasl = &pg_scram_mech;
      			conn->password_needed = true;
      		}
    -+#ifdef USE_OAUTH
     +		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
     +				 !selected_mechanism)
     +		{
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
     +			conn->sasl = &pg_oauth_mech;
     +			conn->password_needed = false;
     +		}
    -+#endif
      	}
      
      	if (!selected_mechanism)
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
      					/* Check to see if we should mention pgpassfile */
      					pgpassfileWarning(conn);
      
    -+#ifdef USE_OAUTH
    ++					/*
    ++					 * OAuth connections may perform two-step discovery, where
    ++					 * the first connection is a dummy.
    ++					 */
     +					if (conn->sasl == &pg_oauth_mech
     +						&& conn->oauth_want_retry == PG_BOOL_YES)
     +					{
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +						need_new_connection = true;
     +						goto keep_going;
     +					}
    -+#endif
     +
      					CONNECTION_FAILED();
      				}
    @@ src/interfaces/libpq/fe-misc.c: pqSocketCheck(PGconn *conn, int forRead, int for
      	if (result < 0)
     
      ## src/interfaces/libpq/libpq-fe.h ##
    +@@ src/interfaces/libpq/libpq-fe.h: extern "C"
    +  */
    + #include "postgres_ext.h"
    + 
    ++#ifdef WIN32
    ++#include <winsock2.h>			/* for SOCKET */
    ++#endif
    ++
    + /*
    +  * These symbols may be used in compile-time #ifdef tests for the availability
    +  * of v14-and-newer libpq features.
     @@ src/interfaces/libpq/libpq-fe.h: extern "C"
      /* Features added in PostgreSQL v18: */
      /* Indicates presence of PQfullProtocolVersion */
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	const char *user_code;		/* user code to enter */
     +} PQpromptOAuthDevice;
     +
    ++/* for _PQoauthBearerRequest.async() */
    ++#ifdef WIN32
    ++#define SOCKTYPE SOCKET
    ++#else
    ++#define SOCKTYPE int
    ++#endif
    ++
     +typedef struct _PQoauthBearerRequest
     +{
     +	/* Hook inputs (constant across all calls) */
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 */
     +	PostgresPollingStatusType (*async) (PGconn *conn,
     +										struct _PQoauthBearerRequest *request,
    -+										int *altsock);
    ++										SOCKTYPE *altsock);
     +
     +	/*
     +	 * Callback to clean up custom allocations. A hook implementation may use
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 */
     +	void	   *user;
     +} PQoauthBearerRequest;
    ++
    ++#undef SOCKTYPE
     +
      extern char *PQencryptPassword(const char *passwd, const char *user);
      extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      	PGLoadBalanceType load_balance_type;	/* desired load balancing
     
      ## src/interfaces/libpq/meson.build ##
    +@@
    + # args for executables (which depend on libpq).
    + 
    + libpq_sources = files(
    ++  'fe-auth-oauth.c',
    +   'fe-auth-scram.c',
    +   'fe-auth.c',
    +   'fe-cancel.c',
     @@ src/interfaces/libpq/meson.build: if gssapi.found()
        )
      endif
      
    -+if oauth.found()
    -+  libpq_sources += files('fe-auth-oauth.c')
    -+  if oauth_library == 'curl'
    -+    libpq_sources += files('fe-auth-oauth-curl.c')
    -+  endif
    ++if oauth_library == 'curl'
    ++  libpq_sources += files('fe-auth-oauth-curl.c')
     +endif
     +
      export_file = custom_target('libpq.exports',
    @@ src/interfaces/libpq/pqexpbuffer.h: extern void initPQExpBuffer(PQExpBuffer str)
       *		Reset a PQExpBuffer to empty
     
      ## src/makefiles/meson.build ##
    +@@ src/makefiles/meson.build: pgxs_kv = {
    +   'SUN_STUDIO_CC': 'no', # not supported so far
    + 
    +   # want the chosen option, rather than the library
    ++  'with_builtin_oauth' : oauth_library,
    +   'with_ssl' : ssl_library,
    +   'with_uuid': uuidopt,
    + 
     @@ src/makefiles/meson.build: pgxs_deps = {
    +   'gssapi': gssapi,
    +   'icu': icu,
    +   'ldap': ldap,
    ++  'libcurl': libcurl,
    +   'libxml': libxml,
    +   'libxslt': libxslt,
        'llvm': llvm,
    -   'lz4': lz4,
    -   'nls': libintl,
    -+  'oauth': oauth,
    -   'pam': pam,
    -   'perl': perl_dep,
    -   'python': python3_dep,
     
      ## src/test/modules/Makefile ##
     @@ src/test/modules/Makefile: SUBDIRS = \
    @@ src/test/modules/oauth_validator/Makefile (new)
     +MODULES = validator
     +PGFILEDESC = "validator - test OAuth validator module"
     +
    ++PROGRAM = oauth_hook_client
    ++PGAPPICON = win32
    ++OBJS = $(WIN32RES) oauth_hook_client.o
    ++
    ++PG_CPPFLAGS = -I$(libpq_srcdir)
    ++PG_LIBS_INTERNAL += $(libpq_pgport)
    ++
     +NO_INSTALLCHECK = 1
     +
     +TAP_TESTS = 1
    @@ src/test/modules/oauth_validator/Makefile (new)
     +include $(top_srcdir)/contrib/contrib-global.mk
     +
     +export PYTHON
    -+export with_oauth
    ++export with_builtin_oauth
     +export with_python
     +
     +endif
    @@ src/test/modules/oauth_validator/meson.build (new)
     +)
     +test_install_libs += validator
     +
    ++oauth_hook_client_sources = files(
    ++  'oauth_hook_client.c',
    ++)
    ++
    ++if host_system == 'windows'
    ++  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
    ++    '--NAME', 'oauth_hook_client',
    ++    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
    ++endif
    ++
    ++oauth_hook_client = executable('oauth_hook_client',
    ++  oauth_hook_client_sources,
    ++  dependencies: [frontend_code, libpq],
    ++  kwargs: default_bin_args + {
    ++    'install': false,
    ++  },
    ++)
    ++testprep_targets += oauth_hook_client
    ++
     +tests += {
     +  'name': 'oauth_validator',
     +  'sd': meson.current_source_dir(),
    @@ src/test/modules/oauth_validator/meson.build (new)
     +  'tap': {
     +    'tests': [
     +      't/001_server.pl',
    ++      't/002_client.pl',
     +    ],
     +    'env': {
     +      'PYTHON': python.path(),
    -+      'with_oauth': oauth_library,
    ++      'with_builtin_oauth': oauth_library,
     +      'with_python': 'yes',
     +    },
     +  },
    ++}
    +
    + ## src/test/modules/oauth_validator/oauth_hook_client.c (new) ##
    +@@
    ++/*-------------------------------------------------------------------------
    ++ *
    ++ * oauth_hook_client.c
    ++ *		Verify OAuth hook functionality in libpq
    ++ *
    ++ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 1994, Regents of the University of California
    ++ *
    ++ *
    ++ * IDENTIFICATION
    ++ *		src/test/modules/oauth_validator/oauth_hook_client.c
    ++ *
    ++ *-------------------------------------------------------------------------
    ++ */
    ++
    ++#include "postgres_fe.h"
    ++
    ++#include <stdio.h>
    ++#include <stdlib.h>
    ++
    ++#include "getopt_long.h"
    ++#include "libpq-fe.h"
    ++
    ++static int	handle_auth_data(PGAuthData type, PGconn *conn, void *data);
    ++
    ++static void
    ++usage(char *argv[])
    ++{
    ++	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
    ++
    ++	fprintf(stderr, "recognized flags:\n");
    ++	fprintf(stderr, " -h, --help				show this message\n");
    ++	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
    ++	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
    ++	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
    ++	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
    ++}
    ++
    ++static bool no_hook = false;
    ++static const char *expected_uri = NULL;
    ++static const char *expected_scope = NULL;
    ++static char *token = NULL;
    ++
    ++int
    ++main(int argc, char *argv[])
    ++{
    ++	static const struct option long_options[] = {
    ++		{"help", no_argument, NULL, 'h'},
    ++
    ++		{"expected-scope", required_argument, NULL, 1000},
    ++		{"expected-uri", required_argument, NULL, 1001},
    ++		{"no-hook", no_argument, NULL, 1002},
    ++		{"token", required_argument, NULL, 1003},
    ++		{0}
    ++	};
    ++
    ++	const char *conninfo;
    ++	PGconn	   *conn;
    ++	int			c;
    ++
    ++	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
    ++	{
    ++		switch (c)
    ++		{
    ++			case 'h':
    ++				usage(argv);
    ++				return 0;
    ++
    ++			case 1000:			/* --expected-scope */
    ++				expected_scope = optarg;
    ++				break;
    ++
    ++			case 1001:			/* --expected-uri */
    ++				expected_uri = optarg;
    ++				break;
    ++
    ++			case 1002:			/* --no-hook */
    ++				no_hook = true;
    ++				break;
    ++
    ++			case 1003:			/* --token */
    ++				token = optarg;
    ++				break;
    ++
    ++			default:
    ++				usage(argv);
    ++				return 1;
    ++		}
    ++	}
    ++
    ++	if (argc != optind + 1)
    ++	{
    ++		usage(argv);
    ++		return 1;
    ++	}
    ++
    ++	conninfo = argv[optind];
    ++
    ++	/* Set up our OAuth hooks. */
    ++	PQsetAuthDataHook(handle_auth_data);
    ++
    ++	/* Connect. (All the actual work is in the hook.) */
    ++	conn = PQconnectdb(conninfo);
    ++	if (PQstatus(conn) != CONNECTION_OK)
    ++	{
    ++		fprintf(stderr, "Connection to database failed: %s\n",
    ++				PQerrorMessage(conn));
    ++		PQfinish(conn);
    ++		return 1;
    ++	}
    ++
    ++	printf("connection succeeded\n");
    ++	PQfinish(conn);
    ++	return 0;
    ++}
    ++
    ++static int
    ++handle_auth_data(PGAuthData type, PGconn *conn, void *data)
    ++{
    ++	PQoauthBearerRequest *req = data;
    ++
    ++	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
    ++		return 0;
    ++
    ++	if (expected_uri)
    ++	{
    ++		if (!req->openid_configuration)
    ++		{
    ++			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
    ++			return -1;
    ++		}
    ++
    ++		if (strcmp(expected_uri, req->openid_configuration) != 0)
    ++		{
    ++			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
    ++			return -1;
    ++		}
    ++	}
    ++
    ++	if (expected_scope)
    ++	{
    ++		if (!req->scope)
    ++		{
    ++			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
    ++			return -1;
    ++		}
    ++
    ++		if (strcmp(expected_scope, req->scope) != 0)
    ++		{
    ++			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
    ++			return -1;
    ++		}
    ++	}
    ++
    ++	req->token = token;
    ++	return 1;
     +}
     
      ## src/test/modules/oauth_validator/t/001_server.pl (new) ##
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
     +}
     +
    -+if ($ENV{with_oauth} ne 'curl')
    ++if ($ENV{with_builtin_oauth} ne 'curl')
     +{
     +	plan skip_all => 'client-side OAuth not supported by this build';
     +}
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +
     +$node->stop;
     +
    ++done_testing();
    +
    + ## src/test/modules/oauth_validator/t/002_client.pl (new) ##
    +@@
    ++
    ++# Copyright (c) 2021-2024, PostgreSQL Global Development Group
    ++
    ++use strict;
    ++use warnings FATAL => 'all';
    ++
    ++use JSON::PP     qw(encode_json);
    ++use MIME::Base64 qw(encode_base64);
    ++use PostgreSQL::Test::Cluster;
    ++use PostgreSQL::Test::Utils;
    ++use PostgreSQL::Test::OAuthServer;
    ++use Test::More;
    ++
    ++if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
    ++{
    ++	plan skip_all =>
    ++	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
    ++}
    ++
    ++#
    ++# Cluster Setup
    ++#
    ++
    ++my $node = PostgreSQL::Test::Cluster->new('primary');
    ++$node->init;
    ++$node->append_conf('postgresql.conf', "log_connections = on\n");
    ++$node->append_conf('postgresql.conf',
    ++	"oauth_validator_library = 'validator'\n");
    ++$node->start;
    ++
    ++$node->safe_psql('postgres', 'CREATE USER test;');
    ++
    ++my $issuer = "https://127.0.0.1:54321";
    ++my $scope = "openid postgres";
    ++
    ++unlink($node->data_dir . '/pg_hba.conf');
    ++$node->append_conf(
    ++	'pg_hba.conf', qq{
    ++local all test oauth issuer="$issuer" scope="$scope"
    ++});
    ++$node->reload;
    ++
    ++my ($log_start, $log_end);
    ++$log_start = $node->wait_for_log(qr/reloading configuration files/);
    ++
    ++$ENV{PGOAUTHDEBUG} = "UNSAFE";
    ++
    ++#
    ++# Tests
    ++#
    ++
    ++my $user = "test";
    ++my $base_connstr = $node->connstr() . " user=$user";
    ++my $common_connstr = "$base_connstr oauth_client_id=myID";
    ++
    ++sub test
    ++{
    ++	my ($test_name, %params) = @_;
    ++
    ++	my $flags = [];
    ++	if (defined($params{flags}))
    ++	{
    ++		$flags = $params{flags};
    ++	}
    ++
    ++	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
    ++	diag "running '" . join("' '", @cmd) . "'";
    ++
    ++	my ($stdout, $stderr) = run_command(\@cmd);
    ++
    ++	if (defined($params{expected_stdout}))
    ++	{
    ++		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
    ++	}
    ++
    ++	if (defined($params{expected_stderr}))
    ++	{
    ++		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
    ++	}
    ++	else
    ++	{
    ++		is($stderr, "", "$test_name: no stderr");
    ++	}
    ++}
    ++
    ++test(
    ++	"basic synchronous hook can provide a token",
    ++	flags => [
    ++		"--token", "my-token",
    ++		"--expected-uri", "$issuer/.well-known/openid-configuration",
    ++		"--expected-scope", $scope,
    ++	],
    ++	expected_stdout => qr/connection succeeded/);
    ++
    ++$node->log_check("validator receives correct token",
    ++	$log_start,
    ++	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
    ++
    ++if ($ENV{with_builtin_oauth} ne 'curl')
    ++{
    ++	# libpq should help users out if no OAuth support is built in.
    ++	test(
    ++		"fails without custom hook installed",
    ++		flags => ["--no-hook"],
    ++		expected_stderr =>
    ++		  qr/no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth/
    ++	);
    ++}
    ++
     +done_testing();
     
      ## src/test/modules/oauth_validator/t/oauth_server.py (new) ##
2:  cca5de6726 ! 2:  01df79980b DO NOT MERGE: Add pytest suite for OAuth
    @@ .cirrus.tasks.yml: env:
        MTEST_ARGS: --print-errorlogs --no-rebuild -C build
        PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
        TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
    --  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
    -+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance python
    +-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
    ++  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
      
      
      # What files to preserve in case tests fail
    @@ .cirrus.tasks.yml: task:
            configure_32_script: |
              su postgres <<-EOF
                export CC='ccache gcc -m32'
    -@@ .cirrus.tasks.yml: task:
    -             -Dllvm=disabled \
    -             --pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
    -             -DPERL=perl5.36-i386-linux-gnu \
    --            -DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
    -+            -DPG_TEST_EXTRA="${PG_TEST_EXTRA//"python"}" \
    -             build-32
    -         EOF
    - 
    ++          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
    +           meson setup \
    +             --buildtype=debug \
    +             -Dcassert=true -Dinjection_points=true \
     
      ## meson.build ##
     @@ meson.build: else
    @@ src/test/python/client/test_oauth.py (new)
     +# The client tests need libpq to have been compiled with OAuth support; skip
     +# them otherwise.
     +pytestmark = pytest.mark.skipif(
    -+    os.getenv("with_oauth") == "none",
    -+    reason="OAuth client tests require --with-oauth support",
    ++    os.getenv("with_builtin_oauth") == "none",
    ++    reason="OAuth client tests require --with-builtin-oauth support",
     +)
     +
     +if platform.system() == "Darwin":
    @@ src/test/python/meson.build (new)
     +subdir('server')
     +
     +pytest_env = {
    -+  'with_oauth': oauth_library,
    ++  'with_builtin_oauth': oauth_library,
     +
     +  # Point to the default database; the tests will create their own databases as
     +  # needed.
v36-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 5730b875b8cda1e2d33b76036934fc522683d9f8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v36 1/2] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).
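For reference, the OAUTHBEARER client initial response (RFC 7628, Sec. 3.1)
that the client sends has a simple layout; here is a minimal standalone
Python model of it (kvsep is the 0x01 byte, and parameter discovery uses an
empty auth value, matching client_initial_response() in the patch):

```python
KVSEP = "\x01"  # RFC 7628 key/value separator

def client_initial_response(token=None):
    """Model of the OAUTHBEARER initial response.

    With a token, the auth value carries a Bearer scheme (RFC 6750);
    without one, an empty auth value asks the server for its required
    OAuth parameters (RFC 7628, Sec. 4.3).
    """
    auth = "Bearer %s" % token if token is not None else ""
    return "n,,%sauth=%s%s%s" % (KVSEP, auth, KVSEP, KVSEP)

print(repr(client_initial_response("my-token")))
# 'n,,\x01auth=Bearer my-token\x01\x01'
print(repr(client_initial_response()))
# 'n,,\x01auth=\x01\x01'
```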

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-builtin-oauth/-Dbuiltin_oauth during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.
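The delegation contract described above can be sketched in a standalone
Python model (the names mirror the libpq API for readability, but the enum
values and signatures here are illustrative stand-ins, not the C
implementation):

```python
# Model of the PQsetAuthDataHook() delegation contract: a hook that
# recognizes a type returns > 0 (handled) or < 0 (error); anything it
# doesn't recognize is passed to the previous hook in the chain.

PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0  # arbitrary stand-in values
PQAUTHDATA_OAUTH_BEARER_TOKEN = 1

def default_hook(type_, conn, data):
    return 0  # built-in behavior: nothing overridden

_current_hook = default_hook

def PQsetAuthDataHook(hook):
    global _current_hook
    _current_hook = hook

def PQgetAuthDataHook():
    return _current_hook

# Capture the previous hook before installing ours, so we can delegate.
_previous = PQgetAuthDataHook()

def my_hook(type_, conn, data):
    if type_ != PQAUTHDATA_OAUTH_BEARER_TOKEN:
        return _previous(type_, conn, data)  # delegate down the chain
    data["token"] = "my-token"  # hand libpq a bearer token synchronously
    return 1  # handled

PQsetAuthDataHook(my_hook)
```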

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, set the authorized member, optionally in
      combination with the HBA option trust_validator_authz=1 (see
      below).

      The hard part is determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise for the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

= OAuth HBA Method =

The oauth method supports the following HBA options (note that the
first two are required, since there are no sensible defaults to
choose):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of this writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
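For illustration, a pg_hba.conf sketch using these options; the issuer,
scope, and map name are placeholder values:

```
# TYPE  DATABASE  USER  ADDRESS  METHOD  OPTIONS
# Authentication with a user map:
host    all       all   samenet  oauth   issuer="https://accounts.google.com" scope="openid email" map=oauthmap
# Pseudonymous authorization, delegated entirely to the validator:
host    all       all   samenet  oauth   issuer="https://accounts.google.com" scope="openid email" trust_validator_authz=1
```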

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   17 +-
 config/programs.m4                            |   19 +
 configure                                     |  153 ++
 configure.ac                                  |   36 +
 doc/src/sgml/client-auth.sgml                 |  145 +
 doc/src/sgml/config.sgml                      |   17 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   29 +
 doc/src/sgml/libpq.sgml                       |  128 +
 doc/src/sgml/oauth-validators.sgml            |  140 +
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   34 +
 meson_options.txt                             |    4 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  727 +++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   54 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    6 +-
 src/include/libpq/oauth.h                     |   49 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2392 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          |  687 +++++
 src/interfaces/libpq/fe-auth-oauth.h          |   42 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  103 +-
 src/interfaces/libpq/fe-auth.h                |    9 +-
 src/interfaces/libpq/fe-connect.c             |   88 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   88 +
 src/interfaces/libpq/libpq-int.h              |   15 +
 src/interfaces/libpq/meson.build              |    5 +
 src/interfaces/libpq/pqexpbuffer.c            |    2 +-
 src/interfaces/libpq/pqexpbuffer.h            |    6 +
 src/makefiles/meson.build                     |    2 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/meson.build  |   53 +
 .../oauth_validator/oauth_hook_client.c       |  157 ++
 .../modules/oauth_validator/t/001_server.pl   |  352 +++
 .../modules/oauth_validator/t/002_client.pl   |  110 +
 .../modules/oauth_validator/t/oauth_server.py |  359 +++
 src/test/modules/oauth_validator/validator.c  |  100 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |   65 +
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   15 +
 58 files changed, 6368 insertions(+), 59 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fc413eb11e..26e747d559 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         --buildtype=debug \
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
+        -Dbuiltin_oauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -223,6 +224,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-builtin-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -235,6 +237,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Dbuiltin_oauth=curl
   -Duuid=e2fs
 
 
@@ -312,8 +315,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -687,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 6e256b417b..ee6476583c 100755
--- a/configure
+++ b/configure
@@ -715,6 +715,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_builtin_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -861,6 +862,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_builtin_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1570,6 +1572,8 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-builtin-oauth=LIB
+                          use LIB for built-in OAuth 2.0 client flows (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8410,6 +8414,57 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with built-in OAuth client support" >&5
+$as_echo_n "checking whether to build with built-in OAuth client support... " >&6; }
+
+
+
+# Check whether --with-builtin-oauth was given.
+if test "${with_builtin_oauth+set}" = set; then :
+  withval=$with_builtin_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_builtin_oauth" = x"" ; then
+  with_builtin_oauth=no
+fi
+
+if test x"$with_builtin_oauth" = x"curl"; then
+
+$as_echo "#define USE_BUILTIN_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+elif test x"$with_builtin_oauth" != x"no"; then
+  as_fn_error $? "--with-builtin-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_builtin_oauth" >&5
+$as_echo "$with_builtin_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12107,6 +12162,93 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_builtin_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-builtin-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13949,6 +14091,17 @@ fi
 
 done
 
+fi
+
+if test "$with_builtin_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-builtin-oauth=curl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 3992694dac..46932b32c8 100644
--- a/configure.ac
+++ b/configure.ac
@@ -919,6 +919,30 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with built-in OAuth client support])
+PGAC_ARG_REQ(with, builtin-oauth, [LIB], [use LIB for built-in OAuth 2.0 client flows (curl)])
+if test x"$with_builtin_oauth" = x"" ; then
+  with_builtin_oauth=no
+fi
+
+if test x"$with_builtin_oauth" = x"curl"; then
+  AC_DEFINE([USE_BUILTIN_OAUTH], 1, [Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth client flows.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+elif test x"$with_builtin_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-builtin-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_builtin_oauth])
+AC_SUBST(with_builtin_oauth)
+
+
 #
 # Bonjour
 #
@@ -1288,6 +1312,14 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_builtin_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-builtin-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1584,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_builtin_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-builtin-oauth=curl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..c5d1a1fe69 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,135 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients in connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after the authenticated resource
+       owner has given approval.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources which are
+       accessed by the client. The <productname>PostgreSQL</productname> cluster
+       being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
+        must contact to receive a bearer token.  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped, and
+        the OAuth validator takes full responsibility for mapping end user
+        identities to database roles.  If the validator authorizes the token,
+        the server trusts that the user is allowed to connect under the
+        requested role, and the connection is allowed to proceed regardless of
+        the authentication status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>trust_validator_authz</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d54f904956..d8e3e153c3 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,23 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library to use for validating OAuth connection tokens. If set to
+        an empty string (the default), OAuth connections will be refused. For
+        more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 5621606f59..c4e06c53f6 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1072,6 +1072,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-builtin-oauth">
+       <term><option>--with-builtin-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with support for OAuth 2.0 client flows.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2516,6 +2530,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-builtin-oauth">
+      <term><option>-Dbuiltin_oauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with support for OAuth 2.0 client flows.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index bfefb1289e..87d612fddf 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,90 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is optional and intended for
+        advanced usage; see also <xref linkend="libpq-connect-oauth-scope"/>.
+       </para>
+       <para>
+        If no <literal>oauth_issuer</literal> is provided, the client will ask
+        the <productname>PostgreSQL</productname> server to provide an
+        acceptable issuer URL (as configured in its
+        <link linkend="auth-oauth">HBA settings</link>). This is convenient, but
+        it requires two separate network connections to the server per attempt.
+       </para>
+       <para>
+        Providing an explicit <literal>oauth_issuer</literal> (and, typically,
+        an accompanying <literal>oauth_scope</literal>) skips this initial
+        "discovery" phase, which may speed up certain custom OAuth flows.
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        This parameter may also be set defensively, to prevent the backend
+        server from directing the client to arbitrary URLs.
+        <emphasis>However:</emphasis> if the client's issuer setting differs
+        from the server's expected issuer, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
+        If neither <xref linkend="libpq-connect-oauth-issuer"/> nor
+        <literal>oauth_scope</literal> is specified, the client will obtain
+        appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. Otherwise, the value of
+        this parameter will be used. Similarly to
+        <literal>oauth_issuer</literal>, if the client's scope setting does not
+        contain the server's required scopes, the server is likely to reject the
+        issued token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
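As a worked example of the parameters above, a client that already knows its provider can pin the issuer and scope directly in the connection string, skipping the initial discovery round-trip. The host, client ID, issuer URL, and scope values below are all hypothetical placeholders:

```shell
# Hypothetical values; substitute your provider's issuer and required scopes.
# A conninfo value containing spaces is single-quoted inside the string.
psql "host=db.example.org dbname=postgres \
      oauth_client_id=your-client-id \
      oauth_issuer=https://issuer.example.org \
      oauth_scope='openid postgres'"
```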
@@ -9963,6 +10047,50 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..38dc3c82ef
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,140 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+ </para>
+ <para>
+  An OAuth validation module must consist of at least an initialization
+  function (see <xref linkend="oauth-validator-init"/>) and the required
+  callback for performing validation (see
+  <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading a shared library
+   whose base name is given by <xref linkend="guc-oauth-validator-library"/>.
+   The normal library search path is used to locate the library. To provide
+   the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be defined. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must remain valid for the lifetime of the backend, which is
+   typically achieved by defining it as a <literal>static const</literal>
+   variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server calls them as required to process the authentication
+   request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory; the validator module
+    must not access the memory after it has been returned.  A validator may
+    instead return NULL to signal an internal error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 9a98f0c86a..d354ec5e8b 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: OAuth (libcurl)
+###############################################################
+
+libcurl = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('builtin_oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0',
+                       required: (oauthopt == 'curl'))
+  if libcurl.found()
+    oauth_library = 'curl'
+    cdata.set('USE_BUILTIN_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not libcurl.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3037,6 +3066,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3705,6 +3738,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index 3893519639..453429ab86 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -79,6 +79,10 @@ option('bonjour', type: 'feature', value: 'auto',
 option('bsd_auth', type: 'feature', value: 'auto',
   description: 'BSD Authentication support')
 
+option('builtin_oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for built-in OAuth 2.0 client flows (curl)')
+
 option('docs', type: 'feature', value: 'auto',
   description: 'Documentation in HTML and man page format')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 4859343153..c08f20d109 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_builtin_oauth = @with_builtin_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..e7c3a721db
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,727 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+
+/* GUC */
+char	   *OAuthValidatorLibrary = "";
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(void);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library();
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*------
+	 * Build the .well-known URI based on our issuer.
+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
+	 * have to make this configurable too.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correctly formatted token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	if (!ret->authorized)
+	{
+		status = false;
+		goto cleanup;
+	}
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(void)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	if (OAuthValidatorLibrary[0] == '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				errmsg("oauth_validator_library is not set"));
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(OAuthValidatorLibrary,
+							   "_PG_oauth_validator_module_init", false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol \"%s\"",
+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..17032fd812 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -114,7 +114,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1748,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2042,32 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/*
+		 * Supplying a usermap while also requesting that usermap checks be
+		 * skipped is contradictory and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2095,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2480,24 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
+		hbaline->oauth_skip_usermap = (strcmp(val, "1") == 0);
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
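To make the new hba options concrete, here is a hypothetical pg_hba.conf excerpt using the option names this patch introduces (issuer, scope, trust_validator_authz, map); the issuer URL and map name are illustrative only:

```
# Authentication only: the validator supplies an identity, which is then
# checked against the target role via the "oauthmap" pg_ident mapping.
host  all  all  0.0.0.0/0  oauth  issuer="https://oauth.example.org" scope="openid" map=oauthmap

# Authorization only: trust the validator's authorization decision outright,
# bypassing pg_ident (pseudonymous connections are possible).
host  all  all  0.0.0.0/0  oauth  issuer="https://oauth.example.org" scope="openid" trust_validator_authz=1
```

Note that combining map with trust_validator_authz is rejected by parse_hba_line() as a configuration error.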
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8a67f01200..d4aad280ea 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4794,6 +4795,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
+		},
+		&OAuthValidatorLibrary,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..53c999a69f 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,9 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..6f98e84cc9
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,49 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *OAuthValidatorLibrary;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+#endif							/* PG_OAUTH_H */
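The decision logic in the backend's validate() above can be summarized as a pure function over the ValidatorModuleResult declared in oauth.h. The following self-contained sketch copies that struct locally and reduces check_usermap() to an exact-match comparison; decide() is a hypothetical helper, not part of the patch:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Local copy of the result type from the patch's oauth.h. */
typedef struct ValidatorModuleResult
{
	bool		authorized;
	char	   *authn_id;
} ValidatorModuleResult;

/*
 * Mirrors validate(): authorization is checked first; then either the
 * validator is trusted outright (trust_validator_authz), or an authenticated
 * identity is required and checked against the target role. The real code
 * calls check_usermap(); here that is reduced to strcmp() for illustration.
 */
static bool
decide(const ValidatorModuleResult *ret, const char *role, bool skip_usermap)
{
	if (!ret->authorized)
		return false;

	if (skip_usermap)
		return true;			/* validator is the authorization authority */

	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
		return false;			/* validator did not authenticate the user */

	return strcmp(ret->authn_id, role) == 0;	/* stand-in for check_usermap() */
}
```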
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index cdd9a6e935..dd59ddb198 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -223,6 +223,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -672,6 +675,9 @@
 /* Define to 1 to build with BSD Authentication support. (--with-bsd-auth) */
 #undef USE_BSD_AUTH
 
+/* Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth) */
+#undef USE_BUILTIN_OAUTH
+
 /* Define to build with ICU support. (--with-icu) */
 #undef USE_ICU
 
@@ -700,6 +706,9 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to use libcurl for OAuth client flows. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..6502059d16 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_builtin_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..99f5d11f8d
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2392 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
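The device_authz struct above carries the polling interval from the RFC 8628 Device Authorization response. The polling rules themselves are small enough to state as a pure function: default to 5 seconds when the response omits the interval, and back off by 5 seconds on a "slow_down" error. next_poll_interval() is a hypothetical helper illustrating those rules, not code from this patch:

```c
#include <stdbool.h>

/*
 * RFC 8628 polling rules: the client polls the token endpoint every
 * "interval" seconds, defaulting to 5 if the device authorization response
 * did not supply one (section 3.2), and increases the interval by 5 seconds
 * whenever the endpoint returns a "slow_down" error (section 3.5).
 */
static int
next_poll_interval(int current_interval, bool got_slow_down)
{
	if (current_interval <= 0)
		current_interval = 5;	/* default per RFC 8628 section 3.2 */

	if (got_slow_down)
		current_interval += 5;	/* back off per RFC 8628 section 3.5 */

	return current_interval;
}
```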
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need a length-limited comparison here: the Content-Type header may
+	 * validly continue past the expected media type, so we can't compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
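The acceptance rules above (exact match, or the expected type followed only by optional whitespace and a `;` parameter list) can be illustrated in isolation. A minimal sketch, detached from the Curl context; `media_type_matches` is a hypothetical name, and note that, as in the code above, trailing whitespace with no parameter list is rejected:

```c
#include <string.h>
#include <strings.h>			/* strncasecmp */
#include <stdbool.h>

/* Sketch of the Content-Type check: accept an exact match, or the
 * expected type followed by HTTP optional whitespace and parameters. */
static bool
media_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	for (size_t i = type_len; content_type[i]; ++i)
	{
		if (content_type[i] == ';')
			return true;		/* media type parameters follow; accept */
		if (content_type[i] != ' ' && content_type[i] != '\t')
			return false;		/* unexpected character after the type */
	}

	/* trailing whitespace without a parameter list is rejected, as above */
	return content_type[type_len] == '\0';
}
```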
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
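The ceil-then-clamp behavior above can be sketched in isolation. This is a simplified illustration (no JSON lexing, no PGOAUTHDEBUG override of the lower bound); `clamp_interval` is a hypothetical name, and it rounds up without libm:

```c
#include <limits.h>

/* Sketch of the interval sanity checks: round fractional seconds up to
 * the next whole second, then clamp the result to [1, INT_MAX]. */
static int
clamp_interval(double seconds)
{
	if (seconds >= (double) INT_MAX)
		return INT_MAX;
	if (seconds <= 1.0)
		return 1;				/* avoid an expensive polling loop */

	/* round fractional intervals up to the next whole second */
	{
		int			whole = (int) seconds;

		return (seconds > (double) whole) ? whole + 1 : whole;
	}
}
```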
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "multiplexer sets are not supported on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
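The millisecond-to-timerspec conversion in the epoll branch is worth seeing on its own: an all-zero spec disarms a timerfd, so a zero timeout (libcurl's "call me back immediately") has to become the shortest possible arm instead. A platform-independent sketch, with `timeout_to_spec` and `struct timeout_spec` as hypothetical stand-ins for the `struct itimerspec` logic above:

```c
/* Stand-in for the sec/nsec fields of struct itimerspec's it_value. */
struct timeout_spec
{
	long		sec;
	long		nsec;
};

/* Sketch of the timeout conversion: negative disarms (all-zero spec),
 * zero arms for one nanosecond, positive splits into sec + nsec. */
static struct timeout_spec
timeout_to_spec(long timeout_ms)
{
	struct timeout_spec spec = {0, 0};

	if (timeout_ms < 0)
		return spec;			/* all-zero disarms the timer */

	if (timeout_ms == 0)
	{
		spec.nsec = 1;			/* "immediately": shortest possible arm */
		return spec;
	}

	spec.sec = timeout_ms / 1000;
	spec.nsec = (timeout_ms % 1000) * 1000000L;
	return spec;
}
```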
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
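The line-splitting loop above handles chunks that contain several newline-terminated lines as well as a final partial line. A minimal sketch of the same memchr walk, writing into a caller-supplied buffer instead of stderr; `prefix_lines` is a hypothetical name and `out` is assumed large enough:

```c
#include <string.h>

/* Sketch of the debug-output splitter: prefix each line of a chunk,
 * terminating a partial final line with a newline of our own. */
static void
prefix_lines(char *out, const char *prefix, const char *data, size_t size)
{
	const char *end = data + size;

	*out = '\0';
	while (data < end)
	{
		size_t		len = end - data;
		const char *eol = memchr(data, '\n', len);

		if (eol)
			len = eol - data + 1;	/* include the newline */

		strcat(out, prefix);
		strcat(out, " ");
		strncat(out, data, len);
		if (!eol)
			strcat(out, "\n");	/* the chunk ended mid-line */

		data += len;
	}
}
```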
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback takes effect only while CURLOPT_VERBOSE is set, so be
+		 * sure to set that as well.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl, which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each chunk is
+ * defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB and can only be
+ * changed by recompiling libcurl.
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
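The write-callback contract relied on above is that libcurl passes `size * nmemb` bytes and treats any other return value as an abort. A self-contained sketch of that contract with a fixed-size sink in place of the PQExpBuffer; `struct sink`, `sink_write`, and `MAX_RESPONSE_SIZE` are hypothetical stand-ins:

```c
#include <string.h>

#define MAX_RESPONSE_SIZE 256	/* stand-in for MAX_OAUTH_RESPONSE_SIZE */

struct sink
{
	char		buf[MAX_RESPONSE_SIZE + 1];
	size_t		len;
};

/* Sketch of a CURLOPT_WRITEFUNCTION callback: consume size * nmemb
 * bytes on success; return anything else to abort the transfer. */
static size_t
sink_write(char *data, size_t size, size_t nmemb, void *userdata)
{
	struct sink *s = userdata;
	size_t		len = size * nmemb;

	if (s->len + len > MAX_RESPONSE_SIZE)
		return 0;				/* abort: response too large */

	memcpy(s->buf + s->len, data, len);
	s->len += len;
	s->buf[s->len] = '\0';		/* libcurl data is not null-terminated */

	return len;					/* success: consumed everything */
}
```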
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
+		 * it's inefficient and pointless if your event loop has already
+		 * handed you the exact sockets that are ready. But that's not our use
+		 * case -- our client has no way to tell us which sockets are ready.
+		 * (They don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do... so it appears to be exactly the API we need.
+		 *
+		 * Ignore the deprecation for now. This needs a followup on
+		 * curl-library@, to make sure we're not shooting ourselves in the
+		 * foot in some other way.
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		markPQExpBufferBroken(buf);
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
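The search-and-replace step above (rewriting each "%20" the escaper produced as "+", as the curl command-line tool does) can be shown without libcurl or PQExpBuffer. A minimal sketch; `spaces_to_plus` is a hypothetical name and `out` is assumed large enough (the result is never longer than the input):

```c
#include <string.h>

/* Sketch of the %20-to-plus rewrite: copy each unmatched span, emit a
 * '+' for each "%20", then append the remainder of the string. */
static void
spaces_to_plus(char *out, const char *escaped)
{
	const char *haystack = escaped;
	const char *match;

	*out = '\0';
	while ((match = strstr(haystack, "%20")) != NULL)
	{
		/* append the unmatched portion, followed by the plus sign */
		strncat(out, haystack, match - haystack);
		strcat(out, "+");

		haystack = match + 3;	/* keep searching after the match */
	}

	strcat(out, haystack);		/* remainder of the string */
}
```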
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
+	Assert(token_uri);			/* ensured by get_discovery_document() */
+	Assert(device_code);		/* ensured by finish_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild using 403 for error
+	 * returns, in violation of the specification. For now we stick to the
+	 * spec, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		/*
+		 * Check for overflow before adding; signed overflow is undefined
+		 * behavior, so we can't portably test for wraparound after the
+		 * fact.
+		 */
+		if (actx->authz.interval > INT_MAX - 5)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+
+		actx->authz.interval += 5;
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		const char *env;
+
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		env = getenv("PGOAUTHDEBUG");
+		if (env && strcmp(env, "UNSAFE") == 0)
+			actx->debugging = true;
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				/* TODO: check issuer */
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..ff4491cfd3
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,687 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the token pointer will be ignored and the initial
+ * response will instead contain a request for the server's required OAuth
+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* We must have a token. */
+		if (!token)
+		{
+			/*
+			 * Either programmer error, or something went badly wrong during
+			 * the asynchronous fetch.
+			 *
+			 * TODO: users shouldn't see this; what action should they take if
+			 * they do?
+			 */
+			libpq_append_conn_error(conn, "no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		appendPQExpBuffer(&conn->errorMessage,
+						  libpq_gettext("failed to parse server's error response: %s"),
+						  errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	if (ctx.discovery_uri)
+	{
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("server sent error response without a status"));
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = strdup(request->token);
+		if (!state->token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			state->token = strdup(request.token);
+			if (!state->token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_BUILTIN_OAUTH
+		/*
+		 * Hand off to our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+static bool
+derive_discovery_uri(PGconn *conn)
+{
+	PQExpBufferData discovery_buf;
+
+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
+	{
+		/*
+		 * Either we already have one, or we aren't able to derive one
+		 * ourselves. The latter case is not an error condition; we'll just
+		 * ask the server to provide one for us.
+		 */
+		return true;
+	}
+
+	initPQExpBuffer(&discovery_buf);
+
+	Assert(!conn->oauth_discovery_uri);
+	Assert(conn->oauth_issuer);
+
+	/*
+	 * If we don't yet have a discovery URI, but the user gave us an explicit
+	 * issuer, use the .well-known discovery URI for that issuer.
+	 */
+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
+
+	if (PQExpBufferDataBroken(discovery_buf))
+		goto cleanup;
+
+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
+
+cleanup:
+	termPQExpBuffer(&discovery_buf);
+	return (conn->oauth_discovery_uri != NULL);
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!derive_discovery_uri(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				if (!conn->oauth_client_id)
+				{
+					/* We can't talk to a server without a client identifier. */
+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
+					return SASL_FAILED;
+				}
+
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A sophisticated user implementation may have already
+					 * given us the token (e.g. if an unexpired copy was
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly. This doesn't
+				 * require any asynchronous work.
+				 */
+				discover = true;
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, discover, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (!*output)
+				return SASL_FAILED;
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("server sent additional OAuth data after error\n"));
+			return SASL_FAILED;
+
+		default:
+			appendPQExpBufferStr(&conn->errorMessage,
+								 libpq_gettext("invalid OAuth exchange state\n"));
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..6e5e946364
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..d260b60c0e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,13 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +586,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +672,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +702,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1024,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1193,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1210,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1540,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 61c025ff3b..4be1b98514 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3694,6 +3713,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3849,6 +3869,19 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3882,7 +3915,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3919,6 +3962,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4600,6 +4678,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4717,6 +4796,11 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7239,6 +7323,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..3c6c7fd23b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -28,6 +28,10 @@ extern "C"
  */
 #include "postgres_ext.h"
 
+#ifdef WIN32
+#include <winsock2.h>			/* for SOCKET */
+#endif
+
 /*
  * These symbols may be used in compile-time #ifdef tests for the availability
  * of v14-and-newer libpq features.
@@ -59,6 +63,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +109,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +192,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +732,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+/* for _PQoauthBearerRequest.async() */
+#ifdef WIN32
+#define SOCKTYPE SOCKET
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										SOCKTYPE *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..75043bbc8f 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,15 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer URL */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..e4d92eb402 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if oauth_library == 'curl'
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/interfaces/libpq/pqexpbuffer.c b/src/interfaces/libpq/pqexpbuffer.c
index 037875c523..9473ed6749 100644
--- a/src/interfaces/libpq/pqexpbuffer.c
+++ b/src/interfaces/libpq/pqexpbuffer.c
@@ -46,7 +46,7 @@ static const char *const oom_buffer_ptr = oom_buffer;
  *
  * Put a PQExpBuffer in "broken" state if it isn't already.
  */
-static void
+void
 markPQExpBufferBroken(PQExpBuffer str)
 {
 	if (str->data != oom_buffer)
diff --git a/src/interfaces/libpq/pqexpbuffer.h b/src/interfaces/libpq/pqexpbuffer.h
index d05010066b..9956829a88 100644
--- a/src/interfaces/libpq/pqexpbuffer.h
+++ b/src/interfaces/libpq/pqexpbuffer.h
@@ -121,6 +121,12 @@ extern void initPQExpBuffer(PQExpBuffer str);
 extern void destroyPQExpBuffer(PQExpBuffer str);
 extern void termPQExpBuffer(PQExpBuffer str);
 
+/*------------------------
+ * markPQExpBufferBroken
+ *		Put a PQExpBuffer in "broken" state if it isn't already.
+ */
+extern void markPQExpBufferBroken(PQExpBuffer str);
+
 /*------------------------
  * resetPQExpBuffer
  *		Reset a PQExpBuffer to empty
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index e13938fe8a..c7d55259ce 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -66,6 +66,7 @@ pgxs_kv = {
   'SUN_STUDIO_CC': 'no', # not supported so far
 
   # want the chosen option, rather than the library
+  'with_builtin_oauth' : oauth_library,
   'with_ssl' : ssl_library,
   'with_uuid': uuidopt,
 
@@ -231,6 +232,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14..bdfd5f1f8d 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index c829b61953..bd13e4afbd 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..6787041b3b
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_builtin_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..72121e44ff
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,53 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_builtin_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 0000000000..b9278a2930
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,157 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Verify OAuth hook functionality in libpq
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGAuthData type, PGconn *conn, void *data);
+
+static void
+usage(char *argv[])
+{
+	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	fprintf(stderr, "recognized flags:\n");
+	fprintf(stderr, " -h, --help				show this message\n");
+	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
+	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
+	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
+}
+
+static bool no_hook = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+static int
+handle_auth_data(PGAuthData type, PGconn *conn, void *data)
+{
+	PQoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
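A quick editorial note on the hook contract above: handle_auth_data() returns 0 to decline (letting libpq fall back to other flows), 1 after filling in req->token, or -1 to abort the connection. A minimal sketch of that three-way protocol, using Python and hypothetical names purely for illustration:

```python
def handle_auth_data(kind, req, *, token=None, expected_scope=None):
    """Mimic the C hook's contract: 0 = not handled, 1 = token provided,
    -1 = hard failure. `req` stands in for PQoauthBearerRequest."""
    if kind != "oauth-bearer":
        return 0                    # decline; let other flows run
    if expected_scope is not None and req.get("scope") != expected_scope:
        return -1                   # scope mismatch: fail the connection
    req["token"] = token            # hand the bearer token back to libpq
    return 1
```

This mirrors the C code's structure only; the real hook is registered with PQsetAuthDataHook() and receives a PGAuthData type plus the live PGconn.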
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..77e7e240cf
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,352 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_builtin_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver; # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "127.0.0.1:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf('pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"           scope="openid postgres"
+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
+					 "HTTPS is required without debug mode",
+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234", role="$user"/,
+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="test" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters.
+$user = "testalt";
+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check("user $user: validator receives correct parameters", $log_start,
+					 log_like => [
+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+					 ]);
+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
+					 log_like => [
+						 qr/connection authenticated: identity="testalt" method=oauth/,
+					 ]);
+	$log_start = $log_end;
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr = "user=testparam dbname=postgres ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr => qr/failed to obtain device authorization: response is too large/
+);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr => qr/failed to obtain access token: response is too large/
+);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr => qr/failed to parse access token response: unexpected content type/
+);
+
+$node->connect_fails(
+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
+);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the issuer. (Scope is hardcoded to empty to cover that
+# case as well.)
+$common_connstr =
+  "user=test dbname=postgres oauth_issuer=$issuer oauth_scope=''";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->stop;
+
+done_testing();
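For readers following the "/param" tests: connstr() smuggles test instructions to the mock server by serializing them as JSON and Base64-encoding the result into oauth_client_id, which the server's do_POST() later decodes. A round-trip sketch (function names are hypothetical; the Perl helper and the Python server do the equivalent):

```python
import base64
import json

def encode_test_params(**params) -> str:
    """Pack test parameters the way connstr() does: JSON, then Base64."""
    return base64.b64encode(json.dumps(params).encode()).decode("ascii")

def decode_test_params(client_id: str) -> dict:
    """Unpack them the way the mock server's do_POST() does."""
    return json.loads(base64.b64decode(client_id))

cid = encode_test_params(stage="token", retries=2)
assert decode_test_params(cid) == {"stage": "token", "retries": 2}
```

Because the payload rides in a field the client already sends, no out-of-band channel between the TAP test and the mock server is needed.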
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 0000000000..c349820fac
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,110 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_library = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+my $issuer = "https://127.0.0.1:54321";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr = "$base_connstr oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	diag "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_builtin_oauth} ne 'curl')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth/
+	);
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..35ba8abb61
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,359 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = self.path.startswith("/alternate/")
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        if self.path == "/.well-known/openid-configuration":
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
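The _check_authn() method above verifies HTTP Basic client authentication as RFC 6749 §2.3.1 prescribes: each of client_id and client_secret is form-urlencoded before being joined with ':' and Base64-encoded. A standalone sketch of the expected-credentials computation (hypothetical function name):

```python
import base64
import urllib.parse

def expected_basic_credentials(client_id: str, secret: str) -> bytes:
    # RFC 6749 sec. 2.3.1: form-urlencode each component, join with ':',
    # then Base64-encode the whole string for the Authorization header.
    user = urllib.parse.quote_plus(client_id)
    pw = urllib.parse.quote_plus(secret)
    return base64.b64encode(f"{user}:{pw}".encode())

# Spaces become '+', '&' becomes '%26', etc.
creds = expected_basic_credentials("my id", "p&ss")
assert base64.b64decode(creds).decode() == "my+id:p%26ss"
```

This is why the $vschars tests in 001_server.pl exercise every VSCHAR: the encoding step must survive the full allowed character set for client_id and client_secret.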
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..dbba326bc4
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,100 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 007571e948..83360b397a 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2499,6 +2499,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, it is matched against the standard error
+stream; otherwise, stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2542,7 +2547,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 1847bbfa95..08ccc8ad9b 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1718,6 +1722,8 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1782,6 +1788,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1942,11 +1949,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3067,6 +3077,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3460,6 +3472,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3660,6 +3674,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
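As an aside for anyone reading the mock server's retry logic above: the token endpoint deliberately returns a "pending" error for the first N requests and enforces a minimum delay between polls, which is the device-flow polling behavior (RFC 8628) the client has to cope with. A rough, self-contained sketch of the client side of that loop — the names here are illustrative only, not libpq's actual implementation:

```python
import time

def poll_for_token(request_token, interval=0.01, max_attempts=5):
    """Poll a token endpoint until it stops returning authorization_pending."""
    for _ in range(max_attempts):
        resp = request_token()
        if resp.get("error") == "authorization_pending":
            # RFC 8628 requires waiting at least `interval` seconds between polls.
            time.sleep(interval)
            continue
        return resp
    raise TimeoutError("token endpoint never completed the exchange")

class FakeEndpoint:
    """Stand-in for the mock server above: stay pending N times, then succeed."""
    def __init__(self, pending_responses=2):
        self.remaining = pending_responses

    def __call__(self):
        if self.remaining:
            self.remaining -= 1
            return {"error": "authorization_pending"}
        return {"access_token": "opaque-token", "token_type": "bearer"}

token = poll_for_token(FakeEndpoint())
print(token["access_token"])  # -> opaque-token
```

The mock server's min_delay assertion is the server-side mirror of the sleep above: a client that polls faster than the advertised interval is buggy.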

Attachment: v36-0002-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 01df79980b3bf24216dd57f6d5af21a30ebd4427 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v36 2/2] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  107 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  138 ++
 src/test/python/client/test_client.py |  186 +++
 src/test/python/client/test_oauth.py  | 2040 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   46 +
 src/test/python/pq3.py                |  740 +++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 +++++++
 src/test/python/tls.py                |  195 +++
 src/tools/make_venv                   |   56 +
 25 files changed, 5759 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 26e747d559..e69d867bda 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -319,6 +319,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -373,9 +374,12 @@ task:
 
       # Also build & test in a 32bit build - it's gotten rare to test that
       # locally.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       configure_32_script: |
         su postgres <<-EOF
           export CC='ccache gcc -m32'
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           meson setup \
             --buildtype=debug \
             -Dcassert=true -Dinjection_points=true \
diff --git a/meson.build b/meson.build
index d354ec5e8b..f6e03c8756 100644
--- a/meson.build
+++ b/meson.build
@@ -3379,6 +3379,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3545,6 +3548,110 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      if get_option('PG_TEST_EXTRA').contains('python')
+        reqs = files(t['requirements'])
+        test('install_' + venv_name,
+          python,
+          args: [ make_venv, '--requirements', reqs, venv_path ],
+          env: env,
+          priority: setup_tests_priority - 1,  # must run after tmp_install
+          is_parallel: false,
+          suite: ['setup'],
+          timeout: 60,  # 30s is too short for the cryptography package compile
+        )
+      endif
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      }
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+        ]
+        if not get_option('PG_TEST_EXTRA').contains('python')
+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
+        endif
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
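pq3.py itself is too large to quote in this chunk, but for a flavor of the protocol-level work the suite does, a v3 startup packet can be built and picked apart with nothing but the stdlib. This is a sketch of the wire format only, not pq3's actual API:

```python
import struct

def build_startup(params, proto=(3, 0)):
    # StartupMessage: int32 length (which includes itself), int32 protocol
    # version (major << 16 | minor), then NUL-terminated key/value pairs,
    # closed off by a final NUL byte.
    body = struct.pack("!hh", *proto)
    for k, v in params.items():
        body += k.encode() + b"\x00" + v.encode() + b"\x00"
    body += b"\x00"
    return struct.pack("!i", len(body) + 4) + body

pkt = build_startup({"user": "alice", "database": "postgres"})
length, version = struct.unpack("!ii", pkt[:8])
assert length == len(pkt)
assert version == (3 << 16)  # protocol 3.0 == 196608
```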
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..ff13ea9e21
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
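One sanity check worth noting for the hand-rolled h_i() in the test above: Hi(str, salt, i) from RFC 5802 Section 2.2 is exactly PBKDF2-HMAC-SHA-256 with a 32-byte output, so the stdlib can cross-check the test's arithmetic. A hedged aside, not part of the patch:

```python
import hashlib
import hmac

def h_i_stdlib(password: bytes, salt: bytes, iterations: int) -> bytes:
    # RFC 5802's Hi() is PBKDF2-HMAC-SHA-256 with dkLen = 32.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# Derive the SCRAM keys the same way the test does, using the stdlib only.
salted = h_i_stdlib(b"secret", b"12345", 2)
client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
stored_key = hashlib.sha256(client_key).digest()
assert len(stored_key) == 32
```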
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..e4378a9fdf
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2040 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_builtin_oauth") == "none",
+    reason="OAuth client tests require --with-builtin-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
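For context, the `\x01`-delimited shape asserted above is the OAUTHBEARER initial client response from RFC 7628, Sec. 3.1: a gs2 header, one `auth` key/value pair, and a double ^A terminator. A small sketch that builds and re-parses one (the token value is a placeholder):

```python
token = "some.bearer.token"  # placeholder, not a real token

# gs2 header "n,," (no channel binding, no authzid), then ^A-delimited
# key/value pairs, ending with an empty pair ("\x01\x01").
initial = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"

kvpairs = initial.split(b"\x01")
assert kvpairs[0] == b"n,,"
assert kvpairs[2] == kvpairs[3] == b""

# maxsplit=1 keeps any '=' characters inside the token intact.
key, value = kvpairs[1].split(b"=", 1)
assert key == b"auth"
```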
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
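The `sasl_resp` document above is the server's error response from RFC 7628, Sec. 3.2.2: a plain JSON object whose `status` field carries the failure code. A minimal sketch of the round trip (the discovery URL is a placeholder):

```python
import json

# Fields per RFC 7628, Sec. 3.2.2; the URL is an illustrative placeholder.
sasl_resp = {
    "status": "invalid_token",
    "openid-configuration": "https://issuer.example.com/.well-known/openid-configuration",
}
body = json.dumps(sasl_resp).encode("utf-8")

# The client parses this, then replies with the dummy "\x01" response
# required by the RFC before the server fails the exchange.
parsed = json.loads(body)
assert parsed["status"] == "invalid_token"
```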
+
+
+def xtest_oauth_success(conn):  # TODO
+    initial = start_oauth_handshake(conn)
+
+    auth = get_auth_value(initial)
+    assert auth.startswith(b"Bearer ")
+
+    # Accept the token. TODO actually validate
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+    finish_handshake(conn)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server.
+    """
+
+    def __init__(self, *, port):
+        super().__init__()
+
+        self.exception = None
+
+        addr = ("", port)
+        self.server = self._Server(addr, self._Handler)
+
+        # TODO: allow HTTPS only, somehow
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"http://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider server thread did not stop within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _Server(http.server.HTTPServer):
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
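The '+'-versus-%20 distinction enforced above can be demonstrated directly with urllib: `urlencode()` uses `quote_plus()` by default, so spaces become '+', while `quote_via=quote` would yield the %20 spelling this handler rejects. A quick sketch:

```python
import urllib.parse

params = {"scope": "openid email"}

# Default form encoding: spaces become '+'.
body = urllib.parse.urlencode(params)
assert body == "scope=openid+email"

# The %20 spelling, which the strict handler above would reject.
alt = urllib.parse.urlencode(params, quote_via=urllib.parse.quote)
assert alt == "scope=openid%20email"

# parse_qs() round-trips either spelling back to a literal space.
assert urllib.parse.parse_qs(body)["scope"] == ["openid email"]
```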
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture
+def openid_provider(unused_tcp_port_factory):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(port=unused_tcp_port_factory())
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # value to return if the test provides no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_issuer(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read()
+
+
+def test_oauth_requires_client_id(accept, openid_provider):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        # Do not set a client ID; this should cause a client error after the
+        # server asks for OAUTHBEARER and the client tries to contact the
+        # issuer.
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "no oauth_client_id is set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
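The interval rules exercised by this test come from RFC 8628, Sec. 3.5: the client must wait at least `interval` seconds between token requests (defaulting to 5 when the field is omitted) and must add five seconds whenever it receives `slow_down`. A minimal sketch of that bookkeeping (`next_interval` is an illustrative helper, not part of the patch):

```python
DEFAULT_INTERVAL = 5  # seconds, per RFC 8628 when "interval" is omitted

def next_interval(current, error=None):
    """Interval to wait before the next token request."""
    if error == "slow_down":
        # slow_down requires adding 5 seconds to the current interval.
        return current + 5
    # authorization_pending (or success) leaves the interval unchanged.
    return current

interval = DEFAULT_INTERVAL
interval = next_interval(interval, "slow_down")
assert interval == 10
interval = next_interval(interval, "authorization_pending")
assert interval == 10
```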
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Cleanup callback; libpq should invoke it exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
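The wakeup machinery above is a self-pipe pattern: the hook hands libpq the read end of a pipe via `p_altsock`, and a timer thread later writes a byte to the write end to make that descriptor readable. A minimal standalone sketch of the same trick (illustrative only, independent of the patch):

```python
import os
import select
import threading

readfd, writefd = os.pipe()

def wakeup():
    # Writing a single byte makes the read end poll as readable.
    os.write(writefd, b"\0")

# Arm a timer, then block on the read end, as libpq does when told to
# wait on the alternate socket.
threading.Timer(0.1, wakeup).start()
readable, _, _ = select.select([readfd], [], [], 5)
assert readable == [readfd]

woke = os.read(readfd, 1)  # drain the wakeup byte
os.close(readfd)
os.close(writefd)
```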
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. Not the most
+    efficient approach, but easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
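For readers skimming the patch, the helper above just ORs its arguments together, wrapping each in a group. Restated as a self-contained one-liner (equivalent behavior for the patterns used here):

```python
import re

def alt_patterns(*patterns):
    """Combine alternative regexes into one pattern, as in the test helper."""
    return "|".join(f"({p})" for p in patterns)

combined = alt_patterns(r"foo\d+", r"bar")  # "(foo\d+)|(bar)"
```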
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
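The retry behavior exercised above follows the RFC 8628 device flow: the client keeps polling the token endpoint while it receives `authorization_pending`, and bails out on any other error. A condensed, hypothetical sketch of that client-side loop (the names and error-message format here are illustrative, not the patch's):

```python
# Stub token endpoint: pending twice, then a hard error.
responses = [
    (400, {"error": "authorization_pending"}),
    (400, {"error": "authorization_pending"}),
    (400, {"error": "access_denied"}),
]

def token_endpoint():
    return responses.pop(0)

def poll_for_token(max_polls=10):
    """Poll until success; tolerate only authorization_pending errors."""
    for _ in range(max_polls):
        status, body = token_endpoint()
        if status == 200:
            return body["access_token"]
        if body.get("error") != "authorization_pending":
            raise RuntimeError(f"token request failed: ({body['error']})")
    raise RuntimeError("polling timed out")

try:
    poll_for_token()
    outcome = "ok"
except RuntimeError as e:
    outcome = str(e)
```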
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(oauth_client_id=secrets.token_hex())
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
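The expected errors above boil down to a few checks on the server's JSON error challenge: the top level must be an object, `status` must be present and a string, and the optional `openid-configuration` and `scope` members must be strings if present. A hypothetical validator applying the same rules (the function and messages are this sketch's, not libpq's):

```python
import json

def parse_sasl_error(raw):
    """Validate an OAUTHBEARER error challenge the way the tests expect."""
    doc = json.loads(raw)
    if not isinstance(doc, dict):
        raise ValueError("top-level element must be an object")
    if "status" not in doc:
        raise ValueError("server sent error response without a status")
    for field in ("status", "openid-configuration", "scope"):
        if field in doc and not isinstance(doc[field], str):
            raise ValueError(f'field "{field}" must be a string')
    return doc

doc = parse_sasl_error('{ "status": "invalid_token" }')
```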
+
+
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://example.com/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "https://example.com/"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "device_authorization_endpoint": "https://example.com/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "https://example.com",
+                    "token_endpoint": "https://example.com/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://example.com/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client must reply to the error "challenge"
+            # with a dummy ^A (0x01) message.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # Should be equivalent to INT_MAX from C's <limits.h> (assumes int and
+    # unsigned int share a width).
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+    sock, client = accept(
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange and link to the HTTP provider.
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": openid_provider.discovery_uri,
+            }
+
+            fail_oauth_handshake(conn, resp)
+
+    # FIXME: We'll get a second connection, but it won't do anything.
+    sock, _ = accept()
+    expect_disconnected_handshake(sock)
+
+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
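Reviewer's note: the INT_MAX trick used by test_oauth_interval_overflow can be checked standalone. This sketch just restates the computation; it relies on the assumption (true on every platform Postgres supports) that C's int and unsigned int share a width:

```python
import ctypes

# c_uint(-1) wraps around to the all-ones bit pattern of a C unsigned int;
# halving that pattern yields INT_MAX, assuming int has the same width.
int_max = ctypes.c_uint(-1).value // 2

# With the 32-bit ints used by all mainstream ABIs, this is 2**31 - 1.
assert int_max == 2**31 - 1
```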
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to request a
+    temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
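The PG_TEST_EXTRA gate above can be exercised in isolation; this sketch mirrors the fixture's whitespace-splitting logic (the helper name is mine, not part of the patch):

```python
import os

def python_tests_enabled(env=None):
    """Mirror the fixture's gate: PG_TEST_EXTRA must contain the word 'python'."""
    env = os.environ if env is None else env
    return "python" in env.get("PG_TEST_EXTRA", "").split()

# Whitespace-separated matching means "pythonic" does not enable the suite.
assert python_tests_enabled({"PG_TEST_EXTRA": "ssl python ldap"}) is True
assert python_tests_enabled({"PG_TEST_EXTRA": "pythonic"}) is False
```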
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..c0b7742ec5
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,46 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_builtin_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Ask the server to upgrade to TLS by sending an SSLRequest packet.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
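A sanity check on the version packing used throughout pq3.py: protocol() puts the major version in the high 16 bits and the minor in the low 16, which is also how the special request codes such as SSLRequest's (1234, 5679) are formed per the frontend/backend protocol documentation:

```python
def protocol(major, minor):
    # Same packing as pq3.protocol(): major in the high 16 bits, minor low.
    return (major << 16) | minor

# The v3 startup version and the SSLRequest magic code.
assert protocol(3, 0) == 196608
assert protocol(1234, 5679) == 80877103
```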
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..c89f0756ae
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c shared_preload_libraries=oauthtest",
+                        "-c oauth_validator_library=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
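
As an aside for reviewers: the cleanup logic at the top of conftest.py boils down to a small decision table (an absent or empty directory is fine to reuse; a non-empty one must look like a data directory before it gets removed). A standalone sketch of that check, using a reduced marker set for illustration:

```python
import os

# Reduced marker set for illustration; the real fixture checks more entries.
REQUIRED_ENTRIES = {"PG_VERSION", "base", "postgresql.conf"}

def safe_to_remove(datadir):
    """Mirror the fixture's safety check: True if datadir is absent, empty,
    or contains every marker entry expected in a data directory."""
    try:
        entries = set(os.listdir(datadir))
    except FileNotFoundError:
        return True  # nothing to clean up
    if not entries:
        return True  # initdb can handle an empty datadir
    return REQUIRED_ENTRIES <= entries  # every marker must be present
```

The point of the check is to refuse to `rmtree` a directory that was probably never a data directory in the first place, since `--temp-instance` could otherwise be pointed at the wrong path by accident.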
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
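
Stepping out of the patch for a moment: the tests in test_oauth.py below hand-construct the OAUTHBEARER client-first message from RFC 7628 §3.1. For reviewers unfamiliar with the mechanism, here is a minimal standalone sketch of that wire format (the kvsep delimiter is the 0x01 byte):

```python
KVSEP = b"\x01"

def oauthbearer_initial_response(token):
    """Build the OAUTHBEARER client-first message: "n,," is the GS2 header
    (no channel binding, no authorization identity); the auth key carries the
    HTTP Authorization header value; two kvseps terminate the message."""
    gs2_header = b"n,,"
    auth_kv = b"auth=Bearer " + token.encode("ascii")
    return gs2_header + KVSEP + auth_kv + KVSEP + KVSEP
```

The tests' send_initial_response() builds this same byte layout by hand, but accepts an arbitrary auth string so that malformed variants can be exercised as well.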
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..ea31ad4f87
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and several users that use the oauth auth method. The
+    context object contains the dbname and user attributes as strings to be
+    used during connection, as well as the issuer and scope that have been set
+    in the HBA configuration.
+
+    This fixture assumes that the standard PG* environment variables point to
+    a server running on the local machine, and that PGUSER has the rights to
+    create databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The token size in
+    bytes may be specified; by default, a small 16-byte token is generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively, the initial response's auth field may be
+    specified explicitly to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found; if an ErrorResponse arrives first, a RuntimeError
+    is raised instead.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's expected behavior.
+    Any settings that are changed will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
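As a cross-check on the malformed cases above, the kvsep framing that the server enforces can be approximated with a toy parser. This is a simplified sketch (the function name is illustrative, it collapses several distinct server checks — e.g. it does not distinguish an unterminated pair from a missing list terminator, and it skips the key/value character-class rules), not the server's implementation:

```python
KVSEP = b"\x01"


def parse_oauth_kvpairs(data):
    # data is everything after the GS2 header:
    #   \x01 key=value \x01 ... key=value \x01 \x01
    if not data.startswith(KVSEP):
        raise ValueError("Key-value separator expected")
    if len(data) < 2 or not data.endswith(KVSEP * 2):
        raise ValueError("message did not contain a final terminator")

    pairs = {}
    body = data[1:-2]  # strip the initial kvsep and the final terminator

    # Each pair ends with a kvsep, so the pairs' own terminators double as
    # separators here.
    for pair in body.split(KVSEP) if body else []:
        key, sep, value = pair.partition(b"=")
        if not sep:
            raise ValueError("key without a value")
        if not key:
            raise ValueError("empty key name")
        if key in pairs:
            raise ValueError("message contains multiple values for a key")
        pairs[key] = value

    if b"auth" not in pairs:
        raise ValueError("message does not contain an auth value")
    return pairs
```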
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
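For reference, the initial-response payload built inline above follows the RFC 7628 shape: a GS2 header, one kvsep (0x01), the mandatory auth key carrying the bearer token, and a double kvsep closing the list. A minimal sketch (the helper name is illustrative, not part of the patch):

```python
KVSEP = b"\x01"


def build_oauthbearer_initial(token, gs2=b"n,,"):
    # GS2 header, kvsep, the mandatory "auth" key with the bearer token,
    # then the double-kvsep list terminator.
    return gs2 + KVSEP + b"auth=Bearer " + token.encode("ascii") + KVSEP + KVSEP
```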
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
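The expected strings above pin _DebugStream's hexdump layout: a direction marker ("<" for reads, ">" for writes), a four-digit hex offset, the hex bytes padded to a fixed 47-column field (16 bytes per row), and a printable-ASCII rendering with non-printables shown as dots. A throwaway formatter producing one such row (illustrative only, not the module's code):

```python
def hexdump_line(offset, chunk, direction="<"):
    # 16 bytes per row: 16 * 3 - 1 = 47 columns for the hex field.
    hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(16 * 3 - 1)
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{direction} {offset:04x}:\t{hexpart}\t{text}\n"
```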
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
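These two cases pin down the version encoding: the major number occupies the high 16 bits of the signed 32-bit integer and the minor the low 16 bits, which is how the 1234/5679 SSLRequest magic becomes 0x04D2162F on the wire. A standalone sketch (protocol here is a local stand-in for pq3.protocol):

```python
import struct


def protocol(major, minor):
    # Major version in the high 16 bits, minor in the low 16 bits.
    return (major << 16) | minor
```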
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
-- 
2.34.1

#158Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#157)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Nov 5, 2024 at 3:33 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Done in v36, attached.

Forgot to draw attention to this part:

+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?

Best I can tell, libpq for FreeBSD has a dependency diamond for GSS
symbols: libpq links against MIT krb5, libcurl links against Heimdal,
libpq links against libcurl. Link order becomes critical to avoid
nasty segfaults, but I have not dug deeply into the root cause.

--Jacob

#159Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#157)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 06.11.24 00:33, Jacob Champion wrote:

Done in v36, attached.

Assorted review comments from me:

Everything in the commit message between

= Debug Mode =

and

Several TODOs:

should be moved to the documentation. In some cases, it already is,
but it doesn't always have the same level of detail.

(You could point from the commit message to .sgml files if you want to
highlight usage instructions, but I don't think this is generally
necessary.)

* config/programs.m4

Can we do the libcurl detection using pkg-config only? Seems simpler,
and maintains consistency with meson.

* doc/src/sgml/client-auth.sgml

In the list of terms (this could be a <variablelist>), state how these
terms map to a PostgreSQL installation. You already explain what the
client and the resource server are, but not who the resource owner is
and what the authorization server is. It would also be good to be
explicit and upfront that the authorization server is a third-party
component that needs to be obtained separately.

trust_validator_authz: Personally, I'm not a fan of the "authz" and
"authn" abbreviations. I know this is security jargon. But are
regular users going to understand this? Can we just spell it out?

* doc/src/sgml/config.sgml

Also here maybe state that these OAuth libraries have to be obtained
separately.

* doc/src/sgml/installation.sgml

I find the way the installation options are structured a bit odd. I
would have expected --with-libcurl and -Dlibcurl (or --with-curl and
-Dcurl). These build options usually just say, use this library. We
don't spell out what, for example, libldap is used for, we just use it
and enable all the features that require it.

* doc/src/sgml/libpq.sgml

Maybe oauth_issuer should be oauth_issuer_url? Otherwise one might
expect to just write "google" here or something. Or there might be
other ways to contact an issuer in the future? Just a thought.

* doc/src/sgml/oauth-validators.sgml

This chapter says "libpq" several times, but I think this is a server
side plugin, so libpq does not participate. Check please.

* src/backend/libpq/auth-oauth.c

I'm confused by the use of PG_MAX_AUTH_TOKEN_LENGTH in the
pg_be_oauth_mech definition. What does that mean?

+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "

Add comments to these.

Also, add comments to all functions defined here that don't have one
yet.
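(For context while commenting these: the constants describe the shape of RFC 7628's OAUTHBEARER initial client response. A quick illustrative sketch in Python, not the patch's C code:)

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator -- the patch's KVSEP 0x01


def oauthbearer_initial_response(token, authzid=None):
    """Build an OAUTHBEARER initial client response per RFC 7628, sec. 3.1."""
    # gs2-header: no channel binding ("n"), optional authorization identity
    gs2 = b"n," + (b"a=" + authzid.encode() if authzid else b"") + b","
    # a single kvpair ("auth=Bearer <token>"), terminated by a double KVSEP
    return gs2 + KVSEP + b"auth=Bearer " + token.encode() + KVSEP + KVSEP
```

So AUTH_KEY and BEARER_SCHEME together make up the one mandatory kvpair, and KVSEP both separates and terminates the pairs.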

* src/backend/utils/misc/guc_tables.c

Why is oauth_validator_library GUC_NOT_IN_SAMPLE?

Also, shouldn't this be an hba option instead? What if you want to
use different validators for different connections?

* src/interfaces/libpq/fe-auth-oauth-curl.c

The CURL_IGNORE_DEPRECATION thing needs clarification. Is that in
progress?

+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)

Add a comment about why this value.

+   union
+   {
+       char      **scalar;     /* for all scalar types */
+       struct curl_slist **array;  /* for type == JSON_TOKEN_ARRAY_START */
+   };

This is an anonymous union, which requires C11. Strangely, I cannot
get clang to warn about this with -Wc11-extensions. Probably better
to fix anyway. (The oldest supported MSVC versions don't support
C11 yet.)

* src/interfaces/libpq/fe-auth.h

+extern const pg_fe_sasl_mech pg_oauth_mech;

Should this rather be in fe-auth-oauth.h?

* src/interfaces/libpq/libpq-fe.h

The naming scheme of types and functions in this file is clearly
obscure and has grown randomly over time. But at least my intuition
is that the preferred way is

types start with PG
function start with PQ

and the next letter is usually lower case. (PQconnectdb, PQhost,
PGconn, PQresult)

Maybe check your additions against that.

* src/interfaces/libpq/pqexpbuffer.c
* src/interfaces/libpq/pqexpbuffer.h

Let's try to do this without opening up additional APIs here.

This is only used once, in append_urlencoded(), and there are other
ways to communicate errors, for example returning a bool.

* src/test/modules/oauth_validator/

Everything in this directory needs more comments, at least on a file
level.

Add a README in this directory. Also update the README in the upper
directory.

* src/test/modules/oauth_validator/t/001_server.pl

On Cirrus CI Windows task, this test reports SKIP. Can't tell why,
because the log is not kept. I suppose you expect this to work on
Windows (but see my comment below), so it would be good to get this
test running.

* src/test/modules/oauth_validator/t/002_client.pl

+my $issuer = "https://127.0.0.1:54321";

Use PostgreSQL::Test::Cluster::get_free_port() instead of hardcoding
port numbers.

Or is this a real port? I don't see it used anywhere else.

+ diag "running '" . join("' '", @cmd) . "'";

This should be "note" instead. Otherwise it garbles the output.

* src/test/perl/PostgreSQL/Test/OAuthServer.pm

Add some comments to this file, what it's for.

Is this meant to work on Windows? Just thinking, things like

kill(15, $self->{'pid'});

pgperlcritic complains:

src/test/perl/PostgreSQL/Test/OAuthServer.pm: Return value of flagged
function ignored - read at line 39, column 2.

* src/tools/pgindent/typedefs.list

We don't need to typedef every locally used enum or similar into a
full typedef. I suggest the following might be unnecessary:

AsyncAuthFunc
OAuthStep
fe_oauth_state_enum

#160Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#159)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Nov 8, 2024 at 1:21 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Assorted review comments from me:

Thank you! I will cherry-pick some responses here and plan to address
the rest in a future patchset.

trust_validator_authz: Personally, I'm not a fan of the "authz" and
"authn" abbreviations. I know this is security jargon. But are
regular users going to understand this? Can we just spell it out?

Yes. That name's a holdover from the very first draft, actually.

Is "trust_validator_authorization" a great name in the first place?
The key concept is that user mapping is being delegated to the OAuth
system itself, so you'd better make sure that the validator has been
built to do that. (Anyone have any suggestions?)

I find the way the installation options are structured a bit odd. I
would have expected --with-libcurl and -Dlibcurl (or --with-curl and
-Dcurl). These build options usually just say, use this library.

It's patterned directly off of -Dssl/--with-ssl (which I liberally
borrowed from) because the builtin client implementation used to have
multiple options for the library in use. I can change it if needed,
but I thought it'd be helpful for future devs if I didn't undo the
generalization.

Maybe oauth_issuer should be oauth_issuer_url? Otherwise one might
expect to just write "google" here or something. Or there might be
other ways to contact an issuer in the future? Just a thought.

More specifically this is an "issuer identifier", as defined by the
OAuth/OpenID discovery specs. It's a subset of a URL, and I want to
make sure users know how to differentiate between an "issuer" they
trust and the "discovery URI" that's in use for that issuer. They may
want to set one or the other -- a discovery URI is associated with
exactly one issuer, but unfortunately an issuer may have multiple
discovery URIs, which I'm actively working on. (There is also some
relation to the multiple-issuers problem mentioned below.)

I'm confused by the use of PG_MAX_AUTH_TOKEN_LENGTH in the
pg_be_oauth_mech definition. What does that mean?

Just that Bearer tokens can be pretty long, so we don't want to limit
them to 1k like SCRAM does. 64k is probably overkill, but I've seen
anecdotal reports of tens of KBs and it seemed reasonable to match
what we're doing for GSS tokens.

Also, shouldn't [oauth_validator_library] be an hba option instead? What if you want to
use different validators for different connections?

Yes. This is again the multiple-issuers problem; I will split that off
into its own email since this one's getting long. It has security
implications.

The CURL_IGNORE_DEPRECATION thing needs clarification. Is that in
progress?

Thanks for the nudge, I've started a thread:

https://curl.se/mail/lib-2024-11/0028.html

This is an anonymous union, which requires C11. Strangely, I cannot
get clang to warn about this with -Wc11-extensions. Probably better
to fix anyway. (The trailing supported MSVC versions don't support
C11 yet.)

Oh, that's not going to be fun.

This is only used once, in append_urlencoded(), and there are other
ways to communicate errors, for example returning a bool.

I'd rather not introduce two parallel error indicators for the caller
to have to check for that particular part. But I can change over to
using the (identical!) termPQExpBuffer. I felt like the other API
signaled the intent a little better, though.

On Cirrus CI Windows task, this test reports SKIP. Can't tell why,
because the log is not kept. I suppose you expect this to work on
Windows (but see my comment below)

No, builtin client support does not exist on Windows. If/when it's
added, the 001_server tests will need to be ported.

+my $issuer = "https://127.0.0.1:54321";

Use PostgreSQL::Test::Cluster::get_free_port() instead of hardcoding
port numbers.

Or is this a real port? I don't see it used anywhere else.

It's not real; 002_client.pl doesn't start an authorization server at
all. I can make that more explicit.

src/test/perl/PostgreSQL/Test/OAuthServer.pm: Return value of flagged
function ignored - read at line 39, column 2.

So perlcritic recognizes "or" but not the "//" operator... Lovely.

Thanks!
--Jacob

#161Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#160)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Nov 12, 2024 at 1:47 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

On Fri, Nov 8, 2024 at 1:21 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Also, shouldn't [oauth_validator_library] be an hba option instead? What if you want to
use different validators for different connections?

Yes. This is again the multiple-issuers problem; I will split that off
into its own email since this one's getting long. It has security
implications.

Okay, so, how to use multiple issuers/providers. Here's my current
plan, with justification below:

1. libpq connection strings must specify exactly one issuer
2. the discovery document coming from the server must belong to that
libpq issuer
3. the HBA should allow a choice of discovery document and validator

= Current Bug =

First, I should point out a critical mistake I've made on the client
side: I treat oauth_issuer and oauth_client_id as if they can be
arbitrarily mixed and matched. Some of the providers I've been testing
do allow you to use one registered client across multiple issuers, but
that's the exception rather than the norm. Even if you have multiple
issuers available, you still expect your registered client to be
talking to only the provider you registered it with.

And you don't want the Postgres server to switch providers for you.
Imagine that you've registered a client application for use with a big
provider, and that provider has given you a client secret. You expect
to share that secret only with them, but with the current setup, if a
DBA wants to steal that secret from you, all they have to do is stand
up a provider of their own, and libpq will send the secret straight to
it instead. Great.

There's actually a worse scenario that's pointed out in the spec for
the Device Authorization flow [1]:

Note that if an authorization server used with this flow is
malicious, then it could perform a man-in-the-middle attack on the
backchannel flow to another authorization server. [...] For this to
be possible, the device manufacturer must either be the attacker and
shipping a device intended to perform the man-in-the-middle attack,
or be using an authorization server that is controlled by an
attacker, possibly because the attacker compromised the
authorization server used by the device.

Back when I implemented this, that paragraph seemed pointlessly
obvious: of course you must trust your authorization server. What I
missed was, the Postgres server MUST NOT be able to control the entry
point into the device flow, because that means a malicious DBA can
trivially start a device prompt with a different provider, forward you
all the details through the endpoint they control, and hope you're too
fatigued to notice the difference before clicking through. (This is
easier if that provider is one of the big ones that you're already
used to trusting.) Then they have a token with which to attack you on
a completely different platform.

So in my opinion, my patchset must be changed to require a trusted
issuer in the libpq connection string. The server can tell you which
discovery document to get from that issuer, and it can tell you which
scopes are required (as long as the user hasn't hardcoded those too),
but it shouldn't be able to force the client to talk to an arbitrary
provider or swap out issuers.

= Multiple Issuers =

Okay, with that out of the way, let's talk about multiple issuer support.

First, server-side. If a server wants different groups of
users/databases/etc. to go through different issuers, then it stands
to reason that a validator should be selectable in the HBA settings,
since a validator for Provider A may not have any clue how to validate
Provider B. I don't like the idea of pg_hba being used to load
arbitrary libraries, though; I think the superuser should have to
designate a pool of "blessed" validator libraries to load through a
GUC. As a UX improvement for the common case, maybe we don't require
the HBA to have an explicit validator parameter if the conf contains
exactly one blessed library.

In case someone does want to develop a multi-issuer validator (say, to
deal with the providers that have multiple issuers underneath their
umbrella), we need to make sure that the configured issuer in use is
available to the validator, so that they aren't susceptible to a
mix-up attack of their own.

As for the client side, I think v1 should allow only one expected
issuer per connection. There are OAuth features [2] that help clients
handle this more safely, but as far as I can tell they are not widely
deployed yet, and I don't know if any of them apply to the device
flow. (With the device flow, if the client allows multiple providers,
those providers can attack each other as described above.)

If a more complicated client application associates a single end user
with multiple Postgres connections, and each connection needs its own
issuer, then that application needs to be encouraged to use a flow
which has been hardened for that use case. (Setting aside the security
problems with mix-ups, the device flow won't be particularly pleasant
for that anyway. "Here's a bunch of URLs and codes, go to all of them
before they time out, good luck!")

= Discovery Documents =

There are two flavors of discovery document, OAuth and OpenID. And
OIDC Discovery and RFC 8414 disagree on the rules, so for the issuer
"https://example.com/abcd&quot;, you have two discovery document locations
using postfix or infix styles for the path:

- OpenID: https://example.com/abcd/.well-known/openid-configuration
- OAuth: https://example.com/.well-known/oauth-authorization-server/abcd

Some providers publish different information at each [3], so the
difference may be important for some deployments. RFC 8414 claims the
OpenID flavor should transition to the infix style at some point (a
transition that is not happening as far as I can see), so now there
are three standards. And Okta uses the construction
"https://example.com/abcd/.well-known/oauth-authorization-server&quot;,
which you may notice matches _neither_ of the two options above, so
now there are four standards.

To deal with all of this, I plan to better separate the difference
between the issuer and the discovery URL in the code, as well as allow
DBAs and clients to specify the discovery URL explicitly to override
the default OpenID flavor. For now I plan to support only
"openid-configuration" and "oauth-authorization-server" in both
postfix and infix notation (four options total, as seen in the wild).
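To make the four variants concrete, here's a small Python sketch (helper name invented for illustration) of how each candidate discovery URL derives from one issuer identifier:

```python
from urllib.parse import urlsplit


def discovery_urls(issuer):
    """Return the four well-known discovery URL variants seen in the wild
    for an issuer identifier such as https://example.com/abcd."""
    p = urlsplit(issuer)
    origin = f"{p.scheme}://{p.netloc}"  # e.g. "https://example.com"
    path = p.path                        # e.g. "/abcd"; may be empty
    return {
        # OIDC Discovery (postfix): suffix appended after the path
        "openid_postfix": issuer + "/.well-known/openid-configuration",
        # RFC 8414 (infix): well-known segment inserted before the path
        "oauth_infix": origin + "/.well-known/oauth-authorization-server" + path,
        # RFC 8414's anticipated OpenID transition (infix)
        "openid_infix": origin + "/.well-known/openid-configuration" + path,
        # Okta-style (postfix OAuth), matching neither RFC exactly
        "oauth_postfix": issuer + "/.well-known/oauth-authorization-server",
    }
```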

How's all that sound?

--Jacob

[1]: https://datatracker.ietf.org/doc/html/rfc8628#section-5.3
[2]: https://datatracker.ietf.org/doc/html/rfc9207
[3]: https://devforum.okta.com/t/is-userinfo-endpoint-available-in-oauth-authorization-server/24284

#162Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#160)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 12.11.24 22:47, Jacob Champion wrote:

On Fri, Nov 8, 2024 at 1:21 AM Peter Eisentraut <peter@eisentraut.org> wrote:

I find the way the installation options are structured a bit odd. I
would have expected --with-libcurl and -Dlibcurl (or --with-curl and
-Dcurl). These build options usually just say, use this library.

It's patterned directly off of -Dssl/--with-ssl (which I liberally
borrowed from) because the builtin client implementation used to have
multiple options for the library in use. I can change it if needed,
but I thought it'd be helpful for future devs if I didn't undo the
generalization.

Personally, I'm not even a fan of the -Dssl/--with-ssl system. I'm more
attached to --with-openssl. But if you want to stick with that, a more
suitable naming would be something like, say, --with-httplib=curl, which
means, use curl for all your http needs. Because if we later add other
functionality that can use some http, I don't think we want to enable or
disable them all individually, or even mix different http libraries for
different features. In practice, curl is a widely available and
respected library, so I'd expect packagers to just turn it all on
without much further consideration.

I'm confused by the use of PG_MAX_AUTH_TOKEN_LENGTH in the
pg_be_oauth_mech definition. What does that mean?

Just that Bearer tokens can be pretty long, so we don't want to limit
them to 1k like SCRAM does. 64k is probably overkill, but I've seen
anecdotal reports of tens of KBs and it seemed reasonable to match
what we're doing for GSS tokens.

Ah, ok, I totally misread that code. Could you maybe write this definition

+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+   oauth_get_mechanisms,
+   oauth_init,
+   oauth_exchange,
+
+   PG_MAX_AUTH_TOKEN_LENGTH,
+};

with designated initializers:

const pg_be_sasl_mech pg_be_oauth_mech = {
.get_mechanisms = oauth_get_mechanisms,
.init = oauth_init,
.exchange = oauth_exchange,
.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
};

The CURL_IGNORE_DEPRECATION thing needs clarification. Is that in
progress?

Thanks for the nudge, I've started a thread:

https://curl.se/mail/lib-2024-11/0028.html

It looks like this has been clarified, so let's put that URL into a code
comment.

This is only used once, in append_urlencoded(), and there are other
ways to communicate errors, for example returning a bool.

I'd rather not introduce two parallel error indicators for the caller
to have to check for that particular part. But I can change over to
using the (identical!) termPQExpBuffer. I felt like the other API
signaled the intent a little better, though.

I think it's better to not drill a new hole into an established API for
such a limited use. So termPQExpBuffer() seems better for now. If it
later turns out, many callers are using termPQExpBuffer() for fake error
handling purposes, then that can be considered independently.

On Cirrus CI Windows task, this test reports SKIP. Can't tell why,
because the log is not kept. I suppose you expect this to work on
Windows (but see my comment below)

No, builtin client support does not exist on Windows. If/when it's
added, the 001_server tests will need to be ported.

Could you put some kind of explicit conditional or a comment in there.
Right now, it's not possible to tell that Windows is not supported.

#163Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#162)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Nov 19, 2024 at 3:05 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Personally, I'm not even a fan of the -Dssl/--with-ssl system. I'm more
attached to --with-openssl. But if you want to stick with that, a more
suitable naming would be something like, say, --with-httplib=curl, which
means, use curl for all your http needs. Because if we later add other
functionality that can use some http, I don't think we want to enable or
disable them all individually, or even mix different http libraries for
different features. In practice, curl is a widely available and
respected library, so I'd expect packagers to just turn it all on
without much further consideration.

Okay, I can see that. I'll work on replacing --with-builtin-oauth. Any
votes from the gallery on --with-httplib vs. --with-libcurl?

The other suggestions look good and I've added them to my personal
TODO list. Thanks again for all the feedback!

--Jacob

#164Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#161)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Nov 14, 2024 at 9:45 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

1. libpq connection strings must specify exactly one issuer

This is done in v37. The oauth_issuer connection parameter is now
required if the server requests OAUTHBEARER.

2. the discovery document coming from the server must belong to that
libpq issuer

This is also complete. If, for example, the client uses
"oauth_issuer=https://example.com/issuer&quot;, then the server is only
allowed to send it to one of four places -- either the three standard
URIs

- https://example.com/issuer/.well-known/openid-configuration
- https://example.com/.well-known/oauth-authorization-server/issuer
- https://example.com/.well-known/openid-configuration/issuer

or the decidedly nonstandard

- https://example.com/issuer/.well-known/oauth-authorization-server

The query for server parameters may be skipped, whether for
performance or paranoia reasons, by providing one of the above
well-known URIs directly. For example,
"oauth_issuer=https://example.com/issuer/.well-known/openid-configuration&quot;.
You need to have worked out your oauth_scope setting in advance,
though, if you're not willing to ask the server for it.
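The acceptance rule above amounts to something like this Python sketch (illustrative only; libpq's actual check is implemented in C):

```python
from urllib.parse import urlsplit


def allowed_discovery_uris(issuer):
    """The only discovery URIs the client accepts for a trusted issuer."""
    p = urlsplit(issuer)
    origin = f"{p.scheme}://{p.netloc}"
    return {
        issuer + "/.well-known/openid-configuration",
        origin + "/.well-known/oauth-authorization-server" + p.path,
        origin + "/.well-known/openid-configuration" + p.path,
        issuer + "/.well-known/oauth-authorization-server",
    }


def server_uri_is_trusted(issuer, server_uri):
    # A malicious server may pick among these four, but it cannot redirect
    # the client to a different issuer's discovery document.
    return server_uri in allowed_discovery_uris(issuer)
```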

3. the HBA should allow a choice of discovery document and validator

oauth_validator_library is now oauth_validator_libraries, and the HBA
supports a `validator` option.

The `issuer` option now works the same way as `oauth_issuer` on the
client: you either provide an issuer identifier or a well-known
discovery URI. If you only provide the issuer, the server will choose
OIDC-style discovery, which seems to be far and away the most popular
choice. You can override that with whatever well-known URI you like,
but libpq still only supports the four options above for its own
safety.

I'm not sure I like the way this option works in the HBA. The client
can immediately complain if you provide a useless URI, because it has
to perform all those checks anyway, but the server has no such
guardrails. I'm wondering if I should maybe split the HBA options into
two -- "issuer" and an optional "discovery" option, perhaps? -- and
try to help out the DBAs a little more.

I don't like the idea of pg_hba being used to load
arbitrary libraries, though; I think the superuser should have to
designate a pool of "blessed" validator libraries to load through a
GUC. As a UX improvement for the common case, maybe we don't require
the HBA to have an explicit validator parameter if the conf contains
exactly one blessed library.

I've implemented both of these. For example, if you have

oauth_validator_libraries = 'provider1'

then your oauth HBA lines can omit `validator`, and provider1 will be
used for all connections. Once you add a second, though:

oauth_validator_libraries = 'provider1, provider2'

then HBA parsing will complain about an ambiguous configuration until
you provide `validator` settings for each:

LOG: authentication method "oauth" requires argument "validator"
to be set when oauth_validator_libraries contains multiple options
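
In other words, the resolution rule is roughly the following
(illustrative Python model, not the server's C implementation, which
also handles quoting via SplitDirectoriesString):

```python
def resolve_validator(libraries, hba_validator=None):
    """Model of the validator-resolution rule described above:
    - empty library list: OAuth connections are refused outright
    - no explicit validator: allowed only if exactly one library is listed
    - explicit validator: must appear in the configured list"""
    libs = [lib.strip() for lib in libraries.split(",") if lib.strip()]
    if not libs:
        raise ValueError("oauth_validator_libraries must be set")
    if hba_validator is None:
        if len(libs) == 1:
            return libs[0]
        raise ValueError('authentication method "oauth" requires argument '
                         '"validator" to be set when oauth_validator_libraries '
                         'contains multiple options')
    if hba_validator not in libs:
        raise ValueError('validator "%s" is not permitted' % hba_validator)
    return hba_validator
```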

In case someone does want to develop a multi-issuer validator (say, to
deal with the providers that have multiple issuers underneath their
umbrella), we need to make sure that the configured issuer in use is
available to the validator, so that they aren't susceptible to a
mix-up attack of their own.

This is already provided via MyProcPort, which the library is free to
examine (and our tests currently make use of it, which I'd forgotten).

--

v37 also chips away at some of the upthread feedback:
- tests that expect no issuer connections have been changed to use an
invalid IP address
- the curl_multi_socket_all deprecation history has been recorded
- oauth_validator_libraries has been added to the config sample
- pgperltidy has been run on the new TAP tests

I also added an envvar (PGOAUTHCAFILE), which is itself buried
underneath PGOAUTHDEBUG=UNSAFE, to change the CA file in use by Curl.
The Python tests use that to test HTTPS issuers. I'm not yet sure
whether that should be a fully fleshed-out, documented feature for v1;
the whole idea of the OAuth system is that your browser should be able
to connect, and if you need to tweak the X.509 tree for your provider
then you're probably going to be doing it at the system level, not at
the application level. But it's a possibility.
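
The gating described above can be modeled roughly like this
(illustrative Python only; the actual check lives in libpq's Curl
flow):

```python
def effective_ca_file(env):
    """Honor PGOAUTHCAFILE only when unsafe debugging is enabled via
    PGOAUTHDEBUG=UNSAFE. Rough model of the behavior described above;
    env is a dict standing in for the process environment."""
    if env.get("PGOAUTHDEBUG") == "UNSAFE":
        return env.get("PGOAUTHCAFILE")
    return None  # Curl's default CA bundle is used

print(effective_ca_file({"PGOAUTHDEBUG": "UNSAFE",
                         "PGOAUTHCAFILE": "/tmp/ca.pem"}))
```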

Thanks,
--Jacob

Attachments:

since-v36.diff.txt (text/plain)
1:  5730b875b8 ! 1:  16f1b8fc02 Add OAUTHBEARER SASL mechanism
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +      <term><literal>issuer</literal></term>
     +      <listitem>
     +       <para>
    -+        The <acronym>URL</acronym> of the OAuth issuing party, which the client
    -+        must contact to receive a bearer token.  This parameter is required.
    ++        The issuer identifier of the authorization server, as defined by its
    ++        discovery document, or a well-known URI pointing to that discovery
    ++        document. This parameter is required.
     +       </para>
    ++       <para>
    ++        When an OAuth client connects to the server, a discovery document URI
    ++        will be constructed using the issuer identifier. By default, the URI
    ++        uses the conventions of OpenID Connect Discovery: the path
    ++        <literal>/.well-known/openid-configuration</literal> will be appended
    ++        to the issuer identifier. Alternatively, if the
    ++        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
    ++        path segment, the URI will be provided to the client as-is.
    ++       </para>
    ++       <warning>
    ++        <para>
    ++         The OAuth client in libpq requires the server's issuer setting to
    ++         exactly match the issuer identifier which is provided in the discovery
    ++         document, which must in turn match the client's
    ++         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
    ++         case or format are permitted.
    ++        </para>
    ++       </warning>
     +      </listitem>
     +     </varlistentry>
     +
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +     </varlistentry>
     +
     +     <varlistentry>
    ++      <term><literal>validator</literal></term>
    ++      <listitem>
    ++       <para>
    ++        The library to use for validating bearer tokens. If given, the name must
    ++        exactly match one of the libraries listed in
    ++        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
    ++        optional unless <literal>oauth_validator_libraries</literal> contains
    ++        more than one library, in which case it is required.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry>
     +      <term><literal>map</literal></term>
     +      <listitem>
     +       <para>
    @@ doc/src/sgml/config.sgml: include_dir 'conf.d'
            </listitem>
           </varlistentry>
     +
    -+     <varlistentry id="guc-oauth-validator-library" xreflabel="oauth_validator_library">
    -+      <term><varname>oauth_validator_library</varname> (<type>string</type>)
    ++     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
    ++      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
     +      <indexterm>
    -+       <primary><varname>oauth_validator_library</varname> configuration parameter</primary>
    ++       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
     +      </indexterm>
     +      </term>
     +      <listitem>
     +       <para>
    -+        The library to use for validating OAuth connection tokens. If set to
    -+        an empty string (the default), OAuth connections will be refused. For
    -+        more information on implementing OAuth validators see
    ++        The library/libraries to use for validating OAuth connection tokens. If
    ++        only one validator library is provided, it will be used by default for
    ++        any OAuth connections; otherwise, all
    ++        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
    ++        must explicitly set a <literal>validator</literal> chosen from this
    ++        list. If set to an empty string (the default), OAuth connections will be
    ++        refused. For more information on implementing OAuth validators see
     +        <xref linkend="oauth-validators" />. This parameter can only be set in
     +        the <filename>postgresql.conf</filename> file.
     +       </para>
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
            </listitem>
           </varlistentry>
     +
    ++     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
    ++      <term><literal>oauth_issuer</literal></term>
    ++      <listitem>
    ++       <para>
    ++        The HTTPS URL of an issuer to contact if the server requests an OAuth
    ++        token for the connection. This parameter is required for all OAuth
    ++        connections; it should exactly match the <literal>issuer</literal>
    ++        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
    ++       </para>
    ++       <para>
    ++        As part of the standard authentication handshake, libpq will ask the
    ++        server for a <emphasis>discovery document:</emphasis> a URI providing a
    ++        set of OAuth configuration parameters. The server must provide a URI
    ++        that is directly constructed from the components of the
    ++        <literal>oauth_issuer</literal>, and this value must exactly match the
    ++        issuer identifier that is declared in the discovery document itself, or
    ++        the connection will fail. This is required to prevent a class of "mix-up
    ++        attacks" on OAuth clients.
    ++       </para>
    ++       <para>
    ++        This standard handshake requires two separate network connections to the
    ++        server per authentication attempt. To skip asking the server for a
    ++        discovery document URI, you may set <literal>oauth_issuer</literal> to a
    ++        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
    ++        case, it is recommended that
    ++        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
    ++        client will not have a chance to ask the server for a correct scope
    ++        setting, and the default scopes for a token may not be sufficient to
    ++        connect.) libpq currently supports the following well-known endpoints:
    ++        <itemizedlist>
    ++         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
    ++         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
    ++        </itemizedlist>
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
     +     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
     +      <term><literal>oauth_client_id</literal></term>
     +      <listitem>
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +      </listitem>
     +     </varlistentry>
     +
    -+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
    -+      <term><literal>oauth_issuer</literal></term>
    -+      <listitem>
    -+       <para>
    -+        The HTTPS URL of an issuer to contact if the server requests an OAuth
    -+        token for the connection. This parameter is optional and intended for
    -+        advanced usage; see also <xref linkend="libpq-connect-oauth-scope"/>.
    -+       </para>
    -+       <para>
    -+        If no <literal>oauth_issuer</literal> is provided, the client will ask
    -+        the <productname>PostgreSQL</productname> server to provide an
    -+        acceptable issuer URL (as configured in its
    -+        <link linkend="auth-oauth">HBA settings</link>). This is convenient, but
    -+        it requires two separate network connections to the server per attempt.
    -+       </para>
    -+       <para>
    -+        Providing an explicit <literal>oauth_issuer</literal> (and, typically,
    -+        an accompanying <literal>oauth_scope</literal>) skips this initial
    -+        "discovery" phase, which may speed up certain custom OAuth flows.
    -+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
    -+        This parameter may also be set defensively, to prevent the backend
    -+        server from directing the client to arbitrary URLs.
    -+        <emphasis>However:</emphasis> if the client's issuer setting differs
    -+        from the server's expected issuer, the server is likely to reject the
    -+        issued token, and the connection will fail.
    -+       </para>
    -+      </listitem>
    -+     </varlistentry>
    -+
     +     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
     +      <term><literal>oauth_scope</literal></term>
     +      <listitem>
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +        identifiers. This parameter is optional and intended for advanced usage.
     +       </para>
     +       <para>
    -+        <!-- TODO: note to reviewer: the following is only partially implemented. -->
    -+        If neither <xref linkend="libpq-connect-oauth-issuer"/> nor
    -+        <literal>oauth_scope</literal> is specified, the client will obtain
    -+        appropriate scope settings from the
    -+        <productname>PostgreSQL</productname> server. Otherwise, the value of
    -+        this parameter will be used. Similarly to
    -+        <literal>oauth_issuer</literal>, if the client's scope setting does not
    -+        contain the server's required scopes, the server is likely to reject the
    -+        issued token, and the connection will fail.
    ++        Usually the client will obtain appropriate scope settings from the
    ++        <productname>PostgreSQL</productname> server. If this parameter is used,
    ++        the server's requested scope list will be ignored. This can prevent a
    ++        less-trusted server from requesting inappropriate access scopes from the
    ++        end user. However, if the client's scope setting does not contain the
    ++        server's required scopes, the server is likely to reject the issued
    ++        token, and the connection will fail.
     +       </para>
     +       <para>
     +        The meaning of an empty scope list is provider-dependent. An OAuth
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +   <primary>_PG_oauth_validator_module_init</primary>
     +  </indexterm>
     +  <para>
    -+   An OAuth validator module is loaded by dynamically loading a shared library
    -+   with the <xref linkend="guc-oauth-validator-library"/>'s name as the library
    -+   base name. The normal library search path is used to locate the library. To
    ++   An OAuth validator module is loaded by dynamically loading one of the shared
    ++   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
    ++   The normal library search path is used to locate the library. To
     +   provide the validator callbacks and to indicate that the library is an OAuth
     +   validator module a function named
     +   <function>_PG_oauth_validator_module_init</function> must be provided. The
    @@ src/backend/libpq/auth-oauth.c (new)
     +#include "storage/fd.h"
     +#include "storage/ipc.h"
     +#include "utils/json.h"
    ++#include "utils/varlena.h"
     +
     +/* GUC */
    -+char	   *OAuthValidatorLibrary = "";
    ++char	   *oauth_validator_libraries_string = NULL;
     +
     +static void oauth_get_mechanisms(Port *port, StringInfo buf);
     +static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
     +static int	oauth_exchange(void *opaq, const char *input, int inputlen,
     +						   char **output, int *outputlen, const char **logdetail);
     +
    -+static void load_validator_library(void);
    ++static void load_validator_library(const char *libname);
     +static void shutdown_validator_library(int code, Datum arg);
     +
     +static ValidatorModuleState *validator_module_state;
    @@ src/backend/libpq/auth-oauth.c (new)
     +	ctx->issuer = port->hba->oauth_issuer;
     +	ctx->scope = port->hba->oauth_scope;
     +
    -+	load_validator_library();
    ++	load_validator_library(port->hba->oauth_validator);
     +
     +	return ctx;
     +}
    @@ src/backend/libpq/auth-oauth.c (new)
     +				errmsg("OAuth is not properly configured for this user"),
     +				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
     +
    -+	/*------
    -+	 * Build the .well-known URI based on our issuer.
    -+	 * TODO: RFC 8414 defines a competing well-known URI, so we'll probably
    -+	 * have to make this configurable too.
    ++	/*
    ++	 * Build a default .well-known URI based on our issuer, unless the HBA has
    ++	 * already provided one.
     +	 */
     +	initStringInfo(&issuer);
     +	appendStringInfoString(&issuer, ctx->issuer);
    -+	appendStringInfoString(&issuer, "/.well-known/openid-configuration");
    ++	if (strstr(ctx->issuer, "/.well-known/") == NULL)
    ++		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
     +
     +	initStringInfo(&buf);
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     + * since token validation won't be possible.
     + */
     +static void
    -+load_validator_library(void)
    ++load_validator_library(const char *libname)
     +{
     +	OAuthValidatorModuleInit validator_init;
     +
    -+	if (OAuthValidatorLibrary[0] == '\0')
    -+		ereport(ERROR,
    -+				errcode(ERRCODE_INVALID_PARAMETER_VALUE),
    -+				errmsg("oauth_validator_library is not set"));
    ++	Assert(libname && *libname);
     +
     +	validator_init = (OAuthValidatorModuleInit)
    -+		load_external_function(OAuthValidatorLibrary,
    -+							   "_PG_oauth_validator_module_init", false, NULL);
    ++		load_external_function(libname, "_PG_oauth_validator_module_init",
    ++							   false, NULL);
     +
     +	/*
     +	 * The validator init function is required since it will set the callbacks
    @@ src/backend/libpq/auth-oauth.c (new)
     +	 */
     +	if (validator_init == NULL)
     +		ereport(ERROR,
    -+				errmsg("%s modules \"%s\" have to define the symbol %s",
    -+					   "OAuth validator", OAuthValidatorLibrary, "_PG_oauth_validator_module_init"));
    ++				errmsg("%s module \"%s\" must define the symbol %s",
    ++					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
     +
     +	ValidatorCallbacks = (*validator_init) ();
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +{
     +	if (ValidatorCallbacks->shutdown_cb != NULL)
     +		ValidatorCallbacks->shutdown_cb(validator_module_state);
    ++}
    ++
    ++/*
    ++ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
    ++ *
    ++ * If the validator is currently unset and exactly one library is declared in
    ++ * oauth_validator_libraries, then that library will be used as the validator.
    ++ * Otherwise the name must be present in the list of oauth_validator_libraries.
    ++ */
    ++bool
    ++check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
    ++{
    ++	int			line_num = hbaline->linenumber;
    ++	char	   *file_name = hbaline->sourcefile;
    ++	char	   *rawstring;
    ++	List	   *elemlist = NIL;
    ++	ListCell   *l;
    ++
    ++	*err_msg = NULL;
    ++
    ++	if (oauth_validator_libraries_string[0] == '\0')
    ++	{
    ++		ereport(elevel,
    ++				errcode(ERRCODE_CONFIG_FILE_ERROR),
    ++				errmsg("oauth_validator_libraries must be set for authentication method %s",
    ++					   "oauth"),
    ++				errcontext("line %d of configuration file \"%s\"",
    ++						   line_num, file_name));
    ++		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
    ++							"oauth");
    ++		return false;
    ++	}
    ++
    ++	/* SplitDirectoriesString needs a modifiable copy */
    ++	rawstring = pstrdup(oauth_validator_libraries_string);
    ++
    ++	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
    ++	{
    ++		/* syntax error in list */
    ++		ereport(elevel,
    ++				errcode(ERRCODE_CONFIG_FILE_ERROR),
    ++				errmsg("invalid list syntax in parameter \"%s\"",
    ++					   "oauth_validator_libraries"));
    ++		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
    ++							"oauth_validator_libraries");
    ++		goto done;
    ++	}
    ++
    ++	if (!hbaline->oauth_validator)
    ++	{
    ++		if (elemlist->length == 1)
    ++		{
    ++			hbaline->oauth_validator = pstrdup(linitial(elemlist));
    ++			goto done;
    ++		}
    ++
    ++		ereport(elevel,
    ++				errcode(ERRCODE_CONFIG_FILE_ERROR),
    ++				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
    ++				errcontext("line %d of configuration file \"%s\"",
    ++						   line_num, file_name));
    ++		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
    ++		goto done;
    ++	}
    ++
    ++	foreach(l, elemlist)
    ++	{
    ++		char	   *allowed = lfirst(l);
    ++
    ++		if (strcmp(allowed, hbaline->oauth_validator) == 0)
    ++			goto done;
    ++	}
    ++
    ++	ereport(elevel,
    ++			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
    ++			errmsg("validator \"%s\" is not permitted by %s",
    ++				   hbaline->oauth_validator, "oauth_validator_libraries"),
    ++			errcontext("line %d of configuration file \"%s\"",
    ++					   line_num, file_name));
    ++	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
    ++						hbaline->oauth_validator, "oauth_validator_libraries");
    ++
    ++done:
    ++	list_free_deep(elemlist);
    ++	pfree(rawstring);
    ++
    ++	return (*err_msg == NULL);
     +}
     
      ## src/backend/libpq/auth.c ##
    @@ src/backend/libpq/auth.c: ClientAuthentication(Port *port)
      	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
     
      ## src/backend/libpq/hba.c ##
    +@@
    + #include "libpq/hba.h"
    + #include "libpq/ifaddr.h"
    + #include "libpq/libpq-be.h"
    ++#include "libpq/oauth.h"
    + #include "postmaster/postmaster.h"
    + #include "regex/regex.h"
    + #include "replication/walsender.h"
     @@ src/backend/libpq/hba.c: static const char *const UserAuthName[] =
      	"ldap",
      	"cert",
    @@ src/backend/libpq/hba.c: parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
     +		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
     +		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
     +
    ++		/* Ensure a validator library is set and permitted by the config. */
    ++		if (!check_oauth_validator(parsedline, elevel, err_msg))
    ++			return NULL;
    ++
     +		/*
     +		 * Supplying a usermap combined with the option to skip usermapping is
     +		 * nonsensical and indicates a configuration error.
    @@ src/backend/libpq/hba.c: parse_hba_auth_opt(char *name, char *val, HbaLine *hbal
     +		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
     +		hbaline->oauth_scope = pstrdup(val);
     +	}
    ++	else if (strcmp(name, "validator") == 0)
    ++	{
    ++		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
    ++		hbaline->oauth_validator = pstrdup(val);
    ++	}
     +	else if (strcmp(name, "trust_validator_authz") == 0)
     +	{
     +		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
    @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[]
      	},
      
     +	{
    -+		{"oauth_validator_library", PGC_SIGHUP, CONN_AUTH_AUTH,
    -+			gettext_noop("Sets the library that will be called to validate OAuth v2 bearer tokens."),
    ++		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
    ++			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
     +			NULL,
    -+			GUC_SUPERUSER_ONLY | GUC_NOT_IN_SAMPLE
    ++			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
     +		},
    -+		&OAuthValidatorLibrary,
    ++		&oauth_validator_libraries_string,
     +		"",
     +		NULL, NULL, NULL
     +	},
    @@ src/backend/utils/misc/guc_tables.c: struct config_string ConfigureNamesString[]
      	{
      		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
     
    + ## src/backend/utils/misc/postgresql.conf.sample ##
    +@@
    + #ssl_passphrase_command = ''
    + #ssl_passphrase_command_supports_reload = off
    + 
    ++# OAuth
    ++#oauth_validator_libraries = ''
    ++
    + 
    + #------------------------------------------------------------------------------
    + # RESOURCE USAGE (except WAL)
    +
      ## src/include/common/oauth-common.h (new) ##
     @@
     +/*-------------------------------------------------------------------------
    @@ src/include/libpq/hba.h: typedef struct HbaLine
      	char	   *radiusports_s;
     +	char	   *oauth_issuer;
     +	char	   *oauth_scope;
    ++	char	   *oauth_validator;
     +	bool		oauth_skip_usermap;
      } HbaLine;
      
    @@ src/include/libpq/oauth.h (new)
     +#include "libpq/libpq-be.h"
     +#include "libpq/sasl.h"
     +
    -+extern PGDLLIMPORT char *OAuthValidatorLibrary;
    ++extern PGDLLIMPORT char *oauth_validator_libraries_string;
     +
     +typedef struct ValidatorModuleState
     +{
    @@ src/include/libpq/oauth.h (new)
     +/* Implementation */
     +extern const pg_be_sasl_mech pg_be_oauth_mech;
     +
    ++/*
    ++ * Ensure a validator named in the HBA is permitted by the configuration.
    ++ */
    ++extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
    ++
     +#endif							/* PG_OAUTH_H */
     
      ## src/include/pg_config.h.in ##
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		CHECK_SETOPT(actx, popt, protos, return false);
     +	}
     +
    ++	/* TODO: would anyone use this in "real" situations, or just testing? */
    ++	if (actx->debugging)
    ++	{
    ++		const char *env;
    ++
    ++		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
    ++			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
    ++	}
    ++
     +	/*
     +	 * Suppress the Accept header to make our request as minimal as possible.
     +	 * (Ideally we would set it to "application/json" instead, but OpenID is
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (actx->running)
     +	{
    -+		/*
    ++		/*---
     +		 * There's an async request in progress. Pump the multi handle.
     +		 *
    -+		 * TODO: curl_multi_socket_all() is deprecated, presumably because
    -+		 * it's inefficient and pointless if your event loop has already
    -+		 * handed you the exact sockets that are ready. But that's not our use
    -+		 * case -- our client has no way to tell us which sockets are ready.
    -+		 * (They don't even know there are sockets to begin with.)
    ++		 * curl_multi_socket_all() is officially deprecated, because it's
    ++		 * inefficient and pointless if your event loop has already handed you
    ++		 * the exact sockets that are ready. But that's not our use case --
    ++		 * our client has no way to tell us which sockets are ready. (They
    ++		 * don't even know there are sockets to begin with.)
     +		 *
     +		 * We can grab the list of triggered events from the multiplexer
     +		 * ourselves, but that's effectively what curl_multi_socket_all() is
    -+		 * going to do... so it appears to be exactly the API we need.
    ++		 * going to do. And there are currently no plans for the Curl project
    ++		 * to remove or break this API, so ignore the deprecation. See
    ++		 *
    ++		 *    https://curl.se/mail/lib-2024-11/0028.html
     +		 *
    -+		 * Ignore the deprecation for now. This needs a followup on
    -+		 * curl-library@, to make sure we're not shooting ourselves in the
    -+		 * foot in some other way.
     +		 */
     +		CURL_IGNORE_DEPRECATION(
     +			err = curl_multi_socket_all(actx->curlm, &actx->running);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +}
     +
    ++/*
    ++ * Ensure that the discovery document is provided by the expected issuer.
    ++ * Currently, issuers are statically configured in the connection string.
    ++ */
    ++static bool
    ++check_issuer(struct async_ctx *actx, PGconn *conn)
    ++{
    ++	const struct provider *provider = &actx->provider;
    ++
    ++	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
    ++	Assert(provider->issuer);	/* ensured by parse_provider() */
    ++
    ++	/*---
    ++	 * We require strict equality for issuer identifiers -- no path or case
    ++	 * normalization, no substitution of default ports and schemes, etc. This
    ++	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
    ++	 * validation:
    ++	 *
    ++	 *    The issuer value returned MUST be identical to the Issuer URL that
    ++	 *    was used as the prefix to /.well-known/openid-configuration to
    ++	 *    retrieve the configuration information.
    ++	 *
    ++	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
    ++	 *
    ++	 *    Clients MUST then [...] compare the result to the issuer identifier
    ++	 *    of the authorization server where the authorization request was
    ++	 *    sent to. This comparison MUST use simple string comparison as defined
    ++	 *    in Section 6.2.1 of [RFC3986].
    ++	 *
    ++	 * TODO: Encoding support?
    ++	 */
    ++	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
    ++	{
    ++		actx_error(actx,
    ++				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
    ++				   provider->issuer, conn->oauth_issuer_id);
    ++		return false;
    ++	}
    ++
    ++	return true;
    ++}
    ++
     +#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	const struct curl_slist *grant;
     +	bool		device_grant_found = false;
     +
    -+	Assert(provider->issuer);	/* ensured by get_discovery_document() */
    ++	Assert(provider->issuer);	/* ensured by parse_provider() */
     +
     +	/*------
     +	 * First, sanity checks for discovery contents that are OPTIONAL in the
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
     +	PQExpBuffer work_buffer = &actx->work_data;
     +
    -+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
    ++	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
     +	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
     +
     +	/* Construct our request body. */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	const char *device_code = actx->authz.device_code;
     +	PQExpBuffer work_buffer = &actx->work_data;
     +
    -+	Assert(conn->oauth_client_id);	/* ensured by get_auth_token() */
    -+	Assert(token_uri);			/* ensured by get_discovery_document() */
    -+	Assert(device_code);		/* ensured by run_device_authz() */
    ++	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
    ++	Assert(token_uri);			/* ensured by parse_provider() */
    ++	Assert(device_code);		/* ensured by parse_device_authz() */
     +
     +	/* Construct our request body. */
     +	resetPQExpBuffer(work_buffer);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (!state->async_ctx)
     +	{
    -+		const char *env;
    -+
     +		/*
     +		 * Create our asynchronous state, and hook it into the upper-level
     +		 * OAuth state immediately, so any failures below won't leak the
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +#endif
     +
     +		/* Should we enable unsafe features? */
    -+		env = getenv("PGOAUTHDEBUG");
    -+		if (env && strcmp(env, "UNSAFE") == 0)
    -+			actx->debugging = true;
    ++		actx->debugging = oauth_unsafe_debugging_enabled();
     +
     +		state->async_ctx = actx;
     +		state->free_async_ctx = free_curl_async_ctx;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				if (!finish_discovery(actx))
     +					goto error_return;
     +
    -+				/* TODO: check issuer */
    ++				if (!check_issuer(actx, conn))
    ++					goto error_return;
     +
     +				actx->errctx = "cannot run OAuth device authorization";
     +				if (!check_for_device_flow(actx))
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			 * TODO: users shouldn't see this; what action should they take if
     +			 * they do?
     +			 */
    -+			libpq_append_conn_error(conn, "no OAuth token was set for the connection");
    ++			libpq_append_conn_error(conn,
    ++									"no OAuth token was set for the connection");
     +			return NULL;
     +		}
     +	}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
     +}
     +
    ++#define HTTPS_SCHEME "https://"
    ++#define HTTP_SCHEME "http://"
    ++
    ++/* We support both well-known suffixes defined by RFC 8414. */
    ++#define WK_PREFIX "/.well-known/"
    ++#define OPENID_WK_SUFFIX "openid-configuration"
    ++#define OAUTH_WK_SUFFIX "oauth-authorization-server"
    ++
    ++/*
    ++ * Derives an issuer identifier from one of our recognized .well-known URIs,
    ++ * using the rules in RFC 8414.
    ++ */
    ++static char *
    ++issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
    ++{
    ++	const char *authority_start = NULL;
    ++	const char *wk_start;
    ++	const char *wk_end;
    ++	char	   *issuer;
    ++	ptrdiff_t	start_offset,
    ++				end_offset;
    ++	size_t		end_len;
    ++
    ++	/*
    ++	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
    ++	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
    ++	 * level (but issuer identifier comparison at the level above this is
    ++	 * case-sensitive, so in practice it's probably moot).
    ++	 */
    ++	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
    ++		authority_start = wkuri + strlen(HTTPS_SCHEME);
    ++
    ++	if (!authority_start
    ++		&& oauth_unsafe_debugging_enabled()
    ++		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
    ++	{
    ++		/* Allow http:// for testing only. */
    ++		authority_start = wkuri + strlen(HTTP_SCHEME);
    ++	}
    ++
    ++	if (!authority_start)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"OAuth discovery URI \"%s\" must use HTTPS",
    ++								wkuri);
    ++		return NULL;
    ++	}
    ++
    ++	/*
    ++	 * Well-known URIs in general may support queries and fragments, but the
    ++	 * two types we support here do not. (They must be constructed from the
    ++	 * components of issuer identifiers, which themselves may not contain any
    ++	 * queries or fragments.)
    ++	 *
    ++	 * It's important to check this first, to avoid getting tricked later by a
    ++	 * prefix buried inside a query or fragment.
    ++	 */
    ++	if (strpbrk(authority_start, "?#") != NULL)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"OAuth discovery URI \"%s\" must not contain query or fragment components",
    ++								wkuri);
    ++		return NULL;
    ++	}
    ++
    ++	/*
    ++	 * Find the start of the .well-known prefix. IETF rules state this must be
    ++	 * at the beginning of the path component, but OIDC defined it at the end
    ++	 * instead, so we have to search for it anywhere.
    ++	 */
    ++	wk_start = strstr(authority_start, WK_PREFIX);
    ++	if (!wk_start)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"OAuth discovery URI \"%s\" is not a .well-known URI",
    ++								wkuri);
    ++		return NULL;
    ++	}
    ++
    ++	/*
    ++	 * Now find the suffix type. We only support the two defined in OIDC
    ++	 * Discovery 1.0 and RFC 8414.
    ++	 */
    ++	wk_end = wk_start + strlen(WK_PREFIX);
    ++
    ++	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
    ++		wk_end += strlen(OPENID_WK_SUFFIX);
    ++	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
    ++		wk_end += strlen(OAUTH_WK_SUFFIX);
    ++	else
    ++		wk_end = NULL;
    ++
    ++	/*
    ++	 * Even if there's a match, we still need to check to make sure the suffix
    ++	 * takes up the entire path segment, to weed out constructions like
    ++	 * "/.well-known/openid-configuration-bad".
    ++	 */
    ++	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
    ++								wkuri);
    ++		return NULL;
    ++	}
    ++
    ++	/*
    ++	 * Finally, make sure the .well-known components are provided either as a
    ++	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
    ++	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
    ++	 * to claim association with "https://localhost/a/b".
    ++	 */
    ++	if (*wk_end != '\0')
    ++	{
    ++		/*
     ++		 * It's not at the end, so it's required to be at the beginning of
     ++		 * the path. Find the starting slash.
    ++		 */
    ++		const char *path_start;
    ++
    ++		path_start = strchr(authority_start, '/');
    ++		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
    ++
    ++		if (wk_start != path_start)
    ++		{
    ++			libpq_append_conn_error(conn,
    ++									"OAuth discovery URI \"%s\" uses an invalid format",
    ++									wkuri);
    ++			return NULL;
    ++		}
    ++	}
    ++
    ++	/* Checks passed! Now build the issuer. */
    ++	issuer = strdup(wkuri);
    ++	if (!issuer)
    ++	{
    ++		libpq_append_conn_error(conn, "out of memory");
    ++		return NULL;
    ++	}
    ++
    ++	/*
    ++	 * The .well-known components are from [wk_start, wk_end). Remove those to
    ++	 * form the issuer ID, by shifting the path suffix (which may be empty)
    ++	 * leftwards.
    ++	 */
    ++	start_offset = wk_start - wkuri;
    ++	end_offset = wk_end - wkuri;
     ++	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
    ++
    ++	memmove(issuer + start_offset, issuer + end_offset, end_len);
    ++
    ++	return issuer;
    ++}
    ++
     +static bool
     +handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	/* Sanity check. */
     +	if (strlen(msg) != msglen)
     +	{
    -+		appendPQExpBufferStr(&conn->errorMessage,
    -+							 libpq_gettext("server's error message contained an embedded NULL, and was discarded"));
    ++		libpq_append_conn_error(conn,
    ++								"server's error message contained an embedded NULL, and was discarded");
     +		return false;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		errmsg = json_errdetail(err, &lex);
     +
     +	if (errmsg)
    -+		appendPQExpBuffer(&conn->errorMessage,
    -+						  libpq_gettext("failed to parse server's error response: %s"),
    -+						  errmsg);
    ++		libpq_append_conn_error(conn,
    ++								"failed to parse server's error response: %s",
    ++								errmsg);
     +
     +	/* Don't need the error buffer or the JSON lexer anymore. */
     +	termPQExpBuffer(&ctx.errbuf);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		return false;
     +
     +	/* TODO: what if these override what the user already specified? */
    ++	/* TODO: what if there's no discovery URI? */
     +	if (ctx.discovery_uri)
     +	{
    ++		char	   *discovery_issuer;
    ++
    ++		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
    ++		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
    ++		if (!discovery_issuer)
    ++			return false;		/* error message already set */
    ++
    ++		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
    ++		{
    ++			libpq_append_conn_error(conn,
    ++									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
    ++									ctx.discovery_uri, discovery_issuer,
    ++									conn->oauth_issuer_id);
    ++
    ++			free(discovery_issuer);
    ++			return false;
    ++		}
    ++
    ++		free(discovery_issuer);
    ++
     +		if (conn->oauth_discovery_uri)
     +			free(conn->oauth_discovery_uri);
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	if (!ctx.status)
     +	{
    -+		appendPQExpBuffer(&conn->errorMessage,
    -+						  libpq_gettext("server sent error response without a status"));
    ++		libpq_append_conn_error(conn,
    ++								"server sent error response without a status");
     +		return false;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	if (!request->async)
     +	{
    -+		libpq_append_conn_error(conn, "user-defined OAuth flow provided neither a token nor an async callback");
    ++		libpq_append_conn_error(conn,
    ++								"user-defined OAuth flow provided neither a token nor an async callback");
     +		return PGRES_POLLING_FAILED;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		 */
     +		if (!request->token)
     +		{
    -+			libpq_append_conn_error(conn, "user-defined OAuth flow did not provide a token");
    ++			libpq_append_conn_error(conn,
    ++									"user-defined OAuth flow did not provide a token");
     +			return PGRES_POLLING_FAILED;
     +		}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return false;
     +}
     +
    ++/*
    ++ * Fill in our issuer identifier and discovery URI, if possible, using the
    ++ * connection parameters. If conn->oauth_discovery_uri can't be populated in
    ++ * this function, it will be requested from the server.
    ++ */
     +static bool
    -+derive_discovery_uri(PGconn *conn)
    ++setup_oauth_parameters(PGconn *conn)
     +{
    -+	PQExpBufferData discovery_buf;
    -+
    -+	if (conn->oauth_discovery_uri || !conn->oauth_issuer)
    ++	/*---
    ++	 * To talk to a server, we require the user to provide issuer and client
    ++	 * identifiers.
    ++	 *
    ++	 * While it's possible for an OAuth client to support multiple issuers, it
    ++	 * requires additional effort to make sure the flows in use are safe -- to
    ++	 * quote RFC 9207,
    ++	 *
    ++	 *     OAuth clients that interact with only one authorization server are
    ++	 *     not vulnerable to mix-up attacks. However, when such clients decide
    ++	 *     to add support for a second authorization server in the future, they
    ++	 *     become vulnerable and need to apply countermeasures to mix-up
    ++	 *     attacks.
    ++	 *
    ++	 * For now, we allow only one.
    ++	 */
    ++	if (!conn->oauth_issuer || !conn->oauth_client_id)
     +	{
    -+		/*
    -+		 * Either we already have one, or we aren't able to derive one
    -+		 * ourselves. The latter case is not an error condition; we'll just
    -+		 * ask the server to provide one for us.
    -+		 */
    -+		return true;
    ++		libpq_append_conn_error(conn,
    ++								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
    ++		return false;
     +	}
     +
    -+	initPQExpBuffer(&discovery_buf);
    -+
    -+	Assert(!conn->oauth_discovery_uri);
    -+	Assert(conn->oauth_issuer);
    -+
     +	/*
    -+	 * If we don't yet have a discovery URI, but the user gave us an explicit
    -+	 * issuer, use the .well-known discovery URI for that issuer.
    ++	 * oauth_issuer is interpreted differently if it's a well-known discovery
    ++	 * URI rather than just an issuer identifier.
     +	 */
    -+	appendPQExpBufferStr(&discovery_buf, conn->oauth_issuer);
    -+	appendPQExpBufferStr(&discovery_buf, "/.well-known/openid-configuration");
    -+
    -+	if (PQExpBufferDataBroken(discovery_buf))
    -+		goto cleanup;
    ++	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
    ++	{
    ++		/*
    ++		 * Convert the URI back to an issuer identifier. (This also performs
    ++		 * validation of the URI format.)
    ++		 */
    ++		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
    ++														   conn->oauth_issuer);
    ++		if (!conn->oauth_issuer_id)
    ++			return false;		/* error message already set */
     +
    -+	conn->oauth_discovery_uri = strdup(discovery_buf.data);
    ++		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
    ++		if (!conn->oauth_discovery_uri)
    ++		{
    ++			libpq_append_conn_error(conn, "out of memory");
    ++			return false;
    ++		}
    ++	}
    ++	else
    ++	{
    ++		/*
    ++		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
    ++		 * for the discovery URI.
    ++		 */
    ++		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
    ++		if (!conn->oauth_issuer_id)
    ++		{
    ++			libpq_append_conn_error(conn, "out of memory");
    ++			return false;
    ++		}
    ++	}
     +
    -+cleanup:
    -+	termPQExpBuffer(&discovery_buf);
    -+	return (conn->oauth_discovery_uri != NULL);
    ++	return true;
     +}
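(For illustration only: the dispatch in setup_oauth_parameters() boils down to one classification of oauth_issuer. The enum and function below are hypothetical names, not part of the patch; they just show the branch taken for each form of the parameter.)

```c
#include <string.h>

/*
 * Hypothetical sketch: oauth_issuer containing "/.well-known/" is treated as
 * a discovery URI (the issuer identifier is derived from it); anything else
 * is an issuer identifier (the discovery URI is requested from the server).
 */
typedef enum
{
	ISSUER_IS_DISCOVERY_URI,	/* derive issuer id; copy URI verbatim */
	ISSUER_IS_IDENTIFIER		/* copy issuer id; ask server for URI */
} SketchIssuerKind;

static SketchIssuerKind
sketch_classify_oauth_issuer(const char *oauth_issuer)
{
	return strstr(oauth_issuer, "/.well-known/") != NULL
		? ISSUER_IS_DISCOVERY_URI
		: ISSUER_IS_IDENTIFIER;
}
```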
     +
     +static SASLStatus
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			/* We begin in the initial response phase. */
     +			Assert(inputlen == -1);
     +
    -+			if (!derive_discovery_uri(conn))
    ++			if (!setup_oauth_parameters(conn))
     +				return SASL_FAILED;
     +
     +			if (conn->oauth_discovery_uri)
     +			{
    -+				if (!conn->oauth_client_id)
    -+				{
    -+					/* We can't talk to a server without a client identifier. */
    -+					libpq_append_conn_error(conn, "no oauth_client_id is set for the connection");
    -+					return SASL_FAILED;
    -+				}
    -+
     +				/*
     +				 * Decide whether we're using a user-provided OAuth flow, or
     +				 * the one we have built in.
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			 * successfully after telling us it was going to fail. Neither is
     +			 * acceptable.
     +			 */
    -+			appendPQExpBufferStr(&conn->errorMessage,
    -+								 libpq_gettext("server sent additional OAuth data after error\n"));
    ++			libpq_append_conn_error(conn,
    ++									"server sent additional OAuth data after error");
     +			return SASL_FAILED;
     +
     +		default:
    -+			appendPQExpBufferStr(&conn->errorMessage,
    -+								 libpq_gettext("invalid OAuth exchange state\n"));
    ++			libpq_append_conn_error(conn, "invalid OAuth exchange state");
     +			break;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		state->free_async_ctx(state->conn, state->async_ctx);
     +
     +	free(state);
    ++}
    ++
    ++/*
    ++ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
    ++ */
    ++bool
    ++oauth_unsafe_debugging_enabled(void)
    ++{
    ++	const char *env = getenv("PGOAUTHDEBUG");
    ++
    ++	return (env && strcmp(env, "UNSAFE") == 0);
     +}
     
      ## src/interfaces/libpq/fe-auth-oauth.h (new) ##
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +} fe_oauth_state;
     +
     +extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
    ++extern bool oauth_unsafe_debugging_enabled(void);
     +
     +#endif							/* FE_AUTH_OAUTH_H */
     
    @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
      	free(conn->target_session_attrs);
      	free(conn->load_balance_hosts);
     +	free(conn->oauth_issuer);
    ++	free(conn->oauth_issuer_id);
     +	free(conn->oauth_discovery_uri);
     +	free(conn->oauth_client_id);
     +	free(conn->oauth_client_secret);
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      								 * connection that's used for queries */
      
     +	/* OAuth v2 */
    -+	char	   *oauth_issuer;	/* token issuer URL */
    ++	char	   *oauth_issuer;	/* token issuer/URL */
    ++	char	   *oauth_issuer_id;	/* token issuer identifier */
     +	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
     +										 * document */
     +	char	   *oauth_client_id;	/* client identifier */
    @@ src/test/modules/oauth_validator/Makefile (new)
     +#
     +#-------------------------------------------------------------------------
     +
    -+MODULES = validator
    ++MODULES = validator fail_validator
     +PGFILEDESC = "validator - test OAuth validator module"
     +
     +PROGRAM = oauth_hook_client
    @@ src/test/modules/oauth_validator/Makefile (new)
     +
     +endif
     
    + ## src/test/modules/oauth_validator/fail_validator.c (new) ##
    +@@
    ++/*-------------------------------------------------------------------------
    ++ *
    ++ * fail_validator.c
     ++ *	  Test module for server-side OAuth token validation callbacks that
     ++ *	  always fails
    ++ *
    ++ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
    ++ * Portions Copyright (c) 1994, Regents of the University of California
    ++ *
    ++ * src/test/modules/oauth_validator/fail_validator.c
    ++ *
    ++ *-------------------------------------------------------------------------
    ++ */
    ++
    ++#include "postgres.h"
    ++
    ++#include "fmgr.h"
    ++#include "libpq/oauth.h"
    ++#include "miscadmin.h"
    ++#include "utils/guc.h"
    ++#include "utils/memutils.h"
    ++
    ++PG_MODULE_MAGIC;
    ++
    ++static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
    ++										 const char *token,
    ++										 const char *role);
    ++
    ++static const OAuthValidatorCallbacks validator_callbacks = {
    ++	.validate_cb = fail_token,
    ++};
    ++
    ++const OAuthValidatorCallbacks *
    ++_PG_oauth_validator_module_init(void)
    ++{
    ++	return &validator_callbacks;
    ++}
    ++
    ++static ValidatorModuleResult *
    ++fail_token(ValidatorModuleState *state, const char *token, const char *role)
    ++{
    ++	elog(FATAL, "fail_validator: sentinel error");
    ++	pg_unreachable();
    ++}
    +
      ## src/test/modules/oauth_validator/meson.build (new) ##
     @@
     +# Copyright (c) 2024, PostgreSQL Global Development Group
    @@ src/test/modules/oauth_validator/meson.build (new)
     +)
     +test_install_libs += validator
     +
    ++fail_validator_sources = files(
    ++  'fail_validator.c',
    ++)
    ++
    ++if host_system == 'windows'
    ++  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
    ++    '--NAME', 'fail_validator',
    ++    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
    ++endif
    ++
    ++fail_validator = shared_module('fail_validator',
    ++  fail_validator_sources,
    ++  kwargs: pg_test_mod_args,
    ++)
    ++test_install_libs += fail_validator
    ++
     +oauth_hook_client_sources = files(
     +  'oauth_hook_client.c',
     +)
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +use strict;
     +use warnings FATAL => 'all';
     +
    -+use JSON::PP qw(encode_json);
    ++use JSON::PP     qw(encode_json);
     +use MIME::Base64 qw(encode_base64);
     +use PostgreSQL::Test::Cluster;
     +use PostgreSQL::Test::Utils;
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +my $node = PostgreSQL::Test::Cluster->new('primary');
     +$node->init;
     +$node->append_conf('postgresql.conf', "log_connections = on\n");
    -+$node->append_conf('postgresql.conf', "shared_preload_libraries = 'validator'\n");
    -+$node->append_conf('postgresql.conf', "oauth_validator_library = 'validator'\n");
    ++$node->append_conf('postgresql.conf',
    ++	"oauth_validator_libraries = 'validator'\n");
     +$node->start;
     +
     +$node->safe_psql('postgres', 'CREATE USER test;');
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +{
     +	my $exit_code = $?;
     +
    -+	$webserver->stop() if defined $webserver; # might have been SKIP'd
    ++	$webserver->stop() if defined $webserver;    # might have been SKIP'd
     +
     +	$? = $exit_code;
     +}
     +
     +my $port = $webserver->port();
    -+my $issuer = "127.0.0.1:$port";
    ++my $issuer = "http://localhost:$port";
     +
     +unlink($node->data_dir . '/pg_hba.conf');
    -+$node->append_conf('pg_hba.conf', qq{
    -+local all test      oauth issuer="$issuer"           scope="openid postgres"
    -+local all testalt   oauth issuer="$issuer/alternate" scope="openid postgres alt"
    -+local all testparam oauth issuer="$issuer/param"     scope="openid postgres"
    ++$node->append_conf(
    ++	'pg_hba.conf', qq{
    ++local all test      oauth issuer="$issuer"       scope="openid postgres"
    ++local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
    ++local all testparam oauth issuer="$issuer/param" scope="openid postgres"
     +});
     +$node->reload;
     +
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +
     +# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
     +# first, check to make sure the client refuses such connections by default.
    -+$node->connect_fails("user=test dbname=postgres oauth_client_id=f02c6361-0635",
    -+					 "HTTPS is required without debug mode",
    -+					 expected_stderr => qr/failed to fetch OpenID discovery document: Unsupported protocol/);
    ++$node->connect_fails(
    ++	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    ++	"HTTPS is required without debug mode",
    ++	expected_stderr =>
    ++	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
    ++);
     +
     +$ENV{PGOAUTHDEBUG} = "UNSAFE";
     +
     +my $user = "test";
    -+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0635", "connect",
    -+					  expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@))
    ++if ($node->connect_ok(
    ++		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    ++		"connect",
    ++		expected_stderr =>
    ++		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
     +{
     +	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+	$node->log_check("user $user: validator receives correct parameters", $log_start,
    -+					 log_like => [
    -+						 qr/oauth_validator: token="9243959234", role="$user"/,
    -+						 qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    -+					 ]);
    -+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
    -+					 log_like => [
    -+						 qr/connection authenticated: identity="test" method=oauth/,
    -+					 ]);
    ++	$node->log_check(
    ++		"user $user: validator receives correct parameters",
    ++		$log_start,
    ++		log_like => [
    ++			qr/oauth_validator: token="9243959234", role="$user"/,
    ++			qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    ++		]);
    ++	$node->log_check(
    ++		"user $user: validator sets authenticated identity",
    ++		$log_start,
    ++		log_like =>
    ++		  [ qr/connection authenticated: identity="test" method=oauth/, ]);
     +	$log_start = $log_end;
     +}
     +
    -+# The /alternate issuer uses slightly different parameters.
    ++# The /alternate issuer uses slightly different parameters, along with an
    ++# OAuth-style discovery document.
     +$user = "testalt";
    -+if ($node->connect_ok("user=$user dbname=postgres oauth_client_id=f02c6361-0636", "connect",
    -+					  expected_stderr => qr@Visit https://example\.org/ and enter the code: postgresuser@))
    ++if ($node->connect_ok(
    ++		"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
    ++		"connect",
    ++		expected_stderr =>
    ++		  qr@Visit https://example\.org/ and enter the code: postgresuser@))
     +{
     +	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+	$node->log_check("user $user: validator receives correct parameters", $log_start,
    -+					 log_like => [
    -+						 qr/oauth_validator: token="9243959234-alt", role="$user"/,
    -+						 qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
    -+					 ]);
    -+	$node->log_check("user $user: validator sets authenticated identity", $log_start,
    -+					 log_like => [
    -+						 qr/connection authenticated: identity="testalt" method=oauth/,
    -+					 ]);
    ++	$node->log_check(
    ++		"user $user: validator receives correct parameters",
    ++		$log_start,
    ++		log_like => [
    ++			qr/oauth_validator: token="9243959234-alt", role="$user"/,
    ++			qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
    ++		]);
    ++	$node->log_check(
    ++		"user $user: validator sets authenticated identity",
    ++		$log_start,
    ++		log_like =>
    ++		  [ qr/connection authenticated: identity="testalt" method=oauth/, ]);
     +	$log_start = $log_end;
     +}
     +
    ++# The issuer linked by the server must match the client's oauth_issuer setting.
    ++$node->connect_fails(
    ++	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
    ++	"oauth_issuer must match discovery",
    ++	expected_stderr =>
    ++	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
    ++);
    ++
     +# Make sure the client_id and secret are correctly encoded. $vschars contains
     +# every allowed character for a client_id/_secret (the "VSCHAR" class).
     +# $vschars_esc is additionally backslash-escaped for inclusion in a
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
     +
     +$node->connect_ok(
    -+	"user=$user dbname=postgres oauth_client_id='$vschars_esc'",
    ++	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
     +	"escapable characters: client_id",
     +	expected_stderr =>
     +	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
     +$node->connect_ok(
    -+	"user=$user dbname=postgres oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
    ++	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
     +	"escapable characters: client_id and secret",
     +	expected_stderr =>
     +	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +# oauth_client_id.
     +#
     +
    -+my $common_connstr = "user=testparam dbname=postgres ";
    ++my $common_connstr =
    ++  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
     +my $base_connstr = $common_connstr;
     +
     +sub connstr
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$node->connect_ok(
     +	connstr(),
     +	"connect to /param",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
     +$node->connect_ok(
     +	connstr(stage => 'token', retries => 1),
     +	"token retry",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +$node->connect_ok(
     +	connstr(stage => 'token', retries => 2),
     +	"token retry (twice)",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +$node->connect_ok(
     +	connstr(stage => 'all', retries => 1, interval => 2),
     +	"token retry (two second interval)",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +$node->connect_ok(
     +	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
     +	"token retry (default interval)",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
     +$node->connect_ok(
     +	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
     +	"content type with charset",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +$node->connect_ok(
    -+	connstr(stage => 'all', content_type => "application/json \t;\t charset=utf-8"),
    ++	connstr(
    ++		stage => 'all',
    ++		content_type => "application/json \t;\t charset=utf-8"),
     +	"content type with charset (whitespace)",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +$node->connect_ok(
     +	connstr(stage => 'device', uri_spelling => "verification_url"),
     +	"alternative spelling of verification_uri",
    -+	expected_stderr => qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+);
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
     +$node->connect_fails(
     +	connstr(stage => 'device', huge_response => JSON::PP::true),
     +	"bad device authz response: overlarge JSON",
    -+	expected_stderr => qr/failed to obtain device authorization: response is too large/
    -+);
    ++	expected_stderr =>
    ++	  qr/failed to obtain device authorization: response is too large/);
     +$node->connect_fails(
     +	connstr(stage => 'token', huge_response => JSON::PP::true),
     +	"bad token response: overlarge JSON",
    -+	expected_stderr => qr/failed to obtain access token: response is too large/
    -+);
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: response is too large/);
     +
     +$node->connect_fails(
     +	connstr(stage => 'device', content_type => 'text/plain'),
     +	"bad device authz response: wrong content type",
    -+	expected_stderr => qr/failed to parse device authorization: unexpected content type/
    -+);
    ++	expected_stderr =>
    ++	  qr/failed to parse device authorization: unexpected content type/);
     +$node->connect_fails(
     +	connstr(stage => 'token', content_type => 'text/plain'),
     +	"bad token response: wrong content type",
    -+	expected_stderr => qr/failed to parse access token response: unexpected content type/
    -+);
    ++	expected_stderr =>
    ++	  qr/failed to parse access token response: unexpected content type/);
     +$node->connect_fails(
     +	connstr(stage => 'token', content_type => 'application/jsonx'),
     +	"bad token response: wrong content type (correct prefix)",
    -+	expected_stderr => qr/failed to parse access token response: unexpected content type/
    -+);
    ++	expected_stderr =>
    ++	  qr/failed to parse access token response: unexpected content type/);
     +
     +$node->connect_fails(
    -+	connstr(stage => 'all', interval => ~0, retries => 1, retry_code => "slow_down"),
    ++	connstr(
    ++		stage => 'all',
    ++		interval => ~0,
    ++		retries => 1,
    ++		retry_code => "slow_down"),
     +	"bad token response: server overflows the device authz interval",
    -+	expected_stderr => qr/failed to obtain access token: slow_down interval overflow/
    -+);
    ++	expected_stderr =>
    ++	  qr/failed to obtain access token: slow_down interval overflow/);
     +
     +$node->connect_fails(
     +	connstr(stage => 'token', error_code => "invalid_grant"),
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +#
     +
     +# Searching the logs is easier if OAuth parameter discovery isn't cluttering
    -+# things up; hardcode the issuer. (Scope is hardcoded to empty to cover that
    -+# case as well.)
    ++# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
    ++# that case as well.)
     +$common_connstr =
    -+  "user=test dbname=postgres oauth_issuer=$issuer oauth_scope=''";
    ++  "user=test dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope=''";
     +
     +$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
     +$node->reload;
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$log_start =
     +  $node->wait_for_log(qr/reloading configuration files/, $log_start);
     +
    ++#
    ++# Test multiple validators.
    ++#
    ++
    ++$node->append_conf('postgresql.conf',
    ++	"oauth_validator_libraries = 'validator, fail_validator'\n");
    ++
    ++# With multiple validators, every HBA line must explicitly declare one.
    ++my $result = $node->restart(fail_ok => 1);
    ++is($result, 0,
    ++	'restart fails without explicit validators in oauth HBA entries');
    ++
    ++$log_start = $node->wait_for_log(
    ++	qr/authentication method "oauth" requires argument "validator" to be set/,
    ++	$log_start);
    ++
    ++unlink($node->data_dir . '/pg_hba.conf');
    ++$node->append_conf(
    ++	'pg_hba.conf', qq{
    ++local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
    ++local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
    ++});
    ++$node->restart;
    ++
    ++$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
    ++
    ++# The test user should work as before.
    ++$user = "test";
    ++if ($node->connect_ok(
    ++		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    ++		"validator is used for $user",
    ++		expected_stderr =>
    ++		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
    ++{
    ++	$log_start = $node->wait_for_log(qr/connection authorized/, $log_start);
    ++}
    ++
    ++# testalt should be routed through the fail_validator.
    ++$user = "testalt";
    ++$node->connect_fails(
    ++	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
    ++	"fail_validator is used for $user",
    ++	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
    ++
     +$node->stop;
     +
     +done_testing();
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +use MIME::Base64 qw(encode_base64);
     +use PostgreSQL::Test::Cluster;
     +use PostgreSQL::Test::Utils;
    -+use PostgreSQL::Test::OAuthServer;
     +use Test::More;
     +
     +if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +$node->init;
     +$node->append_conf('postgresql.conf', "log_connections = on\n");
     +$node->append_conf('postgresql.conf',
    -+	"oauth_validator_library = 'validator'\n");
    ++	"oauth_validator_libraries = 'validator'\n");
     +$node->start;
     +
     +$node->safe_psql('postgres', 'CREATE USER test;');
     +
    -+my $issuer = "https://127.0.0.1:54321";
    ++# These tests don't use the builtin flow, and we don't have an authorization
    ++# server running, so the address used here shouldn't matter. Use an invalid IP
    ++# address, so if there's some cascade of errors that causes the client to
    ++# attempt a connection, we'll fail noisily.
    ++my $issuer = "https://256.256.256.256";
     +my $scope = "openid postgres";
     +
     +unlink($node->data_dir . '/pg_hba.conf');
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +
     +my $user = "test";
     +my $base_connstr = $node->connstr() . " user=$user";
    -+my $common_connstr = "$base_connstr oauth_client_id=myID";
    ++my $common_connstr =
    ++  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
     +
     +sub test
     +{
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        """
     +        Switches the behavior of the provider depending on the issuer URI.
     +        """
    -+        self._alt_issuer = self.path.startswith("/alternate/")
    ++        self._alt_issuer = (
    ++            self.path.startswith("/alternate/")
    ++            or self.path == "/.well-known/oauth-authorization-server/alternate"
    ++        )
     +        self._parameterized = self.path.startswith("/param/")
     +
     +        if self._alt_issuer:
    -+            self.path = self.path.removeprefix("/alternate")
    ++            # The /alternate issuer uses IETF-style .well-known URIs.
    ++            if self.path.startswith("/.well-known/"):
    ++                self.path = self.path.removesuffix("/alternate")
    ++            else:
    ++                self.path = self.path.removeprefix("/alternate")
     +        elif self._parameterized:
     +            self.path = self.path.removeprefix("/param")
     +
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +        self._response_code = 200
     +        self._check_issuer()
     +
    -+        if self.path == "/.well-known/openid-configuration":
    ++        config_path = "/.well-known/openid-configuration"
    ++        if self._alt_issuer:
    ++            config_path = "/.well-known/oauth-authorization-server"
    ++
    ++        if self.path == config_path:
     +            resp = self.config()
     +        else:
     +            self.send_error(404, "Not Found")
2:  01df79980b ! 2:  3d169848db DO NOT MERGE: Add pytest suite for OAuth
    @@ .cirrus.tasks.yml: task:
        matrix:
          - name: Linux - Debian Bookworm - Autoconf
     @@ .cirrus.tasks.yml: task:
    - 
    -       # Also build & test in a 32bit build - it's gotten rare to test that
    -       # locally.
    +       # can easily provide some here by running one of the sets of tests that
    +       # way. Newer versions of python insist on changing the LC_CTYPE away
    +       # from C, prevent that with PYTHONCOERCECLOCALE.
     +      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
     +      # Python modules can't link against libpq.
    -       configure_32_script: |
    +       test_world_32_script: |
              su postgres <<-EOF
    -           export CC='ccache gcc -m32'
     +          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
    -           meson setup \
    -             --buildtype=debug \
    -             -Dcassert=true -Dinjection_points=true \
    +           ulimit -c unlimited
    +           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
    +         EOF
     
      ## meson.build ##
     @@ meson.build: else
    @@ meson.build: foreach test_dir : tests
     +        env.set(name, value)
     +      endforeach
     +
    -+      if get_option('PG_TEST_EXTRA').contains('python')
    -+        reqs = files(t['requirements'])
    -+        test('install_' + venv_name,
    -+          python,
    -+          args: [ make_venv, '--requirements', reqs, venv_path ],
    -+          env: env,
    -+          priority: setup_tests_priority - 1,  # must run after tmp_install
    -+          is_parallel: false,
    -+          suite: ['setup'],
    -+          timeout: 60,  # 30s is too short for the cryptography package compile
    -+        )
    -+      endif
    ++      reqs = files(t['requirements'])
    ++      test('install_' + venv_name,
    ++        python,
    ++        args: [ make_venv, '--requirements', reqs, venv_path ],
    ++        env: env,
    ++        priority: setup_tests_priority - 1,  # must run after tmp_install
    ++        is_parallel: false,
    ++        suite: ['setup'],
    ++        timeout: 60,  # 30s is too short for the cryptography package compile
    ++      )
     +
     +      test_group = test_dir['name']
     +      test_output = test_result_dir / test_group / kind
    @@ meson.build: foreach test_dir : tests
     +        'timeout': 1000,
     +        'depends': test_deps,
     +        'env': env,
    -+      }
    ++      } + t.get('test_kwargs', {})
     +
     +      if fs.is_dir(venv_path / 'Scripts')
     +        # Windows virtualenv layout
    @@ meson.build: foreach test_dir : tests
     +        testwrap_pytest = testwrap_base + [
     +          '--testgroup', test_group,
     +          '--testname', pyt_p,
    ++          '--skip-without-extra', 'python',
     +        ]
    -+        if not get_option('PG_TEST_EXTRA').contains('python')
    -+          testwrap_pytest += ['--skip', '"python" tests not enabled in PG_TEST_EXTRA']
    -+        endif
     +
     +        test(test_group / pyt_p,
     +          python,
    @@ src/test/python/client/conftest.py (new)
     +#
     +
     +import contextlib
    ++import datetime
    ++import functools
    ++import ipaddress
    ++import os
     +import socket
     +import sys
     +import threading
    @@ src/test/python/client/conftest.py (new)
     +import psycopg2
     +import psycopg2.extras
     +import pytest
    ++from cryptography import x509
    ++from cryptography.hazmat.primitives import hashes, serialization
    ++from cryptography.hazmat.primitives.asymmetric import rsa
    ++from cryptography.x509.oid import NameOID
     +
     +import pq3
     +
    @@ src/test/python/client/conftest.py (new)
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            yield conn
    ++
    ++
    ++@pytest.fixture(scope="session")
    ++def certpair(tmp_path_factory):
    ++    """
     ++    Returns a (cert, key) pair of file paths that can be used by a TLS server.
    ++    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
    ++    """
    ++
    ++    tmpdir = tmp_path_factory.mktemp("certs")
    ++    now = datetime.datetime.now(datetime.timezone.utc)
    ++
    ++    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
    ++    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ++
    ++    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
    ++    altNames = [
    ++        x509.DNSName("localhost"),
    ++        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
    ++        x509.IPAddress(ipaddress.IPv6Address("::1")),
    ++    ]
    ++    cert = (
    ++        x509.CertificateBuilder()
    ++        .subject_name(subject)
    ++        .issuer_name(issuer)
    ++        .public_key(key.public_key())
    ++        .serial_number(x509.random_serial_number())
    ++        .not_valid_before(now)
    ++        .not_valid_after(now + datetime.timedelta(minutes=10))
    ++        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    ++        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
    ++    ).sign(key, hashes.SHA256())
    ++
    ++    # Writing the key with mode 0600 lets us use this from the server side, too.
    ++    keypath = str(tmpdir / "key.pem")
    ++    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
    ++        f.write(
    ++            key.private_bytes(
    ++                encoding=serialization.Encoding.PEM,
    ++                format=serialization.PrivateFormat.PKCS8,
    ++                encryption_algorithm=serialization.NoEncryption(),
    ++            )
    ++        )
    ++
    ++    certpath = str(tmpdir / "cert.pem")
    ++    with open(certpath, "wb") as f:
    ++        f.write(cert.public_bytes(serialization.Encoding.PEM))
    ++
    ++    return certpath, keypath
     
      ## src/test/python/client/test_client.py (new) ##
     @@
    @@ src/test/python/client/test_oauth.py (new)
     +
     +import base64
     +import collections
    ++import contextlib
     +import ctypes
     +import http.server
     +import json
    @@ src/test/python/client/test_oauth.py (new)
     +import os
     +import platform
     +import secrets
    ++import socket
    ++import ssl
     +import sys
     +import threading
     +import time
    @@ src/test/python/client/test_oauth.py (new)
     +    )
     +
     +
    -+def xtest_oauth_success(conn):  # TODO
    -+    initial = start_oauth_handshake(conn)
    -+
    -+    auth = get_auth_value(initial)
    -+    assert auth.startswith(b"Bearer ")
    -+
    -+    # Accept the token. TODO actually validate
    -+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
    -+    finish_handshake(conn)
    -+
    -+
     +class RawResponse(str):
     +    """
     +    Returned by registered endpoint callbacks to take full control of the
    @@ src/test/python/client/test_oauth.py (new)
     +
     +class OpenIDProvider(threading.Thread):
     +    """
    -+    A thread that runs a mock OpenID provider server.
    ++    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
     +    """
     +
    -+    def __init__(self, *, port):
    ++    def __init__(self, ssl_socket):
     +        super().__init__()
     +
     +        self.exception = None
     +
    -+        addr = ("", port)
    -+        self.server = self._Server(addr, self._Handler)
    ++        _, port = ssl_socket.getsockname()
     +
    -+        # TODO: allow HTTPS only, somehow
     +        oauth = self._OAuthState()
     +        oauth.host = f"localhost:{port}"
    -+        oauth.issuer = f"http://localhost:{port}"
    ++        oauth.issuer = f"https://localhost:{port}"
     +
     +        # The following endpoints are required to be advertised by providers,
     +        # even though our chosen client implementation does not actually make
    @@ src/test/python/client/test_oauth.py (new)
     +        )
     +        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
     +
    ++        self.server = self._HTTPSServer(ssl_socket, self._Handler)
     +        self.server.oauth = oauth
     +
     +    def run(self):
    @@ src/test/python/client/test_oauth.py (new)
     +
     +            return 200, doc
     +
    -+    class _Server(http.server.HTTPServer):
    ++    class _HTTPSServer(http.server.HTTPServer):
    ++        def __init__(self, ssl_socket, handler_cls):
    ++            # Attach the SSL socket to the server. We don't bind/activate since
    ++            # the socket is already listening.
    ++            super().__init__(None, handler_cls, bind_and_activate=False)
    ++            self.socket = ssl_socket
    ++            self.server_address = self.socket.getsockname()
    ++
    ++        def shutdown_request(self, request):
    ++            # Cleanly unwrap the SSL socket before shutting down the connection;
    ++            # otherwise careful clients will complain about truncation.
    ++            try:
    ++                request = request.unwrap()
    ++            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
    ++                # The client already closed (or aborted) the connection without
    ++                # a clean shutdown. This is seen on some platforms during tests
    ++                # that break the HTTP protocol. Just return and have the server
    ++                # close the socket.
    ++                return
    ++
    ++            super().shutdown_request(request)
    ++
     +        def handle_error(self, request, addr):
     +            self.shutdown_request(request)
     +            raise
    @@ src/test/python/client/test_oauth.py (new)
     +            oauth = self.server.oauth
     +            assert self.headers["Host"] == oauth.host
     +
    ++            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
    ++            # to work around an open redirection vuln (gh-87389) in
    ++            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
    ++            # want to test repeating leading slashes, so that's not very
    ++            # helpful. Put them back.
    ++            orig_path = self.raw_requestline.split()[1]
    ++            orig_path = str(orig_path, "iso-8859-1")
    ++            assert orig_path.endswith(self.path)  # sanity check
    ++            self.path = orig_path
    ++
     +            if handler is None:
     +                handler = oauth.endpoint(self.command, self.path)
     +                assert (
    @@ src/test/python/client/test_oauth.py (new)
     +    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
     +
     +
    ++@pytest.fixture(autouse=True)
    ++def trust_certpair_in_client(monkeypatch, certpair):
    ++    """
    ++    Set a trusted CA file for OAuth client connections.
    ++    """
    ++    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
    ++
    ++
    ++@pytest.fixture(scope="session")
    ++def ssl_socket(certpair):
    ++    """
    ++    A listening server-side socket for SSL connections, using the certpair
    ++    fixture.
    ++    """
    ++    sock = socket.create_server(("", 0))
    ++
    ++    # The TLS connections we're making are incredibly sensitive to delayed ACKs
    ++    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
    ++    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    ++
    ++    with contextlib.closing(sock):
    ++        # Wrap the server socket for TLS.
    ++        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ++        ctx.load_cert_chain(*certpair)
    ++
    ++        yield ctx.wrap_socket(sock, server_side=True)
    ++
    ++
     +@pytest.fixture
    -+def openid_provider(unused_tcp_port_factory):
    ++def openid_provider(ssl_socket):
     +    """
     +    A fixture that returns the OAuth state of a running OpenID provider server. The
     +    server will be stopped when the fixture is torn down.
     +    """
    -+    thread = OpenIDProvider(port=unused_tcp_port_factory())
    ++    thread = OpenIDProvider(ssl_socket)
     +    thread.start()
     +
     +    try:
    @@ src/test/python/client/test_oauth.py (new)
     +        pytest.param(True, id="asynchronous"),
     +    ],
     +)
    -+def test_oauth_with_explicit_issuer(
    ++def test_oauth_with_explicit_discovery_uri(
     +    accept,
     +    openid_provider,
     +    asynchronous,
    @@ src/test/python/client/test_oauth.py (new)
     +    openid_provider.content_type = content_type
     +
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=client_id,
     +        oauth_client_secret=secret,
     +        oauth_scope=scope,
    @@ src/test/python/client/test_oauth.py (new)
     +            client.check_completed()
     +
     +
    ++@pytest.mark.parametrize(
    ++    "server_discovery",
    ++    [
    ++        pytest.param(True, id="server discovery"),
    ++        pytest.param(False, id="direct discovery"),
    ++    ],
    ++)
    ++@pytest.mark.parametrize(
    ++    "issuer, path",
    ++    [
    ++        pytest.param(
    ++            "{issuer}",
    ++            "/.well-known/oauth-authorization-server",
    ++            id="oauth",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/alt",
    ++            "/.well-known/oauth-authorization-server/alt",
    ++            id="oauth with path, IETF style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/alt",
    ++            "/alt/.well-known/oauth-authorization-server",
    ++            id="oauth with path, broken OIDC style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/alt",
    ++            "/alt/.well-known/openid-configuration",
    ++            id="openid with path, OIDC style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/alt",
    ++            "/.well-known/openid-configuration/alt",
    ++            id="openid with path, IETF style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/",
    ++            "//.well-known/openid-configuration",
    ++            id="empty path segment, OIDC style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/",
    ++            "/.well-known/openid-configuration/",
    ++            id="empty path segment, IETF style",
    ++        ),
    ++    ],
    ++)
    ++def test_alternate_well_known_paths(
    ++    accept, openid_provider, issuer, path, server_discovery
    ++):
    ++    issuer = issuer.format(issuer=openid_provider.issuer)
    ++    discovery_uri = openid_provider.issuer + path
    ++
    ++    client_id = secrets.token_hex()
    ++    access_token = secrets.token_urlsafe()
    ++
    ++    def discovery_handler(*args):
    ++        """
    ++        Pass-through implementation of the discovery handler. Modifies the
    ++        default document to contain this test's issuer identifier.
    ++        """
    ++        code, doc = openid_provider._default_discovery_handler(*args)
    ++        doc["issuer"] = issuer
    ++        return code, doc
    ++
    ++    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
    ++
    ++    def authorization_endpoint(headers, params):
    ++        resp = {
    ++            "device_code": "12345",
    ++            "user_code": "ABCDE",
    ++            "interval": 0,
    ++            "verification_url": "https://example.com/device",
    ++            "expires_in": 5,
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
    ++    )
    ++
    ++    def token_endpoint(headers, params):
    ++        # Successfully finish the request by sending the access bearer token.
    ++        resp = {
    ++            "access_token": access_token,
    ++            "token_type": "bearer",
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "token_endpoint", "POST", "/token", token_endpoint
    ++    )
    ++
    ++    kwargs = dict(oauth_client_id=client_id)
    ++    if server_discovery:
    ++        kwargs.update(oauth_issuer=issuer)
    ++    else:
    ++        kwargs.update(oauth_issuer=discovery_uri)
    ++
    ++    sock, client = accept(**kwargs)
    ++
    ++    if server_discovery:
    ++        with sock:
    ++            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++                initial = start_oauth_handshake(conn)
    ++
    ++                # For discovery, the client should send an empty auth header.
    ++                # See RFC 7628, Sec. 4.3.
    ++                auth = get_auth_value(initial)
    ++                assert auth == b""
    ++
    ++                # Always fail the discovery exchange.
    ++                fail_oauth_handshake(
    ++                    conn,
    ++                    {
    ++                        "status": "invalid_token",
    ++                        "openid-configuration": discovery_uri,
    ++                    },
    ++                )
    ++
    ++        # Expect the client to connect again.
    ++        sock, client = accept()
    ++
    ++    with sock:
    ++        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++            initial = start_oauth_handshake(conn)
    ++
    ++            # Validate the token.
    ++            auth = get_auth_value(initial)
    ++            assert auth == f"Bearer {access_token}".encode("ascii")
    ++
    ++            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
    ++            finish_handshake(conn)
    ++
    ++
    ++@pytest.mark.parametrize(
    ++    "server_discovery",
    ++    [
    ++        pytest.param(True, id="server discovery"),
    ++        pytest.param(False, id="direct discovery"),
    ++    ],
    ++)
    ++@pytest.mark.parametrize(
    ++    "issuer, path, expected_error",
    ++    [
    ++        pytest.param(
    ++            "{issuer}",
    ++            "/.well-known/oauth-authorization-server/",
    ++            None,
    ++            id="extra empty segment",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}",
    ++            "?/.well-known/oauth-authorization-server",
    ++            r'OAuth discovery URI ".*" must not contain query or fragment components',
    ++            id="query",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}",
    ++            "#/.well-known/oauth-authorization-server",
    ++            r'OAuth discovery URI ".*" must not contain query or fragment components',
    ++            id="fragment",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/sub/path",
    ++            "/sub/.well-known/oauth-authorization-server/path",
    ++            r'OAuth discovery URI ".*" uses an invalid format',
    ++            id="sandwiched prefix",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/path",
    ++            "/path/openid-configuration",
    ++            r'OAuth discovery URI ".*" is not a .well-known URI',
    ++            id="not .well-known",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}",
    ++            "https://.well-known/oauth-authorization-server",
    ++            r'OAuth discovery URI ".*" is not a .well-known URI',
    ++            id=".well-known prefix buried in the authority",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}",
    ++            "/.well-known/oauth-protected-resource",
    ++            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
    ++            id="unknown well-known suffix",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/path",
    ++            "/path/.well-known/openid-configuration-2",
    ++            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
    ++            id="unknown well-known suffix, OIDC style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/path",
    ++            "/.well-known/oauth-authorization-server-2/path",
    ++            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
    ++            id="unknown well-known suffix, IETF style",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}",
    ++            "file:///.well-known/oauth-authorization-server",
    ++            r'OAuth discovery URI ".*" must use HTTPS',
    ++            id="unsupported scheme",
    ++        ),
    ++    ],
    ++)
    ++def test_bad_well_known_paths(
    ++    accept, openid_provider, issuer, path, expected_error, server_discovery
    ++):
    ++    if not server_discovery and "/.well-known/" not in path:
    ++        # An oauth_issuer without a /.well-known/ path segment is just a normal
    ++        # issuer identifier, so this isn't an interesting test.
    ++        pytest.skip("not interesting: direct discovery requires .well-known")
    ++
    ++    issuer = issuer.format(issuer=openid_provider.issuer)
    ++    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
    ++
    ++    client_id = secrets.token_hex()
    ++
    ++    def discovery_handler(*args):
    ++        """
    ++        Pass-through implementation of the discovery handler. Modifies the
    ++        default document to contain this test's issuer identifier.
    ++        """
    ++        code, doc = openid_provider._default_discovery_handler(*args)
    ++        doc["issuer"] = issuer
    ++        return code, doc
    ++
    ++    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
    ++
    ++    def fail(*args):
    ++        """
    ++        No other endpoints should be contacted; fail if the client tries.
    ++        """
    ++        assert False, "endpoint unexpectedly called"
    ++
    ++    openid_provider.register_endpoint(
    ++        "device_authorization_endpoint", "POST", "/device", fail
    ++    )
    ++    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
    ++
    ++    kwargs = dict(oauth_client_id=client_id)
    ++    if server_discovery:
    ++        kwargs.update(oauth_issuer=issuer)
    ++    else:
    ++        kwargs.update(oauth_issuer=discovery_uri)
    ++
    ++    sock, client = accept(**kwargs)
    ++
    ++    if server_discovery:
    ++        with sock:
    ++            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++                initial = start_oauth_handshake(conn)
    ++
    ++                # For discovery, the client should send an empty auth header.
    ++                # See RFC 7628, Sec. 4.3.
    ++                auth = get_auth_value(initial)
    ++                assert auth == b""
    ++
    ++                # Always fail the discovery exchange.
    ++                resp = {
    ++                    "status": "invalid_token",
    ++                    "openid-configuration": discovery_uri,
    ++                }
    ++                pq3.send(
    ++                    conn,
    ++                    pq3.types.AuthnRequest,
    ++                    type=pq3.authn.SASLContinue,
    ++                    body=json.dumps(resp).encode("utf-8"),
    ++                )
    ++
    ++                # FIXME: the client disconnects at this point; it'd be nicer if
    ++                # it completed the exchange.
    ++
    ++            # The client should not reconnect.
    ++
    ++    else:
    ++        expect_disconnected_handshake(sock)
    ++
    ++    if expected_error is None:
    ++        if server_discovery:
    ++            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
    ++        else:
    ++            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
    ++
    ++    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    ++        client.check_completed()
    ++
    ++
     +def expect_disconnected_handshake(sock):
     +    """
     +    Helper for any tests that expect the client to disconnect immediately after
    @@ src/test/python/client/test_oauth.py (new)
     +            )
     +
     +            # The client should disconnect at this point.
    -+            assert not conn.read()
    ++            assert not conn.read(1), "client sent unexpected data"
     +
     +
    -+def test_oauth_requires_client_id(accept, openid_provider):
    -+    sock, client = accept(
    ++@pytest.mark.parametrize(
    ++    "missing",
    ++    [
    ++        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
    ++        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
    ++        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
    ++    ],
    ++)
    ++def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
    ++    params = dict(
     +        oauth_issuer=openid_provider.issuer,
    -+        # Do not set a client ID; this should cause a client error after the
    -+        # server asks for OAUTHBEARER and the client tries to contact the
    -+        # issuer.
    ++        oauth_client_id="some-id",
     +    )
     +
    ++    # Remove required parameters. This should cause a client error after the
    ++    # server asks for OAUTHBEARER and the client tries to contact the issuer.
    ++    for k in missing:
    ++        del params[k]
    ++
    ++    sock, client = accept(**params)
     +    expect_disconnected_handshake(sock)
     +
    -+    expected_error = "no oauth_client_id is set"
    ++    expected_error = "oauth_issuer and oauth_client_id are not both set"
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
     +
    @@ src/test/python/client/test_oauth.py (new)
     +@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
     +def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=client_id,
     +        oauth_client_secret=secret,
     +        oauth_scope=scope,
    @@ src/test/python/client/test_oauth.py (new)
     +    accept, openid_provider, omit_interval, retries, error_code
     +):
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id="some-id",
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    access_token = secrets.token_urlsafe()
     +
     +    sock, client = accept(
    -+        oauth_issuer=issuer,
    ++        oauth_issuer=discovery_uri,
     +        oauth_client_id="some-id",
     +        oauth_scope=scope,
     +        async_=asynchronous,
    @@ src/test/python/client/test_oauth.py (new)
     +    client_id = secrets.token_hex()
     +
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=client_id,
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        pytest.skip("not interesting: correct type")
     +
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=secrets.token_hex(),
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    client_id = secrets.token_hex()
     +
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=client_id,
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        pytest.skip("not interesting: correct type")
     +
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=secrets.token_hex(),
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    ],
     +)
     +def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
    -+    sock, client = accept(oauth_client_id=secrets.token_hex())
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.issuer,
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
     +
     +    device_code = secrets.token_hex()
     +    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
    @@ src/test/python/client/test_oauth.py (new)
     +    ],
     +)
     +def test_oauth_discovery_server_error(accept, response, expected_error):
    -+    sock, client = accept(oauth_client_id=secrets.token_hex())
    ++    sock, client = accept(
    ++        oauth_issuer="https://example.com",
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
     +
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    @@ src/test/python/client/test_oauth.py (new)
     +        client.check_completed()
     +
     +
     ++# All of these tests are expected to fail before libpq actually attempts a
     ++# connection to any endpoint. To avoid hitting the network in the event that a
     ++# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
     +@pytest.mark.parametrize(
     +    "bad_response,expected_error",
     +    [
    @@ src/test/python/client/test_oauth.py (new)
     +                200,
     +                {
     +                    "grant_types_supported": ["something"],
    -+                    "token_endpoint": "https://example.com/",
    ++                    "token_endpoint": "https://256.256.256.256/",
     +                    "issuer": 123,
     +                },
     +            ),
    @@ src/test/python/client/test_oauth.py (new)
     +            id="non-string issuer after other ignored fields",
     +        ),
     +        pytest.param(
    -+            (200, {"token_endpoint": "https://example.com/"}),
    ++            (200, {"token_endpoint": "https://256.256.256.256/"}),
     +            r'failed to parse OpenID discovery document: field "issuer" is missing',
     +            id="missing issuer",
     +        ),
     +        pytest.param(
    -+            (200, {"issuer": "https://example.com/"}),
    ++            (200, {"issuer": "{issuer}"}),
     +            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
     +            id="missing token endpoint",
     +        ),
    @@ src/test/python/client/test_oauth.py (new)
     +            (
     +                200,
     +                {
    -+                    "issuer": "https://example.com",
    -+                    "token_endpoint": "https://example.com/token",
    -+                    "device_authorization_endpoint": "https://example.com/dev",
    ++                    "issuer": "{issuer}",
    ++                    "token_endpoint": "https://256.256.256.256/token",
    ++                    "device_authorization_endpoint": "https://256.256.256.256/dev",
     +                },
     +            ),
    -+            r'cannot run OAuth device authorization: issuer "https://example.com" does not support device code grants',
    ++            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
     +            id="missing device code grants",
     +        ),
     +        pytest.param(
     +            (
     +                200,
     +                {
    -+                    "issuer": "https://example.com",
    -+                    "token_endpoint": "https://example.com/token",
    ++                    "issuer": "{issuer}",
    ++                    "token_endpoint": "https://256.256.256.256/token",
     +                    "grant_types_supported": [
     +                        "urn:ietf:params:oauth:grant-type:device_code"
     +                    ],
     +                },
     +            ),
    -+            r'cannot run OAuth device authorization: issuer "https://example.com" does not provide a device authorization endpoint',
    ++            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
     +            id="missing device_authorization_endpoint",
     +        ),
     +        pytest.param(
     +            (
     +                200,
     +                {
    -+                    "issuer": "https://example.com",
    -+                    "token_endpoint": "https://example.com/token",
    ++                    "issuer": "{issuer}",
    ++                    "token_endpoint": "https://256.256.256.256/token",
     +                    "grant_types_supported": [
     +                        "urn:ietf:params:oauth:grant-type:device_code"
     +                    ],
    -+                    "device_authorization_endpoint": "https://example.com/dev",
    ++                    "device_authorization_endpoint": "https://256.256.256.256/dev",
     +                    "filler": "x" * 1024 * 1024,
     +                },
     +            ),
     +            r"failed to fetch OpenID discovery document: response is too large",
     +            id="gigantic discovery response",
     +        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                {
    ++                    "issuer": "{issuer}/path",
    ++                    "token_endpoint": "https://256.256.256.256/token",
    ++                    "grant_types_supported": [
    ++                        "urn:ietf:params:oauth:grant-type:device_code"
    ++                    ],
    ++                    "device_authorization_endpoint": "https://256.256.256.256/dev",
    ++                },
    ++            ),
    ++            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
    ++            id="mismatched issuer identifier",
    ++        ),
     +        #
     +        # Exercise HTTP-level failures by breaking the protocol. Note that the
     +        # error messages here are implementation-dependent.
    @@ src/test/python/client/test_oauth.py (new)
     +    accept, openid_provider, bad_response, expected_error
     +):
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=secrets.token_hex(),
     +    )
     +
     +    def failing_discovery_handler(headers, params):
    ++        try:
    ++            # Insert the correct issuer value if the test wants to.
    ++            resp = bad_response[1]
    ++            iss = resp["issuer"]
    ++            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
    ++        except (AttributeError, KeyError, TypeError):
    ++            pass
    ++
     +        return bad_response
     +
     +    openid_provider.register_endpoint(
    @@ src/test/python/client/test_oauth.py (new)
     +    ],
     +)
     +def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
    -+    sock, client = accept()
    ++    sock, client = accept(
    ++        oauth_issuer="https://example.com",
    ++        oauth_client_id="some-id",
    ++    )
     +
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    @@ src/test/python/client/test_oauth.py (new)
     +    int_max = ctypes.c_uint(-1).value // 2
     +
     +    sock, client = accept(
    -+        oauth_issuer=openid_provider.issuer,
    ++        oauth_issuer=openid_provider.discovery_uri,
     +        oauth_client_id=secrets.token_hex(),
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    HTTP must be refused without PGOAUTHDEBUG.
     +    """
     +    monkeypatch.delenv("PGOAUTHDEBUG")
    ++
    ++    def to_http(uri):
    ++        """Swaps out a URI's scheme for http."""
    ++        parts = urllib.parse.urlparse(uri)
    ++        parts = parts._replace(scheme="http")
    ++        return urllib.parse.urlunparse(parts)
    ++
     +    sock, client = accept(
    ++        oauth_issuer=to_http(openid_provider.issuer),
     +        oauth_client_id=secrets.token_hex(),
     +    )
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            initial = start_oauth_handshake(conn)
     +
    -+            # Fail the SASL exchange and link to the HTTP provider.
     +            resp = {
     +                "status": "invalid_token",
    -+                "openid-configuration": openid_provider.discovery_uri,
    ++                "openid-configuration": to_http(openid_provider.discovery_uri),
     +            }
    ++            pq3.send(
    ++                conn,
    ++                pq3.types.AuthnRequest,
    ++                type=pq3.authn.SASLContinue,
    ++                body=json.dumps(resp).encode("utf-8"),
    ++            )
     +
    -+            fail_oauth_handshake(conn, resp)
    -+
    -+    # FIXME: We'll get a second connection, but it won't do anything.
    -+    sock, _ = accept()
    -+    expect_disconnected_handshake(sock)
    ++            # FIXME: the client disconnects at this point; it'd be nicer if
    ++            # it completed the exchange.
     +
    -+    expected_error = "failed to fetch OpenID discovery document: Unsupported protocol"
    ++    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
     
    @@ src/test/python/meson.build (new)
     +      './test_pq3.py',
     +    ],
     +    'env': pytest_env,
    ++    'test_kwargs': {'priority': 50}, # python tests are slow, start early
     +  },
     +}
     
    @@ src/test/python/server/conftest.py (new)
     +                        f"-c port={port}",
     +                        "-c listen_addresses=localhost",
     +                        "-c log_connections=on",
    -+                        "-c shared_preload_libraries=oauthtest",
    -+                        "-c oauth_validator_library=oauthtest",
    ++                        "-c session_preload_libraries=oauthtest",
    ++                        "-c oauth_validator_libraries=oauthtest",
     +                    ]
     +                ),
     +                "start",
    @@ src/tools/make_venv (new)
     +run(pip, 'install', 'pytest', 'pytest-tap')
     +if args.requirements:
     +    run(pip, 'install', '-r', args.requirements)
    +
    + ## src/tools/testwrap ##
    +@@ src/tools/testwrap: parser.add_argument('--testgroup', help='test group', type=str)
    + parser.add_argument('--testname', help='test name', type=str)
    + parser.add_argument('--skip', help='skip test (with reason)', type=str)
    + parser.add_argument('--pg-test-extra', help='extra tests', type=str)
    ++parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
    + parser.add_argument('test_command', nargs='*')
    + 
    + args = parser.parse_args()
    +@@ src/tools/testwrap: if args.skip is not None:
    +     print('1..0 # Skipped: ' + args.skip)
    +     sys.exit(0)
    + 
    ++if args.skip_without_extra is not None:
    ++    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
    ++    if extras is None or args.skip_without_extra not in extras.split():
    ++        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
    ++        sys.exit(0)
    ++
    + if os.path.exists(testdir) and os.path.isdir(testdir):
    +     shutil.rmtree(testdir)
    + os.makedirs(testdir)
Attachment: v37-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 16f1b8fc02f84f61ae4e2d0c8a5c5d39b97f05c5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v37 1/2] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-builtin-oauth/-Dbuiltin_oauth during configuration.
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
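
The delegation rule above amounts to a chain-of-responsibility pattern.
Sketched here in Python for brevity — the real hook is a C callback
installed with PQsetAuthDataHook(), and all names below (the type
constants, the data fields) are illustrative stand-ins, not libpq API:

```python
# Hypothetical sketch of the hook-chain contract described above.
# In libpq this is a C callback; names and fields here are made up.

PROMPT_OAUTH_DEVICE = "prompt-oauth-device"
OAUTH_BEARER_TOKEN = "oauth-bearer-token"


def default_hook(authdata_type, data):
    # Stand-in for libpq's builtin behavior (e.g. printing the device
    # prompt to standard error). Returns > 0 to signal success.
    return 1


_previous_hook = default_hook  # what PQgetAuthDataHook() would return


def my_hook(authdata_type, data):
    if authdata_type == PROMPT_OAUTH_DEVICE:
        # Handle this type ourselves: display the URL and user code
        # however we prefer, then report success with a value > 0.
        print(f"Visit {data['verification_uri']} and enter {data['user_code']}")
        return 1
    # Not a type we handle; delegate to the previous hook in the chain.
    # (Returning a value < 0 instead would abandon the connection.)
    return _previous_hook(authdata_type, data)
```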

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the module indicates failure via
   the authorized member of the ValidatorModuleResult struct; further
   authentication/authorization is pointless if the bearer token wasn't
   issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To optionally authorize the user, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)
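
As a concrete illustration of the decision logic in 2(a) and 2(b), a
validator might behave like the following sketch. This is Python
pseudocode under loose assumptions — real validators are C modules, the
token is assumed to be already verified, and the claim names
("sub", "postgres_roles", "allow_superuser_access") are hypothetical:

```python
def validate(claims, requested_role, trust_validator_authz):
    """Toy decision logic for a token validator.

    `claims` stands in for whatever trusted data the module extracted
    from an already-validated bearer token; the claim names are made up.
    Returns (authorized, authn_id).
    """
    authn_id = claims.get("sub")  # trusted identifier for the end user

    if trust_validator_authz:
        # Authorization is fully delegated to the validator (2b).
        if claims.get("allow_superuser_access"):
            return True, authn_id  # token authorizes any role
        allowed = claims.get("postgres_roles", [])
        return requested_role in allowed, authn_id

    # Authentication-only mode (2a): report the identity and let
    # Postgres apply the usual pg_ident user mapping.
    return True, authn_id
```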

= OAuth HBA Method =

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
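
Put together, entries using this method might look like the following
(hypothetical issuer, scope, and map values; the columns follow the
usual pg_hba.conf layout):

```
# Authentication via standard user mapping:
host  all  all  0.0.0.0/0  oauth  issuer="https://accounts.example.com" scope="openid email" map=oauthmap

# Pseudonymous authorization, fully delegated to the validator:
host  all  all  0.0.0.0/0  oauth  issuer="https://accounts.example.com" scope="openid email" trust_validator_authz=1
```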

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the validator module, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   17 +-
 config/programs.m4                            |   19 +
 configure                                     |  153 ++
 configure.ac                                  |   36 +
 doc/src/sgml/client-auth.sgml                 |  177 ++
 doc/src/sgml/config.sgml                      |   21 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   29 +
 doc/src/sgml/libpq.sgml                       |  134 +
 doc/src/sgml/oauth-validators.sgml            |  140 +
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   34 +
 meson_options.txt                             |    4 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  812 ++++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2440 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          |  900 ++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   43 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  103 +-
 src/interfaces/libpq/fe-auth.h                |    9 +-
 src/interfaces/libpq/fe-connect.c             |   89 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   88 +
 src/interfaces/libpq/libpq-int.h              |   16 +
 src/interfaces/libpq/meson.build              |    5 +
 src/interfaces/libpq/pqexpbuffer.c            |    2 +-
 src/interfaces/libpq/pqexpbuffer.h            |    6 +
 src/makefiles/meson.build                     |    2 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 .../modules/oauth_validator/fail_validator.c  |   44 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  157 ++
 .../modules/oauth_validator/t/001_server.pl   |  428 +++
 .../modules/oauth_validator/t/002_client.pl   |  114 +
 .../modules/oauth_validator/t/oauth_server.py |  370 +++
 src/test/modules/oauth_validator/validator.c  |  100 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |   65 +
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   15 +
 60 files changed, 6929 insertions(+), 59 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fc413eb11e..26e747d559 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         --buildtype=debug \
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
+        -Dbuiltin_oauth=curl \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -223,6 +224,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-builtin-oauth=curl
   --with-pam
   --with-perl
   --with-python
@@ -235,6 +237,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
   -Dllvm=enabled
+  -Dbuiltin_oauth=curl
   -Duuid=e2fs
 
 
@@ -312,8 +315,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -687,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..c58ca50ece 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,7 +142,26 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
+# explicitly set TLS 1.3 ciphersuites).
 
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
+[#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif], [])],
+[pgac_cv_check_libcurl=yes],
+[pgac_cv_check_libcurl=no])])
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    AC_MSG_ERROR([
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required.])
+fi])
 
 # PGAC_CHECK_READLINE
 # -------------------
diff --git a/configure b/configure
index 28719ed30c..ee6291013f 100755
--- a/configure
+++ b/configure
@@ -712,6 +712,7 @@ with_uuid
 with_readline
 with_systemd
 with_selinux
+with_builtin_oauth
 with_ldap
 with_krb_srvnam
 krb_srvtab
@@ -857,6 +858,7 @@ with_krb_srvnam
 with_pam
 with_bsd_auth
 with_ldap
+with_builtin_oauth
 with_bonjour
 with_selinux
 with_systemd
@@ -1566,6 +1568,8 @@ Optional Packages:
   --with-pam              build with PAM support
   --with-bsd-auth         build with BSD Authentication support
   --with-ldap             build with LDAP support
+  --with-builtin-oauth=LIB
+                          use LIB for built-in OAuth 2.0 client flows (curl)
   --with-bonjour          build with Bonjour support
   --with-selinux          build with SELinux support
   --with-systemd          build with systemd support
@@ -8510,6 +8514,57 @@ $as_echo "$with_ldap" >&6; }
 
 
 
+#
+# OAuth 2.0
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with built-in OAuth client support" >&5
+$as_echo_n "checking whether to build with built-in OAuth client support... " >&6; }
+
+
+
+# Check whether --with-builtin-oauth was given.
+if test "${with_builtin_oauth+set}" = set; then :
+  withval=$with_builtin_oauth;
+  case $withval in
+    yes)
+      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
+      ;;
+    no)
+      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
+      ;;
+    *)
+
+      ;;
+  esac
+
+fi
+
+
+if test x"$with_builtin_oauth" = x"" ; then
+  with_builtin_oauth=no
+fi
+
+if test x"$with_builtin_oauth" = x"curl"; then
+
+$as_echo "#define USE_BUILTIN_OAUTH 1" >>confdefs.h
+
+
+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests requires --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests requires --with-python to run" >&2;}
+  fi
+elif test x"$with_builtin_oauth" != x"no"; then
+  as_fn_error $? "--with-builtin-oauth must specify curl" "$LINENO" 5
+fi
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_builtin_oauth" >&5
+$as_echo "$with_builtin_oauth" >&6; }
+
+
+
 #
 # Bonjour
 #
@@ -12207,6 +12262,93 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_builtin_oauth" = curl ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-builtin-oauth=curl" "$LINENO" 5
+fi
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
+$as_echo_n "checking for compatible libcurl... " >&6; }
+if ${pgac_cv_check_libcurl+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+#include <curl/curlver.h>
+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
+choke me
+#endif
+int
+main ()
+{
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+  pgac_cv_check_libcurl=yes
+else
+  pgac_cv_check_libcurl=no
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
+$as_echo "$pgac_cv_check_libcurl" >&6; }
+
+if test "$pgac_cv_check_libcurl" != yes; then
+    as_fn_error $? "
+*** The installed version of libcurl is too old to use with PostgreSQL.
+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
+fi
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -14049,6 +14191,17 @@ fi
 
 done
 
+fi
+
+if test "$with_builtin_oauth" = curl; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-builtin-oauth=curl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 533f4ab78a..162250d3dd 100644
--- a/configure.ac
+++ b/configure.ac
@@ -925,6 +925,30 @@ AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
 
+#
+# OAuth 2.0
+#
+AC_MSG_CHECKING([whether to build with built-in OAuth client support])
+PGAC_ARG_REQ(with, builtin-oauth, [LIB], [use LIB for built-in OAuth 2.0 client flows (curl)])
+if test x"$with_builtin_oauth" = x"" ; then
+  with_builtin_oauth=no
+fi
+
+if test x"$with_builtin_oauth" = x"curl"; then
+  AC_DEFINE([USE_BUILTIN_OAUTH], 1, [Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth)])
+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth client flows.])
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+elif test x"$with_builtin_oauth" != x"no"; then
+  AC_MSG_ERROR([--with-builtin-oauth must specify curl])
+fi
+
+AC_MSG_RESULT([$with_builtin_oauth])
+AC_SUBST(with_builtin_oauth)
+
+
 #
 # Bonjour
 #
@@ -1294,6 +1318,14 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_builtin_oauth" = curl ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-builtin-oauth=curl])])
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1590,6 +1622,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_builtin_oauth" = curl; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-builtin-oauth=curl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
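For reference, the autoconf and meson additions above would typically be exercised with invocations along these lines (a sketch; build directory names are arbitrary):

```shell
# Autoconf: enable the curl-based OAuth client flows.
# Requires libcurl >= 7.61.0 and, for the test suite, --with-python.
./configure --with-builtin-oauth=curl --with-python

# Meson: builtin_oauth is a combo option defaulting to 'auto';
# force it on with 'curl' or off with 'none'.
meson setup build -Dbuiltin_oauth=curl
```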
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 51343de7ca..a567495352 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -647,6 +647,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1134,6 +1144,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2312,6 +2328,167 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+    The OAuth framework defines the following roles:
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients when connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after the authenticated resource
+       owner has given approval.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources which are
+       accessed by the client. The <productname>PostgreSQL</productname> cluster
+       being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The issuer identifier of the authorization server, as defined by its
+        discovery document, or a well-known URI pointing to that discovery
+        document. This parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a discovery document URI
+        will be constructed using the issuer identifier. By default, the URI
+        uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, the URI will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider user names and
+        database user names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped, and
+        the OAuth validator takes full responsibility for mapping end user
+        identities to database roles.  If the validator authorizes the token,
+        the server trusts that the user is allowed to connect under the
+        requested role, and the connection is allowed to proceed regardless of
+        the authentication status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>trust_validator_authz</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
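To make the HBA options above concrete, an oauth entry in pg_hba.conf might look like this (a sketch; the issuer, scope, and validator values are placeholders, and validator can be omitted when oauth_validator_libraries lists only one library):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth issuer="https://auth.example.org" scope="openid postgres" validator="my_validator"
```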
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a84e60c09b..7a37b96cea 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1201,6 +1201,27 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. For more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 5621606f59..c4e06c53f6 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1072,6 +1072,20 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-builtin-oauth">
+       <term><option>--with-builtin-oauth=<replaceable>LIBRARY</replaceable></option></term>
+       <listitem>
+        <para>
+         Build with support for OAuth 2.0 client flows.  The only
+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-systemd">
        <term><option>--with-systemd</option></term>
        <listitem>
@@ -2516,6 +2530,21 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-builtin-oauth">
+      <term><option>-Dbuiltin_oauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with support for OAuth 2.0 client flows.  The only
+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-systemd-meson">
       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index bfefb1289e..0a27bfa3d4 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2336,6 +2336,96 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, libpq will ask the
+        server for the URI of a <emphasis>discovery document</emphasis>: a
+        resource providing a set of OAuth configuration parameters. The server
+        must provide a URI
+        that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        This standard handshake requires two separate network connections to the
+        server per authentication attempt. To skip asking the server for a
+        discovery document URI, you may set <literal>oauth_issuer</literal> to a
+        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
+        case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) libpq currently supports the following well-known endpoints:
+        <itemizedlist>
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9963,6 +10053,50 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       TODO
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..83ea576445
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,140 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+ </para>
+ <para>
+  OAuth validation modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server will call as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
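Putting the chapter's pieces together, a minimal validator module might be sketched as below. This is scaffolding, not a working validator: it compiles only against the server headers introduced by this patch, and the hard-coded token and identity are placeholders standing in for real token introspection.

```c
/* minimal_validator.c — sketch of an OAuth validator module.
 * Built as a shared library and listed in oauth_validator_libraries.
 */
#include "postgres.h"

#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static void validator_startup(ValidatorModuleState *state);
static void validator_shutdown(ValidatorModuleState *state);
static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
											 const char *token,
											 const char *role);

static const OAuthValidatorCallbacks validator_callbacks = {
	.startup_cb = validator_startup,
	.shutdown_cb = validator_shutdown,
	.validate_cb = validate_token,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	/* Must have server lifetime; a static const satisfies that. */
	return &validator_callbacks;
}

static void
validator_startup(ValidatorModuleState *state)
{
	state->private_data = NULL; /* no module state in this sketch */
}

static void
validator_shutdown(ValidatorModuleState *state)
{
	/* nothing to free */
}

static ValidatorModuleResult *
validate_token(ValidatorModuleState *state, const char *token, const char *role)
{
	ValidatorModuleResult *res = palloc0(sizeof(ValidatorModuleResult));

	/*
	 * A real module would introspect the token (or verify its signature)
	 * here.  This placeholder accepts one hard-coded token and maps it to a
	 * fixed identity; the server then applies the usual usermap checks
	 * against the requested role.
	 */
	if (strcmp(token, "secret-test-token") == 0)
	{
		res->authorized = true;
		res->authn_id = pstrdup("test-user@example.org");
	}

	return res;
}
```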
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index b64d253fe4..f912029bb5 100644
--- a/meson.build
+++ b/meson.build
@@ -916,6 +916,35 @@ endif
 
 
 
+###############################################################
+# Library: OAuth (libcurl)
+###############################################################
+
+libcurl = not_found_dep
+oauth_library = 'none'
+oauthopt = get_option('builtin_oauth')
+
+if oauthopt == 'auto' and auto_features.disabled()
+  oauthopt = 'none'
+endif
+
+if oauthopt in ['auto', 'curl']
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0',
+                       required: (oauthopt == 'curl'))
+  if libcurl.found()
+    oauth_library = 'curl'
+    cdata.set('USE_BUILTIN_OAUTH', 1)
+    cdata.set('USE_OAUTH_CURL', 1)
+  endif
+endif
+
+if oauthopt == 'auto' and auto_features.enabled() and not libcurl.found()
+  error('no OAuth implementation library found')
+endif
+
+
 ###############################################################
 # Library: Tcl (for pltcl)
 #
@@ -3034,6 +3063,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3702,6 +3735,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index 3893519639..453429ab86 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -79,6 +79,10 @@ option('bonjour', type: 'feature', value: 'auto',
 option('bsd_auth', type: 'feature', value: 'auto',
   description: 'BSD Authentication support')
 
+option('builtin_oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
+  value: 'auto',
+  description: 'use LIB for built-in OAuth 2.0 client flows (curl)')
+
 option('docs', type: 'feature', value: 'auto',
   description: 'Documentation in HTML and man page format')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 0f38d712d1..068c841f32 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -193,6 +193,7 @@ with_ldap	= @with_ldap@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_builtin_oauth = @with_builtin_oauth@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..eea5032de8
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,812 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	oauth_get_mechanisms,
+	oauth_init,
+	oauth_exchange,
+
+	PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (!strcmp(key, AUTH_KEY))
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that the token is correctly formatted before validating it. */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	if (!ret->authorized)
+	{
+		status = false;
+		goto cleanup;
+	}
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Free the validation result from the validator module once we're done
+	 * with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+	ListCell   *l;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach(l, elemlist)
+	{
+		char	   *allowed = lfirst(l);
+
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
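
For quick reference, the token-shape rules enforced by validate_token_format() above (RFC 6750 Sec. 2.1) can be exercised outside the backend. The following standalone sketch mirrors those rules; check_bearer() is a name invented for this illustration and is not part of the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <strings.h>			/* strncasecmp */

/*
 * Mirror of the backend's b64token check: the "Bearer" scheme is matched
 * case-insensitively, extra spaces after it are swallowed, and the token
 * itself may contain only the b64token alphabet plus trailing '='
 * characters.  Returns the token on success, NULL on a malformed
 * credential.
 */
static const char *
check_bearer(const char *auth)
{
	static const char allowed[] =
		"abcdefghijklmnopqrstuvwxyz"
		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
		"0123456789-._~+/";
	const char *token;
	size_t		span;

	if (strncasecmp(auth, "Bearer ", strlen("Bearer ")) != 0)
		return NULL;			/* non-Bearer scheme */

	token = auth + strlen("Bearer ");
	while (*token == ' ')		/* "Bearer" 1*SP: allow extra spaces */
		token++;
	if (*token == '\0')
		return NULL;			/* empty token */

	span = strspn(token, allowed);
	while (token[span] == '=')	/* *"=" is legal only at the end */
		span++;

	return (token[span] == '\0') ? token : NULL;
}
```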
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 2fd96a7129..180f09d26f 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
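
For illustration, an HBA line exercising the new method and its options might look like the following (the issuer, scope, validator name, and map are all hypothetical values):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samehost  oauth   issuer="https://oauth.example.org" scope="openid" validator="my_validator" map=oauthmap
```

Note that map and trust_validator_authz are mutually exclusive, as enforced in parse_hba_line() above.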
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8a67f01200..1295a94ec3 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4794,6 +4795,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 39a3ac2312..609f057fe7 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -120,6 +120,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxTokenSize registry
+ * setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..fb333a1578 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..4fcdda7430
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index a903c60a3a..9fe47091d2 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -678,6 +681,9 @@
 /* Define to 1 to build with BSD Authentication support. (--with-bsd-auth) */
 #undef USE_BSD_AUTH
 
+/* Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth) */
+#undef USE_BUILTIN_OAUTH
+
 /* Define to build with ICU support. (--with-icu) */
 #undef USE_ICU
 
@@ -706,6 +712,9 @@
 /* Define to select named POSIX semaphores. */
 #undef USE_NAMED_POSIX_SEMAPHORES
 
+/* Define to 1 to use libcurl for OAuth client flows. */
+#undef USE_OAUTH_CURL
+
 /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
 #undef USE_OPENSSL
 
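Going by the comments added above, the autoconf build would be selected with something like:

```
./configure --with-builtin-oauth=curl
```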
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..6502059d16 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_builtin_oauth),curl)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..e8804ce6d1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2440 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	};
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+	JsonParseErrorType result = JSON_SUCCESS;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		result = JSON_SEM_ACTION_FAILED;
+		goto cleanup;
+	}
+
+	if (ctx->active)
+	{
+		JsonTokenType expected;
+
+		/*
+		 * Make sure this matches what the active field expects. Arrays must
+		 * contain only strings with the current implementation.
+		 */
+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
+			expected = JSON_TOKEN_STRING;
+		else
+			expected = ctx->active->type;
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			result = JSON_SEM_ACTION_FAILED;
+			goto cleanup;
+		}
+
+		/*
+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
+		 * Error out in that case instead.
+		 */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*ctx->active->scalar = token;
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token */
+		}
+		else					/* ctx->active->array */
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			temp = curl_slist_append(*ctx->active->array, token);
+			if (!temp)
+			{
+				oauth_parse_set_error(ctx, "out of memory");
+				result = JSON_SEM_ACTION_FAILED;
+				goto cleanup;
+			}
+
+			*ctx->active->array = temp;
+
+			/*
+			 * Note that curl_slist_append() makes a copy of the token, so we
+			 * can free it below.
+			 */
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+cleanup:
+	free(token);
+	return result;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length-limited comparison rather than comparing
+	 * the whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required && !*fields->scalar && !*fields->array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "neither epoll nor kqueue is available on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
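libcurl's socket callback doesn't tell us whether a given socket is already in our set, so the EPOLL_CTL_ADD / EEXIST / EPOLL_CTL_MOD dance above is the standard workaround. A minimal, Linux-only sketch of that fallback (add_or_modify() and demo() are hypothetical helpers, not part of the patch; a pipe stands in for libcurl's socket):

```c
#include <assert.h>
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/*
 * Try EPOLL_CTL_ADD first; if the kernel says the descriptor is already
 * registered (EEXIST), downgrade to EPOLL_CTL_MOD, as register_socket()
 * does. Returns 0 on success, -1 on failure.
 */
static int
add_or_modify(int mux, int fd, struct epoll_event *ev)
{
	if (epoll_ctl(mux, EPOLL_CTL_ADD, fd, ev) == 0)
		return 0;

	if (errno == EEXIST)
		return epoll_ctl(mux, EPOLL_CTL_MOD, fd, ev);

	return -1;
}

/* Exercise the fallback: the second registration hits the EEXIST path. */
static int
demo(void)
{
	struct epoll_event ev = {.events = EPOLLIN};
	int			fds[2];
	int			mux;

	if (pipe(fds) < 0 || (mux = epoll_create1(EPOLL_CLOEXEC)) < 0)
		return -1;

	if (add_or_modify(mux, fds[0], &ev) < 0)	/* plain ADD */
		return -1;
	if (add_or_modify(mux, fds[0], &ev) < 0)	/* EEXIST -> MOD */
		return -1;

	close(mux);
	close(fds[0]);
	close(fds[1]);
	return 0;
}
```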
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
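Since libcurl may hand the debug callback several headers in one call, the loop above emits one output line per newline plus one for any unterminated tail. The chunking logic, isolated for clarity (count_debug_lines is a hypothetical helper that just counts the fprintf calls the real loop would make):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Walk a (not necessarily NUL-terminated) buffer the way debug_callback()
 * does, counting one output line per newline found, plus one for a
 * trailing fragment with no newline.
 */
static int
count_debug_lines(const char *data, size_t size)
{
	const char *const end = data + size;
	int			nlines = 0;

	while (data < end)
	{
		size_t		len = end - data;
		char	   *eol = memchr(data, '\n', len);

		if (eol)
			len = eol - data + 1;

		nlines++;
		data += len;
	}

	return nlines;
}
```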
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect when CURLOPT_VERBOSE is also set, so
+		 * keep these two options together.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
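The abort convention above is worth spelling out: a CURLOPT_WRITEFUNCTION callback must return exactly size * nmemb, and any other return value makes libcurl fail the transfer (CURLE_WRITE_ERROR). A toy version of the size accounting (checked_len and MAX_RESPONSE are illustrative stand-ins for append_data() and MAX_OAUTH_RESPONSE_SIZE):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_RESPONSE 256		/* stand-in for MAX_OAUTH_RESPONSE_SIZE */

/*
 * Compute the length a write callback was handed, and decide whether to
 * accept it. Returning 0 instead of len signals libcurl to abort.
 */
static size_t
checked_len(size_t size, size_t nmemb, size_t already_buffered)
{
	size_t		len = size * nmemb;

	if (already_buffered + len > MAX_RESPONSE)
		return 0;				/* abort the transfer */

	return len;					/* accept this chunk */
}
```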
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		markPQExpBufferBroken(buf);
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
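The search-and-replace above is the whole difference between percent-encoding and form-encoding of spaces. A self-contained sketch of just that step, operating on an already-escaped string (plus_for_spaces is a hypothetical helper; the caller must size `out` for the result, which is never longer than the input):

```c
#include <assert.h>
#include <string.h>

/*
 * Rewrite each "%20" produced by curl_easy_escape() as '+', the
 * application/x-www-form-urlencoded spelling for spaces, mirroring the
 * loop in append_urlencoded().
 */
static void
plus_for_spaces(const char *escaped, char *out)
{
	const char *haystack = escaped;
	const char *match;

	out[0] = '\0';

	while ((match = strstr(haystack, "%20")) != NULL)
	{
		/* Copy the unmatched portion, followed by the plus sign. */
		strncat(out, haystack, match - haystack);
		strcat(out, "+");

		/* Keep searching after the match. */
		haystack = match + 3;	/* strlen("%20") */
	}

	/* Copy the remainder of the string. */
	strcat(out, haystack);
}
```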
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
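For readers wondering what CURLAUTH_BASIC actually puts on the wire: per RFC 7617, the header is `Authorization: Basic` followed by base64(client_id ":" client_secret), computed over the urlencoded credentials. libcurl does this internally; the sketch below only illustrates the encoding step (b64_encode is a hypothetical helper, not part of the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Minimal base64 encoder, enough to show how the Basic credentials are
 * formed. 'out' must have room for 4 * ((len + 2) / 3) + 1 bytes.
 */
static void
b64_encode(const unsigned char *in, size_t len, char *out)
{
	static const char tbl[] =
	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
	size_t		i;

	for (i = 0; i + 2 < len; i += 3)
	{
		*out++ = tbl[in[i] >> 2];
		*out++ = tbl[((in[i] & 0x3) << 4) | (in[i + 1] >> 4)];
		*out++ = tbl[((in[i + 1] & 0xF) << 2) | (in[i + 2] >> 6)];
		*out++ = tbl[in[i + 2] & 0x3F];
	}

	if (len - i == 1)
	{
		*out++ = tbl[in[i] >> 2];
		*out++ = tbl[(in[i] & 0x3) << 4];
		*out++ = '=';
		*out++ = '=';
	}
	else if (len - i == 2)
	{
		*out++ = tbl[in[i] >> 2];
		*out++ = tbl[((in[i] & 0x3) << 4) | (in[i + 1] >> 4)];
		*out++ = tbl[(in[i + 1] & 0xF) << 2];
		*out++ = '=';
	}

	*out = '\0';
}
```

The "user:pass" input below is the canonical RFC 7617 example rather than a real client_id/secret pair.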
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations returning 403 on
+	 * error, which would violate the specification. For now we stick to
+	 * the specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
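The polling rules handled above can be sketched as a standalone function (separate from the patch; `apply_token_error` is an illustrative name, and for simplicity this version checks for overflow before the addition rather than after):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <string.h>

/*
 * Apply an RFC 8628 Sec. 3.5 token-endpoint error to the polling
 * interval. authorization_pending keeps the current interval,
 * slow_down permanently adds five seconds, and anything else (or an
 * interval overflow) is a permanent failure.
 */
static bool
apply_token_error(int *interval, const char *error)
{
	if (strcmp(error, "authorization_pending") == 0)
		return true;			/* keep polling at the current interval */

	if (strcmp(error, "slow_down") == 0)
	{
		if (*interval > INT_MAX - 5)
			return false;		/* interval overflow: give up */
		*interval += 5;
		return true;
	}

	return false;				/* any other error is permanent */
}
```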
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* fall through */
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..0d4185194d
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,900 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the token pointer will be ignored and the initial
+ * response will instead contain a request for the server's required OAuth
+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* We must have a token. */
+		if (!token)
+		{
+			/*
+			 * Either programmer error, or something went badly wrong during
+			 * the asynchronous fetch.
+			 *
+			 * TODO: users shouldn't see this; what action should they take if
+			 * they do?
+			 */
+			libpq_append_conn_error(conn,
+									"no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
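Outside the patch, the wire format produced here can be illustrated with a minimal sketch (`build_initial_response` is a hypothetical helper): a GS2 header, the auth key/value pair with the Bearer scheme, and two terminating 0x01 separators, per RFC 7628, Sec. 3.1:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define KVSEP "\x01"

/*
 * Build a minimal OAUTHBEARER client initial response (RFC 7628,
 * Sec. 3.1): GS2 header "n,,", then auth=Bearer <token>, then the
 * double key/value separator that ends the message.
 */
static int
build_initial_response(char *buf, size_t len, const char *token)
{
	return snprintf(buf, len, "n,," KVSEP "auth=Bearer %s" KVSEP KVSEP,
					token);
}
```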
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	free(name);
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		if (type == JSON_TOKEN_STRING)
+		{
+			*ctx->target_field = token;
+
+			ctx->target_field = NULL;
+			ctx->target_field_name = NULL;
+
+			return JSON_SUCCESS;	/* don't free the token we're using */
+		}
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	free(token);
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules state this must be
+	 * at the beginning of the path component, but OIDC defined it at the end
+	 * instead, so we have to search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
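The core derivation rule above can be shown as a much-simplified standalone sketch (`issuer_from_wk` is an illustrative name; it removes the "/.well-known/<suffix>" components but performs none of the scheme, query/fragment, or suffix validation the patch does, and handles only the OIDC-style postfix form):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Derive an issuer identifier from an OIDC-style .well-known URI by
 * truncating at the "/.well-known/" components (RFC 8414 derivation,
 * minus all the validation done by the real implementation).
 */
static void
issuer_from_wk(const char *wkuri, char *issuer, size_t len)
{
	const char *wk = strstr(wkuri, "/.well-known/");

	if (wk)
		snprintf(issuer, len, "%.*s", (int) (wk - wkuri), wkuri);
	else
		snprintf(issuer, len, "%s", wkuri);
}
```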
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		return false;
+
+	/* TODO: what if these override what the user already specified? */
+	/* TODO: what if there's no discovery URI? */
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			return false;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			return false;
+		}
+
+		free(discovery_issuer);
+
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		return false;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	return true;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = strdup(request->token);
+		if (!state->token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			state->token = strdup(request.token);
+			if (!state->token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_BUILTIN_OAUTH
+		/*
+		 * Hand off to our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't yet have a discovery URI with which to request
+				 * a token, ask the server for one explicitly. This doesn't
+				 * require any asynchronous work.
+				 */
+				discover = true;
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, discover, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..ba4d33c79c
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent its final exchange message
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..d260b60c0e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,13 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +586,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * sasl_state already exists, so we must be returning from the
+		 * async authentication loop. Detach its callback now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +672,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +702,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1024,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1193,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1210,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1540,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
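
To make the hook API concrete, here is roughly what an application-side consumer could look like. This is a sketch, not part of the patch; in particular, the return-value convention (0 = "not handled", as in PQdefaultAuthDataHook; nonzero = handled) is my reading of the code, and install_hook is a hypothetical application function.

```c
/* Sketch of an application hook that displays the device prompt itself. */
#include <stdio.h>

#include "libpq-fe.h"

static PQauthDataHook_type prev_hook;

static int
my_auth_data_hook(PGAuthData type, PGconn *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		const PQpromptOAuthDevice *prompt = data;

		fprintf(stderr, "Visit %s and enter the code: %s\n",
				prompt->verification_uri, prompt->user_code);
		return 1;				/* handled (assumed convention) */
	}

	return prev_hook(type, conn, data); /* delegate everything else */
}

void
install_hook(void)
{
	/* Chain to whatever hook was installed before us. */
	prev_hook = PQgetAuthDataHook();
	PQsetAuthDataHook(my_auth_data_hook);
}
```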
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index aaf87e8e88..259e9bbc5e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3680,6 +3699,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3835,6 +3855,19 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3868,7 +3901,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3905,6 +3948,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4586,6 +4664,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4703,6 +4782,12 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7227,6 +7312,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..3c6c7fd23b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -28,6 +28,10 @@ extern "C"
  */
 #include "postgres_ext.h"
 
+#ifdef WIN32
+#include <winsock2.h>			/* for SOCKET */
+#endif
+
 /*
  * These symbols may be used in compile-time #ifdef tests for the availability
  * of v14-and-newer libpq features.
@@ -59,6 +63,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +109,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +192,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +732,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+/* for _PQoauthBearerRequest.async() */
+#ifdef WIN32
+#define SOCKTYPE SOCKET
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										SOCKTYPE *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
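
As a companion sketch for the PQoauthBearerRequest contract above: a hook that satisfies PQAUTHDATA_OAUTH_BEARER_TOKEN synchronously, without installing an async callback. The token source (an environment variable) and the 0/1 return convention are assumptions for illustration, not something this patch specifies.

```c
/* Sketch: supply a Bearer token obtained out of band. Not part of the patch. */
#include <stdlib.h>
#include <string.h>

#include "libpq-fe.h"

static void
cleanup_token(PGconn *conn, PQoauthBearerRequest *request)
{
	/* We own request->token (it came from strdup below). */
	free(request->token);
	request->token = NULL;
}

static int
provide_token(PGAuthData type, PGconn *conn, void *data)
{
	PQoauthBearerRequest *request = data;
	const char *token;

	if (type != PQAUTHDATA_OAUTH_BEARER_TOKEN)
		return 0;				/* not handled; fall back to the default */

	/* Hypothetical out-of-band token source. */
	token = getenv("MY_APP_BEARER_TOKEN");
	if (!token)
		return 0;

	/* The token is available immediately, so no async callback is needed. */
	request->token = strdup(token);
	request->cleanup = cleanup_token;
	return request->token ? 1 : 0;
}
```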
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..df2bd1f389 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,16 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* issuer identifier or discovery URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +518,9 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..e4d92eb402 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if oauth_library == 'curl'
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/interfaces/libpq/pqexpbuffer.c b/src/interfaces/libpq/pqexpbuffer.c
index 037875c523..9473ed6749 100644
--- a/src/interfaces/libpq/pqexpbuffer.c
+++ b/src/interfaces/libpq/pqexpbuffer.c
@@ -46,7 +46,7 @@ static const char *const oom_buffer_ptr = oom_buffer;
  *
  * Put a PQExpBuffer in "broken" state if it isn't already.
  */
-static void
+void
 markPQExpBufferBroken(PQExpBuffer str)
 {
 	if (str->data != oom_buffer)
diff --git a/src/interfaces/libpq/pqexpbuffer.h b/src/interfaces/libpq/pqexpbuffer.h
index d05010066b..9956829a88 100644
--- a/src/interfaces/libpq/pqexpbuffer.h
+++ b/src/interfaces/libpq/pqexpbuffer.h
@@ -121,6 +121,12 @@ extern void initPQExpBuffer(PQExpBuffer str);
 extern void destroyPQExpBuffer(PQExpBuffer str);
 extern void termPQExpBuffer(PQExpBuffer str);
 
+/*------------------------
+ * markPQExpBufferBroken
+ *		Put a PQExpBuffer in "broken" state if it isn't already.
+ */
+extern void markPQExpBufferBroken(PQExpBuffer str);
+
 /*------------------------
  * resetPQExpBuffer
  *		Reset a PQExpBuffer to empty
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index aba7411a1b..91db55f13e 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -66,6 +66,7 @@ pgxs_kv = {
   'SUN_STUDIO_CC': 'no', # not supported so far
 
   # want the chosen option, rather than the library
+  'with_builtin_oauth' : oauth_library,
   'with_ssl' : ssl_library,
   'with_uuid': uuidopt,
 
@@ -229,6 +230,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14..bdfd5f1f8d 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index c829b61953..bd13e4afbd 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..0200a6a63e
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_builtin_oauth
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 0000000000..b0fcc07c2a
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,44 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks that
+ *	  always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..36cdde752c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_builtin_oauth': oauth_library,
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 0000000000..b9278a2930
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,157 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Verify OAuth hook functionality in libpq
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGAuthData type, PGconn *conn, void *data);
+
+static void
+usage(char *argv[])
+{
+	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	fprintf(stderr, "recognized flags:\n");
+	fprintf(stderr, " -h, --help				show this message\n");
+	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
+	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
+	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
+}
+
+static bool no_hook = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+static int
+handle_auth_data(PGAuthData type, PGconn *conn, void *data)
+{
+	PQoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..1b0a04cf1f
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,428 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_builtin_oauth} ne 'curl')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+		"connect",
+		expected_stderr =>
+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check(
+		"user $user: validator receives correct parameters",
+		$log_start,
+		log_like => [
+			qr/oauth_validator: token="9243959234", role="$user"/,
+			qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		]);
+	$node->log_check(
+		"user $user: validator sets authenticated identity",
+		$log_start,
+		log_like =>
+		  [ qr/connection authenticated: identity="test" method=oauth/, ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+		"connect",
+		expected_stderr =>
+		  qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check(
+		"user $user: validator receives correct parameters",
+		$log_start,
+		log_like => [
+			qr/oauth_validator: token="9243959234-alt", role="$user"/,
+			qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+		]);
+	$node->log_check(
+		"user $user: validator sets authenticated identity",
+		$log_start,
+		log_like =>
+		  [ qr/connection authenticated: identity="testalt" method=oauth/, ]);
+	$log_start = $log_end;
+}
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
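The `connstr()` helper above and the server's `do_POST()` share a small convention: test parameters travel as Base64-encoded JSON inside `oauth_client_id`. Both ends of that convention can be sketched standalone in Python (function names here are illustrative, not part of the patch):

```python
import base64
import json


def encode_test_params(**params) -> str:
    """Client side: pack test instructions into an oauth_client_id value,
    as the Perl connstr() helper does (Base64 of a JSON object)."""
    return base64.b64encode(json.dumps(params).encode()).decode("ascii")


def decode_test_params(client_id: str) -> dict:
    """Server side: what oauth_server.py does for the /param issuer."""
    return json.loads(base64.b64decode(client_id))
```

So `connstr(stage => 'token', retries => 2)` produces a client_id that decodes back to `{"stage": "token", "retries": 2}` on the server.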
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "user=test dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope=''";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+		"validator is used for $user",
+		expected_stderr =>
+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_start = $node->wait_for_log(qr/connection authorized/, $log_start);
+}
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 0000000000..5b56292c00
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,114 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	diag "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_builtin_oauth} ne 'curl')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth/
+	);
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..ae7ea7af6d
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,370 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
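`_check_authn()` above expects HTTP Basic credentials built the way RFC 6749 §2.3.1 prescribes for client_secret_basic: client_id and client_secret are each form-urlencoded *before* being joined with `:` and Base64-encoded, so a `:` or space in either half can't break parsing. A small sketch of the construction the server verifies (helper name is illustrative):

```python
import base64
import urllib.parse


def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Build the Authorization value _check_authn() verifies: both halves
    are urlencoded first, then joined with ':' and Base64-encoded."""
    creds = (urllib.parse.quote_plus(client_id)
             + ":" + urllib.parse.quote_plus(client_secret))
    return "Basic " + base64.b64encode(creds.encode()).decode("ascii")
```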
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..dbba326bc4
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,100 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 508e5e3917..8357272d67 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2513,6 +2513,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2556,7 +2561,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 08521d51a9..41030d9c16 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1718,6 +1722,8 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1782,6 +1788,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1944,11 +1951,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3069,6 +3079,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3462,6 +3474,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3665,6 +3679,7 @@ normal_rand_fctx
 nsphash_hash
 ntile_context
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
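
[Aside for reviewers: the retry bookkeeping in the mock server's token() handler above enforces the RFC 8628 rule that clients wait at least `interval` seconds between polls of the token endpoint. A minimal client-side sketch of the corresponding polling loop — hypothetical, not part of the patch; `fetch_token` stands in for the real HTTP POST, and the error codes mirror the server's pending/slow_down responses — might look like this:]

```python
import time


def poll_for_token(fetch_token, device_code, interval=5, timeout=60):
    """
    Polls an RFC 8628 token endpoint until authorization completes.

    fetch_token(device_code) must return a dict shaped like the JSON the
    mock server's token() handler produces: {"access_token": ...} on
    success, or {"error": "authorization_pending" | "slow_down" | ...}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = fetch_token(device_code)
        if "access_token" in resp:
            return resp

        err = resp.get("error")
        if err == "slow_down":
            # Per RFC 8628 Sec. 3.5, bump the polling interval by 5 seconds.
            interval += 5
        elif err != "authorization_pending":
            raise RuntimeError(f"token exchange failed: {err}")

        # Respect the interval -- this is exactly what the mock server's
        # last_try/min_delay assertion checks on the other side.
        time.sleep(interval)

    raise TimeoutError("user did not authorize within the timeout")
```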

Attachment: v37-0002-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 3d169848dbed8b9a88e732c5a3331900a84a9e71 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v37 2/2] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 ++
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  195 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2439 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 ++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6219 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 26e747d559..fede30c02c 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -319,6 +319,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -403,8 +404,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index f912029bb5..6116fcf1e3 100644
--- a/meson.build
+++ b/meson.build
@@ -3376,6 +3376,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3542,6 +3545,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..9caa3a56d4
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
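As an aside for reviewers: Hi() as defined in RFC 5802, Section 2.2 is exactly PBKDF2 with HMAC-SHA-256 and a full-length derived key, so the helper above can be cross-checked against the stdlib. A minimal sketch (standalone, not part of the patch):

```python
import hashlib
import hmac


def hi(password: bytes, salt: bytes, iterations: int) -> bytes:
    # U1 = HMAC(password, salt || INT(1)); Un = HMAC(password, U(n-1));
    # Hi = U1 XOR U2 XOR ... XOR Ui (RFC 5802, Section 2.2).
    u = hmac.new(password, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    acc = u
    for _ in range(iterations - 1):
        u = hmac.new(password, u, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, u))
    return acc


# Hi(str, salt, i) should match PBKDF2-HMAC-SHA-256 with the same inputs.
assert hi(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```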
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
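For reference, the proof arithmetic this test verifies can be exercised on its own: the server stores only StoredKey = H(ClientKey), and checks a proof by XOR-ing it with ClientSignature and re-hashing. A self-contained sketch using arbitrary inputs (not the fixture's actual wire data):

```python
import hashlib
import hmac


def hmac256(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


# Client side (RFC 5802, Section 3).
salted = hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
client_key = hmac256(salted, b"Client Key")
stored_key = hashlib.sha256(client_key).digest()

auth_message = b"n=,r=abc,r=abcdef,s=MTIzNDU=,i=2,c=biws,r=abcdef"
client_signature = hmac256(stored_key, auth_message)
client_proof = xor(client_key, client_signature)

# Server side: recover ClientKey from the proof and compare hashes.
recovered = xor(client_proof, client_signature)
assert hashlib.sha256(recovered).digest() == stored_key
```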
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..60e57dba86
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2439 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_builtin_oauth") == "none",
+    reason="OAuth client tests require --with-builtin-oauth support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial SASL
+    response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # maxsplit=1: the value may itself contain '='
+    assert key == b"auth"
+
+    return value
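The layout parsed here follows RFC 7628, Section 3.1: a GS2 header, then ^A-separated key/value pairs, terminated by a double ^A. Building one by hand (with a made-up token) shows the four fields the assertions expect:

```python
# Construct a minimal OAUTHBEARER client initial response by hand.
token = "abc123"  # hypothetical bearer token
initial = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"

kvpairs = initial.split(b"\x01")
assert kvpairs == [b"n,,", b"auth=Bearer abc123", b"", b""]

# maxsplit=1 keeps any '=' characters inside the value intact.
key, value = kvpairs[1].split(b"=", 1)
assert (key, value) == (b"auth", b"Bearer abc123")
```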
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawResponse):
+                    resp = json.dumps(resp)
+                resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
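The strictness enforced by do_POST() above lines up with stdlib behavior: urllib.parse.urlencode() produces the '+' space encoding the handler requires, and parse_qs() with strict_parsing=True round-trips it. A quick illustration:

```python
from urllib.parse import parse_qs, urlencode

body = urlencode({
    "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
    "scope": "openid email",
})
assert " " not in body and "%20" not in body  # spaces encoded as '+'

params = parse_qs(body, keep_blank_values=True, strict_parsing=True,
                  encoding="utf-8", errors="strict")
assert params["scope"] == ["openid email"]
```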
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the return value when the test registers no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
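The retry behavior exercised above mirrors the RFC 8628 device-flow loop: poll the token endpoint, wait `interval` seconds on authorization_pending, and stop on success or expiry. A minimal sketch of that client-side loop, where request_token is a stand-in for the real HTTP call:

```python
import time


def poll_for_token(request_token, interval=0, max_attempts=5):
    """Poll a device-flow token endpoint until it stops returning
    authorization_pending (RFC 8628, Section 3.5)."""
    for _ in range(max_attempts):
        status, resp = request_token()
        if status == 200:
            return resp["access_token"]
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(resp.get("error"))
        time.sleep(interval)
    raise TimeoutError("device authorization expired")


# Simulate one pending response followed by success.
responses = iter([
    (400, {"error": "authorization_pending"}),
    (200, {"access_token": "tok", "token_type": "bearer"}),
])
assert poll_for_token(lambda: next(responses)) == "tok"
```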
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                fail_oauth_handshake(
+                    conn,
+                    {
+                        "status": "invalid_token",
+                        "openid-configuration": discovery_uri,
+                    },
+                )
+
+        # Expect the client to connect again.
+        sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
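For reference, the two discovery-URI layouts that the parametrize ids above call "OIDC style" and "IETF style" differ only in where the `.well-known` segment lands relative to the issuer's path. A minimal sketch of the two construction rules (helper names are ours, not part of the patch):

```python
from urllib.parse import urlsplit, urlunsplit

def oidc_discovery_url(issuer):
    # OIDC Discovery appends the well-known suffix after the issuer's path.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def ietf_discovery_url(issuer):
    # RFC 8414 instead inserts the well-known suffix between the authority
    # and the issuer's path component.
    parts = urlsplit(issuer)
    path = "/.well-known/oauth-authorization-server" + parts.path
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))
```

The "sandwiched prefix" and "buried in the authority" cases below are exactly the layouts these rules can never produce, which is why the client must reject them.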
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+                pq3.send(
+                    conn,
+                    pq3.types.AuthnRequest,
+                    type=pq3.authn.SASLContinue,
+                    body=json.dumps(resp).encode("utf-8"),
+                )
+
+                # FIXME: the client disconnects at this point; it'd be nicer if
+                # it completed the exchange.
+
+            # The client should not reconnect.
+
+    else:
+        expect_disconnected_handshake(sock)
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL mechanism. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error once the
+    # server asks for OAUTHBEARER, before the client contacts the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
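The Basic-credentials check in check_client_authn() above inverts the encoding rule from RFC 6749, Sec. 2.3.1. As a standalone sketch of the rule the client is expected to follow (the helper name is ours, not libpq's):

```python
import base64
from urllib.parse import quote_plus

def basic_auth_header(client_id, client_secret):
    # RFC 6749, Sec. 2.3.1: each credential is form-urlencoded first, then
    # the pair is joined with ":" and base64-encoded into a Basic header.
    creds = f"{quote_plus(client_id)}:{quote_plus(client_secret)}"
    token = base64.b64encode(creds.encode("utf-8")).decode("ascii")
    return f"Basic {token}"
```

This is why the test compares against quote_plus()-encoded values rather than the raw client_id/secret.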
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
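The interval bookkeeping in token_endpoint() above follows the RFC 8628 polling rules. Condensed into a standalone sketch (the function name is ours):

```python
def next_poll_interval(current, error=None):
    # RFC 8628: if the device authorization response omits "interval",
    # the client defaults to 5 seconds. A "slow_down" error adds 5 seconds
    # to the current interval; "authorization_pending" leaves it unchanged.
    if current is None:
        current = 5  # server omitted "interval"
    if error == "slow_down":
        current += 5
    return current
```

These two rules are what the omit_interval and error_code parametrizations exercise.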
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
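For readers unfamiliar with the async flow, here is a minimal pure-Python sketch of the driver loop that libpq effectively runs around get_token() above. The constant and helper names are ours: POLLING_* stands in for the PGRES_POLLING_* codes, and wait_readable stands in for select()ing on the returned alt socket:

```python
POLLING_READING, POLLING_OK, POLLING_FAILED = 0, 1, 2

def drive_token_callback(callback, wait_readable):
    """Repeatedly invoke an async token callback until it finishes."""
    altsock = [None]  # out-parameter, like p_altsock in the test above

    while True:
        status = callback(altsock)
        if status == POLLING_OK:
            return True
        if status == POLLING_FAILED:
            return False
        # POLLING_READING: block until the callback's socket is ready,
        # then call the callback again.
        wait_readable(altsock[0])
```

Each POLLING_READING return hands control back to the driver, which is why the test can park the client on self_pipe and wake it from a timer thread.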
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one. It's not very efficient,
+    but it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema() tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response.encode("utf-8"),
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network if a test fails, an
+# invalid IPv4 address (256.256.256.256) is used as the hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="scalar grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id="some-id",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": to_http(openid_provider.discovery_uri),
+            }
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=json.dumps(resp).encode("utf-8"),
+            )
+
+            # FIXME: the client disconnects at this point; it'd be nicer if
+            # it completed the exchange.
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to py.test. We add one to request the
+    creation of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..d529c4aabe
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_builtin_oauth': oauth_library,
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds a translation map for hexdumps: unprintable and non-ASCII bytes
+    are translated to '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
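(Aside, not part of the patch: the `bytes.maketrans()` approach above can be illustrated in isolation. This is a hypothetical standalone sketch with the same semantics — ASCII bytes are kept only if printable, everything above 127 is masked.)

```python
# Build a map sending unprintable or non-ASCII bytes to b".", as the
# _hexdump_translation_map() helper does.
unprintable = bytes(
    i for i in range(256) if i > 127 or not chr(i).isprintable()
)
table = bytes.maketrans(unprintable, b"." * len(unprintable))

assert b"Hello\x00\xff!".translate(table) == b"Hello..!"
```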
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
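(Aside, not part of the patch: the self-inclusive length-prefixed framing that the Startup construct above encodes can be sketched with nothing but the standard library, which may help when reading the hexdumps the debug stream produces. The `build_startup` helper below is a hypothetical illustration, not the module's API.)

```python
import struct

def build_startup(params, proto=0x00030000):
    """Frame a v3 startup packet: int32 length (including itself),
    int32 protocol version, then NUL-terminated key/value pairs,
    closed by a final NUL terminator."""
    payload = b""
    for k, v in params.items():
        payload += k.encode() + b"\x00" + v.encode() + b"\x00"
    payload += b"\x00"  # terminator for the parameter list
    # The length field covers itself (4) plus the version (4).
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice", "database": "postgres"})
length, proto = struct.unpack("!ii", pkt[:8])
assert length == len(pkt)
assert proto == 0x00030000
```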
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..42af80c73e
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
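(Aside, not part of the patch: the ExitStack pattern used by the connect fixture above — register resources with the stack as they are created, and let the stack tear all of them down together — can be illustrated in isolation. The Resource class below is a toy stand-in for the sockets the fixture manages.)

```python
import contextlib
import io

closed = []

class Resource(io.StringIO):
    """Toy resource that records when it is closed."""

    def __init__(self, name):
        super().__init__()
        self.name = name

    def close(self):
        closed.append(self.name)
        super().close()

# Resources created inside the with block are registered with the
# stack; all of them are closed (in LIFO order) when the block exits,
# even if the factory is called an arbitrary number of times.
with contextlib.ExitStack() as stack:
    for name in ("a", "b"):
        stack.enter_context(Resource(name))

assert closed == ["b", "a"]
```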
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..ee39c2a14e
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && !strcmp(token, expected_bearer))
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..ea31ad4f87
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = ["oauth /^(.*)@example\\.com$ \\1\n"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
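(Aside, not part of the patch: the size arithmetic above relies on base64url encoding 3 raw bytes into 4 characters, so `token_urlsafe(size // 4 * 3)` yields exactly `size` characters whenever `size` is a multiple of 4 — which is why the function rejects other sizes. A quick check:)

```python
import secrets

# 3 raw bytes encode to 4 base64url characters with no padding needed,
# so requesting size // 4 * 3 bytes produces a size-character token.
for size in (16, 32, 64):
    token = secrets.token_urlsafe(size // 4 * 3)
    assert len(token) == size
```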
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's expected behavior.
+    Any GUCs changed via the returned setter are reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value, but only the first time a GUC is
+                # touched, so that teardown restores the original setting even
+                # if the same GUC is set more than once during a test.
+                if guc not in prev:
+                    c.execute(
+                        sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc))
+                    )
+                    prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba..ffdf760d79 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1
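
For what it's worth, the --skip-without-extra check added to testwrap above can be exercised standalone. A minimal sketch of the same decision logic (should_skip is a made-up helper name for illustration, not part of the patch):

```python
import os


def should_skip(skip_without_extra, pg_test_extra_default=None, env=None):
    """Mirror of the testwrap skip logic: skip the test unless PG_TEST_EXTRA
    (from the environment, falling back to the build-time --pg-test-extra
    value) contains the required token."""
    if env is None:
        env = os.environ
    if skip_without_extra is None:
        return False
    extras = env.get("PG_TEST_EXTRA", pg_test_extra_default)
    return extras is None or skip_without_extra not in extras.split()


# Token present in PG_TEST_EXTRA: the test runs.
print(should_skip("python", env={"PG_TEST_EXTRA": "python ssl"}))  # False
# PG_TEST_EXTRA unset and no build-time default: the test is skipped.
print(should_skip("python", env={}))  # True
```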

#165Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#163)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 21 Nov 2024, at 19:51, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Tue, Nov 19, 2024 at 3:05 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Personally, I'm not even a fan of the -Dssl/--with-ssl system. I'm more
attached to --with-openssl.

In hindsight, --with-ssl was prematurely pulled out from the various TLS
backend patchsets that were proposed a while back. I wonder if we should
reword "Obsolete equivalent of --with-ssl=openssl" in the docs with plain
"Equivalent of ..". (which is really for another thread.)

But if you want to stick with that, a more
suitable naming would be something like, say, --with-httplib=curl, which
means, use curl for all your http needs. Because if we later add other
functionality that can use some http, I don't think we want to enable or
disable them all individually, or even mix different http libraries for
different features. In practice, curl is a widely available and
respected library, so I'd expect packagers to just turn it all on
without much further consideration.

Okay, I can see that. I'll work on replacing --with-builtin-oauth. Any
votes from the gallery on --with-httplib vs. --with-libcurl?

I think I would vote for --with-libcurl.

--
Daniel Gustafsson

#166Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#164)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thanks for the updated version, it's really starting to take good shape. A few
small comments on v37 from a first quick skim-through:

+	if (!strcmp(key, AUTH_KEY))
+	if (*expected_bearer && !strcmp(token, expected_bearer))
Nitpickery, but these should be (strcmp(xxx) == 0) to match project style.
(Ironically, the only !strcmp in the code was my mistake in ebc8b7d4416.)

+ foreach(l, elemlist)
This one seems like a good candidate for a foreach_ptr construction.

+ *output = strdup(kvsep);
There is no check to ensure strdup worked AFAICT, and even though it's quite
unlikely to fail we definitely don't want to continue if it did.

fail_validator.c seems to have the #include list copied from validator.c and
pulls in unnecessarily many headers.

+ client's dummy reponse, and issues a FATAL error to end the exchange.
s/reponse/response/

In validate() I wonder if we should double-check that we have a proper set of
validator callbacks loaded, just to make even more sure that we don't introduce
anything terrible in this codepath.

I will keep reviewing this version to try and provide more feedback.

--
Daniel Gustafsson

Attachments:

v37comments.diff.txt (text/plain)
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index eea5032de8..2bb84b7bc8 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -424,7 +424,7 @@ parse_kvpairs_for_auth(char **input)
 		value = sep + 1;
 		validate_kvpair(key, value);
 
-		if (!strcmp(key, AUTH_KEY))
+		if (strcmp(key, AUTH_KEY) == 0)
 		{
 			if (auth)
 				ereport(ERROR,
@@ -619,6 +619,15 @@ validate(Port *port, const char *auth)
 	if (!(token = validate_token_format(auth)))
 		return false;
 
+	/*
+	 * Ensure that we have a validation library loaded, this should always be
+	 * the case and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
 	/* Call the validation function from the validator module */
 	ret = ValidatorCallbacks->validate_cb(validator_module_state,
 										  token, port->user_name);
@@ -708,6 +717,7 @@ load_validator_library(const char *libname)
 					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
 
 	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
 
 	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
@@ -738,7 +748,6 @@ check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
 	char	   *file_name = hbaline->sourcefile;
 	char	   *rawstring;
 	List	   *elemlist = NIL;
-	ListCell   *l;
 
 	*err_msg = NULL;
 
@@ -787,10 +796,8 @@ check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
 		goto done;
 	}
 
-	foreach(l, elemlist)
+	foreach_ptr(char, allowed, elemlist)
 	{
-		char	   *allowed = lfirst(l);
-
 		if (strcmp(allowed, hbaline->oauth_validator) == 0)
 			goto done;
 	}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 0d4185194d..5400e6df7a 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -841,6 +841,11 @@ oauth_exchange(void *opaq, bool final,
 			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
 			 */
 			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
 			*outputlen = strlen(*output);	/* == 1 */
 
 			state->state = FE_OAUTH_SERVER_ERROR;
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
index b0fcc07c2a..c438ed4d17 100644
--- a/src/test/modules/oauth_validator/fail_validator.c
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -16,9 +16,6 @@
 
 #include "fmgr.h"
 #include "libpq/oauth.h"
-#include "miscadmin.h"
-#include "utils/guc.h"
-#include "utils/memutils.h"
 
 PG_MODULE_MAGIC;
 
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
index 60e57dba86..ce8e0f12c9 100644
--- a/src/test/python/client/test_oauth.py
+++ b/src/test/python/client/test_oauth.py
@@ -108,7 +108,7 @@ def get_auth_value(initial):
 def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
     """
     Sends a failure response via the OAUTHBEARER mechanism, consumes the
-    client's dummy reponse, and issues a FATAL error to end the exchange.
+    client's dummy response, and issues a FATAL error to end the exchange.
 
     sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
     response. If provided, errmsg is used in the FATAL ErrorResponse.
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
index ee39c2a14e..415748b9a6 100644
--- a/src/test/python/server/oauthtest.c
+++ b/src/test/python/server/oauthtest.c
@@ -108,7 +108,7 @@ test_validate(ValidatorModuleState *state, const char *token, const char *role)
 	}
 	else
 	{
-		if (*expected_bearer && !strcmp(token, expected_bearer))
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
 			res->authorized = true;
 		if (set_authn_id)
 			res->authn_id = authn_id;
#167Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#166)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Nov 27, 2024 at 9:27 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Thanks for the updated version, it's really starting to take good shape. A few
small comments on v37 from a first quick skim-through:

Applied in v38, thanks!

fail_validator.c seems to have the #include list copied from validator.c and
pulls in unnecessarily many headers.

Oops, thanks for the cleanup.

In validate() I wonder if we should double-check that we have a proper set of
validator callbacks loaded, just to make even more sure that we don't introduce
anything terrible in this codepath.

Seems good. I think this part of the API is going to need an
ABI-compatibility pass, too. For example, do we want a module to
allocate the result struct itself (which locks in the struct length)?
And should we have a MAGIC_NUMBER of some sort in the static callback
list, maybe?
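
For concreteness, here's roughly the shape I have in mind -- a magic number
plus a struct-size stamp in the callback list, checked right after module
init. (All names here are hypothetical sketches, not what's in the patch;
the real callback signatures differ.)

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical magic/version stamp for the validator callback struct. */
#define PG_OAUTH_VALIDATOR_MAGIC 0x20241127

typedef struct ValidatorModuleCallbacks
{
	int			magic;			/* must be PG_OAUTH_VALIDATOR_MAGIC */
	size_t		struct_size;	/* sizeof() as compiled into the module */

	/* the actual callbacks would follow; stubbed down to one here */
	int			(*validate_cb) (const char *token, const char *role);
} ValidatorModuleCallbacks;

/*
 * Server-side sanity check, run once after calling the module's init
 * function: reject modules built against a different struct layout.
 */
static int
callbacks_compatible(const ValidatorModuleCallbacks *cb)
{
	return cb != NULL
		&& cb->magic == PG_OAUTH_VALIDATOR_MAGIC
		&& cb->struct_size == sizeof(ValidatorModuleCallbacks)
		&& cb->validate_cb != NULL;
}
```

Letting the server allocate the result struct (rather than the module)
would pair well with this, since then only the callback list's length is
locked in by the ABI.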

--

Now that our JSON API can be put into "leakproof" mode [1], I can fuzz
the parser implementations. libfuzzer has been able to find one known
issue, plus two more that I was unaware of:

1. Duplicate fields caused the previous field values to leak. (This
was the documented FIXME; we now error out in this case.)
2. The array-of-strings parsing had a subtle logic bug: if field "a"
was expected to be an array of strings, we would also accept the
construction `"a": "1"` as if it were equivalent to `"a": ["1"]`. This
messed up the internal tracking and tripped assertions.
3. handle_oauth_sasl_error() was leaking all of its parsed fields if
they didn't get hooked into the PGconn struct before a failure.

All three are fixed in v38; I will keep working on expanding the
amount of code covered by my fuzzers.
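
To illustrate the second bug above, the essence of the fix is to refuse a
bare scalar for an array-typed field unless the array start was actually
seen. (This is a heavily simplified sketch with made-up types, not the
actual fe-auth-oauth-curl.c callback code.)

```c
#include <assert.h>

typedef enum
{
	JSON_TOKEN_STRING,
	JSON_TOKEN_ARRAY_START
} JsonTokenType;

/* Simplified view of the per-field parse state. */
struct oauth_field
{
	JsonTokenType expected;		/* type the field is declared to have */
	int			in_array;		/* have we entered this field's array? */
};

/*
 * Previously, `"a": "1"` was accepted for an array-typed field "a" as if
 * it were `"a": ["1"]`, which confused the internal tracking. Now a
 * scalar is only accepted inside an open array (or for scalar fields).
 */
static int
accept_scalar(const struct oauth_field *field)
{
	if (field->expected == JSON_TOKEN_ARRAY_START && !field->in_array)
		return 0;				/* reject the bare-scalar shortcut */
	return 1;
}
```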

Additionally, the following pieces of feedback have been addressed:
- We now validate the incoming JSON as UTF-8 before lexing it, to
prevent invalid multibyte sequences from sneaking through in the
strings [2]. Still need to determine how \uXXXX sequences will
interact with the more punishing client encodings in error messages.
- --with-builtin-oauth/-Dwith_builtin_oauth has been renamed
--with-libcurl/-Dlibcurl, and the Autoconf side uses PKG_CHECK_MODULES
exclusively.
- markPQExpBufferBroken has been replaced with termPQExpBuffer in
append_urlencoded()
- the anonymous union has been named
- pg_be_oauth_mech uses a designated initializer

Next up, the many-many documentation requests, now that the fuzzers
can run while I write.

Thanks,
--Jacob

[1]: https://postgr.es/c/5c32c21afe6
[2]: /messages/by-id/ZjmjPyA29dIJjmjI@paquier.xyz

Attachments:

since-v37.diff.txt (text/plain; charset=UTF-8)
1:  16f1b8fc02 ! 1:  785add8015 Add OAUTHBEARER SASL mechanism
    @@ .cirrus.tasks.yml: task:
              --buildtype=debug \
              -Dcassert=true -Dinjection_points=true \
              -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
    -+        -Dbuiltin_oauth=curl \
    ++        -Dlibcurl=enabled \
              -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
              build
          EOF
     @@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    +   --with-gssapi
    +   --with-icu
    +   --with-ldap
    ++  --with-libcurl
    +   --with-libxml
        --with-libxslt
        --with-llvm
    -   --with-lz4
    -+  --with-builtin-oauth=curl
    -   --with-pam
    -   --with-perl
    -   --with-python
     @@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    +   --with-zstd
      
      LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
    ++  -Dlibcurl=enabled
        -Dllvm=enabled
    -+  -Dbuiltin_oauth=curl
        -Duuid=e2fs
      
    - 
     @@ .cirrus.tasks.yml: task:
          EOF
      
    @@ config/programs.m4: if test "$pgac_cv_ldap_safe" != yes; then
      *** also uses LDAP will crash on exit.])
      fi])
      
    -+# PGAC_CHECK_LIBCURL
    -+# ------------------
    -+# Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability to
    -+# explicitly set TLS 1.3 ciphersuites).
    - 
    -+AC_DEFUN([PGAC_CHECK_LIBCURL],
    -+[AC_CACHE_CHECK([for compatible libcurl], [pgac_cv_check_libcurl],
    -+[AC_COMPILE_IFELSE([AC_LANG_PROGRAM(
    -+[#include <curl/curlver.h>
    -+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
    -+choke me
    -+#endif], [])],
    -+[pgac_cv_check_libcurl=yes],
    -+[pgac_cv_check_libcurl=no])])
    -+
    -+if test "$pgac_cv_check_libcurl" != yes; then
    -+    AC_MSG_ERROR([
    -+*** The installed version of libcurl is too old to use with PostgreSQL.
    -+*** libcurl version 7.61.0 or later is required.])
    -+fi])
    - 
    +-
    +-
      # PGAC_CHECK_READLINE
      # -------------------
    + # Check for the readline library and dependent libraries, either
     
      ## configure ##
    -@@ configure: with_uuid
    +@@ configure: XML2_LIBS
    + XML2_CFLAGS
    + XML2_CONFIG
    + with_libxml
    ++LIBCURL_LIBS
    ++LIBCURL_CFLAGS
    ++with_libcurl
    + with_uuid
      with_readline
      with_systemd
    - with_selinux
    -+with_builtin_oauth
    - with_ldap
    - with_krb_srvnam
    - krb_srvtab
    -@@ configure: with_krb_srvnam
    - with_pam
    - with_bsd_auth
    - with_ldap
    -+with_builtin_oauth
    - with_bonjour
    - with_selinux
    - with_systemd
    +@@ configure: with_readline
    + with_libedit_preferred
    + with_uuid
    + with_ossp_uuid
    ++with_libcurl
    + with_libxml
    + with_libxslt
    + with_system_tzdata
    +@@ configure: PKG_CONFIG_PATH
    + PKG_CONFIG_LIBDIR
    + ICU_CFLAGS
    + ICU_LIBS
    ++LIBCURL_CFLAGS
    ++LIBCURL_LIBS
    + XML2_CONFIG
    + XML2_CFLAGS
    + XML2_LIBS
     @@ configure: Optional Packages:
    -   --with-pam              build with PAM support
    -   --with-bsd-auth         build with BSD Authentication support
    -   --with-ldap             build with LDAP support
    -+  --with-builtin-oauth=LIB
    -+                          use LIB for built-in OAuth 2.0 client flows (curl)
    -   --with-bonjour          build with Bonjour support
    -   --with-selinux          build with SELinux support
    -   --with-systemd          build with systemd support
    -@@ configure: $as_echo "$with_ldap" >&6; }
    +                           prefer BSD Libedit over GNU Readline
    +   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
    +   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
    ++  --with-libcurl          build with libcurl support for OAuth client flows
    +   --with-libxml           build with XML support
    +   --with-libxslt          use XSLT support when building contrib/xml2
    +   --with-system-tzdata=DIR
    +@@ configure: Some influential environment variables:
    +               path overriding pkg-config's built-in search path
    +   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
    +   ICU_LIBS    linker flags for ICU, overriding pkg-config
    ++  LIBCURL_CFLAGS
    ++              C compiler flags for LIBCURL, overriding pkg-config
    ++  LIBCURL_LIBS
    ++              linker flags for LIBCURL, overriding pkg-config
    +   XML2_CONFIG path to xml2-config utility
    +   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
    +   XML2_LIBS   linker flags for XML2, overriding pkg-config
    +@@ configure: fi
      
      
      
     +#
    -+# OAuth 2.0
    ++# libcurl
     +#
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with built-in OAuth client support" >&5
    -+$as_echo_n "checking whether to build with built-in OAuth client support... " >&6; }
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support for OAuth client flows" >&5
    ++$as_echo_n "checking whether to build with libcurl support for OAuth client flows... " >&6; }
     +
     +
     +
    -+# Check whether --with-builtin-oauth was given.
    -+if test "${with_builtin_oauth+set}" = set; then :
    -+  withval=$with_builtin_oauth;
    ++# Check whether --with-libcurl was given.
    ++if test "${with_libcurl+set}" = set; then :
    ++  withval=$with_libcurl;
     +  case $withval in
     +    yes)
    -+      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
    ++
    ++$as_echo "#define USE_LIBCURL 1" >>confdefs.h
    ++
     +      ;;
     +    no)
    -+      as_fn_error $? "argument required for --with-builtin-oauth option" "$LINENO" 5
    ++      :
     +      ;;
     +    *)
    -+
    ++      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
     +      ;;
     +  esac
     +
    ++else
    ++  with_libcurl=no
    ++
     +fi
     +
     +
    -+if test x"$with_builtin_oauth" = x"" ; then
    -+  with_builtin_oauth=no
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
    ++$as_echo "$with_libcurl" >&6; }
    ++
    ++
    ++if test "$with_libcurl" = yes ; then
    ++  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
    ++  # to explicitly set TLS 1.3 ciphersuites).
    ++
    ++pkg_failed=no
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
    ++$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
    ++
    ++if test -n "$LIBCURL_CFLAGS"; then
    ++    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
    ++ elif test -n "$PKG_CONFIG"; then
    ++    if test -n "$PKG_CONFIG" && \
    ++    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
    ++  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
    ++  ac_status=$?
    ++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
    ++  test $ac_status = 0; }; then
    ++  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
    ++		      test "x$?" != "x0" && pkg_failed=yes
    ++else
    ++  pkg_failed=yes
    ++fi
    ++ else
    ++    pkg_failed=untried
    ++fi
    ++if test -n "$LIBCURL_LIBS"; then
    ++    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
    ++ elif test -n "$PKG_CONFIG"; then
    ++    if test -n "$PKG_CONFIG" && \
    ++    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
    ++  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
    ++  ac_status=$?
    ++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
    ++  test $ac_status = 0; }; then
    ++  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
    ++		      test "x$?" != "x0" && pkg_failed=yes
    ++else
    ++  pkg_failed=yes
     +fi
    ++ else
    ++    pkg_failed=untried
    ++fi
    ++
     +
    -+if test x"$with_builtin_oauth" = x"curl"; then
     +
    -+$as_echo "#define USE_BUILTIN_OAUTH 1" >>confdefs.h
    ++if test $pkg_failed = yes; then
    ++        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
    ++$as_echo "no" >&6; }
     +
    ++if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
    ++        _pkg_short_errors_supported=yes
    ++else
    ++        _pkg_short_errors_supported=no
    ++fi
    ++        if test $_pkg_short_errors_supported = yes; then
    ++	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
    ++        else
    ++	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
    ++        fi
    ++	# Put the nasty error message in config.log where it belongs
    ++	echo "$LIBCURL_PKG_ERRORS" >&5
    ++
    ++	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
    ++
    ++$LIBCURL_PKG_ERRORS
    ++
    ++Consider adjusting the PKG_CONFIG_PATH environment variable if you
    ++installed software in a non-standard prefix.
    ++
    ++Alternatively, you may set the environment variables LIBCURL_CFLAGS
    ++and LIBCURL_LIBS to avoid the need to call pkg-config.
    ++See the pkg-config man page for more details." "$LINENO" 5
    ++elif test $pkg_failed = untried; then
    ++        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
    ++$as_echo "no" >&6; }
    ++	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
    ++$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
    ++as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
    ++is in your PATH or set the PKG_CONFIG environment variable to the full
    ++path to pkg-config.
    ++
    ++Alternatively, you may set the environment variables LIBCURL_CFLAGS
    ++and LIBCURL_LIBS to avoid the need to call pkg-config.
    ++See the pkg-config man page for more details.
    ++
    ++To get pkg-config, see <http://pkg-config.freedesktop.org/>.
    ++See \`config.log' for more details" "$LINENO" 5; }
    ++else
    ++	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
    ++	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
    ++        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
    ++$as_echo "yes" >&6; }
     +
    -+$as_echo "#define USE_OAUTH_CURL 1" >>confdefs.h
    ++fi
     +
     +  # OAuth requires python for testing
     +  if test "$with_python" != yes; then
    -+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests requires --with-python to run" >&5
    -+$as_echo "$as_me: WARNING: *** OAuth support tests requires --with-python to run" >&2;}
    ++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
    ++$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
     +  fi
    -+elif test x"$with_builtin_oauth" != x"no"; then
    -+  as_fn_error $? "--with-builtin-oauth must specify curl" "$LINENO" 5
     +fi
     +
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_builtin_oauth" >&5
    -+$as_echo "$with_builtin_oauth" >&6; }
    -+
    -+
     +
      #
    - # Bonjour
    + # XML
      #
     @@ configure: fi
      
    @@ configure: fi
     +# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
     +# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
     +# dependency on that platform?
    -+if test "$with_builtin_oauth" = curl ; then
    ++if test "$with_libcurl" = yes ; then
     +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
     +$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
     +if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
    @@ configure: fi
     +  LIBS="-lcurl $LIBS"
     +
     +else
    -+  as_fn_error $? "library 'curl' is required for --with-builtin-oauth=curl" "$LINENO" 5
    -+fi
    -+
    -+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for compatible libcurl" >&5
    -+$as_echo_n "checking for compatible libcurl... " >&6; }
    -+if ${pgac_cv_check_libcurl+:} false; then :
    -+  $as_echo_n "(cached) " >&6
    -+else
    -+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
    -+/* end confdefs.h.  */
    -+#include <curl/curlver.h>
    -+#if LIBCURL_VERSION_MAJOR < 7 || (LIBCURL_VERSION_MAJOR == 7 && LIBCURL_VERSION_MINOR < 61)
    -+choke me
    -+#endif
    -+int
    -+main ()
    -+{
    -+
    -+  ;
    -+  return 0;
    -+}
    -+_ACEOF
    -+if ac_fn_c_try_compile "$LINENO"; then :
    -+  pgac_cv_check_libcurl=yes
    -+else
    -+  pgac_cv_check_libcurl=no
    -+fi
    -+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
    ++  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
     +fi
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_check_libcurl" >&5
    -+$as_echo "$pgac_cv_check_libcurl" >&6; }
     +
    -+if test "$pgac_cv_check_libcurl" != yes; then
    -+    as_fn_error $? "
    -+*** The installed version of libcurl is too old to use with PostgreSQL.
    -+*** libcurl version 7.61.0 or later is required." "$LINENO" 5
    -+fi
     +fi
     +
      if test "$with_gssapi" = yes ; then
    @@ configure: fi
      
     +fi
     +
    -+if test "$with_builtin_oauth" = curl; then
    ++if test "$with_libcurl" = yes; then
     +  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
     +if test "x$ac_cv_header_curl_curl_h" = xyes; then :
     +
     +else
    -+  as_fn_error $? "header file <curl/curl.h> is required for --with-builtin-oauth=curl" "$LINENO" 5
    ++  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
     +fi
     +
     +
    @@ configure: fi
      if test "$PORTNAME" = "win32" ; then
     
      ## configure.ac ##
    -@@ configure.ac: AC_MSG_RESULT([$with_ldap])
    - AC_SUBST(with_ldap)
    +@@ configure.ac: fi
    + AC_SUBST(with_uuid)
      
      
     +#
    -+# OAuth 2.0
    ++# libcurl
     +#
    -+AC_MSG_CHECKING([whether to build with built-in OAuth client support])
    -+PGAC_ARG_REQ(with, builtin-oauth, [LIB], [use LIB for built-in OAuth 2.0 client flows (curl)])
    -+if test x"$with_builtin_oauth" = x"" ; then
    -+  with_builtin_oauth=no
    -+fi
    ++AC_MSG_CHECKING([whether to build with libcurl support for OAuth client flows])
    ++PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support for OAuth client flows],
    ++              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support for OAuth client flows. (--with-libcurl)])])
    ++AC_MSG_RESULT([$with_libcurl])
    ++AC_SUBST(with_libcurl)
    ++
    ++if test "$with_libcurl" = yes ; then
    ++  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
    ++  # to explicitly set TLS 1.3 ciphersuites).
    ++  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
     +
    -+if test x"$with_builtin_oauth" = x"curl"; then
    -+  AC_DEFINE([USE_BUILTIN_OAUTH], 1, [Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth)])
    -+  AC_DEFINE([USE_OAUTH_CURL], 1, [Define to 1 to use libcurl for OAuth client flows.])
     +  # OAuth requires python for testing
     +  if test "$with_python" != yes; then
    -+    AC_MSG_WARN([*** OAuth support tests requires --with-python to run])
    ++    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
     +  fi
    -+elif test x"$with_builtin_oauth" != x"no"; then
    -+  AC_MSG_ERROR([--with-builtin-oauth must specify curl])
     +fi
     +
    -+AC_MSG_RESULT([$with_builtin_oauth])
    -+AC_SUBST(with_builtin_oauth)
    -+
     +
      #
    - # Bonjour
    + # XML
      #
     @@ configure.ac: failure.  It is possible the compiler isn't looking in the proper directory.
      Use --without-zlib to disable zlib support.])])
    @@ configure.ac: failure.  It is possible the compiler isn't looking in the proper
     +# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
     +# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
     +# dependency on that platform?
    -+if test "$with_builtin_oauth" = curl ; then
    -+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-builtin-oauth=curl])])
    -+  PGAC_CHECK_LIBCURL
    ++if test "$with_libcurl" = yes ; then
    ++  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
     +fi
     +
      if test "$with_gssapi" = yes ; then
    @@ configure.ac: elif test "$with_uuid" = ossp ; then
            [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
      fi
      
    -+if test "$with_builtin_oauth" = curl; then
    -+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-builtin-oauth=curl])])
    ++if test "$with_libcurl" = yes; then
    ++  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
     +fi
     +
      if test "$PORTNAME" = "win32" ; then
    @@ doc/src/sgml/installation.sgml: build-postgresql:
             </listitem>
            </varlistentry>
      
    -+      <varlistentry id="configure-option-with-builtin-oauth">
    -+       <term><option>--with-builtin-oauth=<replaceable>LIBRARY</replaceable></option></term>
    ++      <varlistentry id="configure-option-with-libcurl">
    ++       <term><option>--with-libcurl</option></term>
     +       <listitem>
     +        <para>
    -+         Build with support for OAuth 2.0 client flows.  The only
    -+         <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
    ++         Build with libcurl support for OAuth 2.0 client flows.
     +         This requires the <productname>curl</productname> package to be
     +         installed.  Building with this will check for the required header files
     +         and libraries to make sure that your <productname>curl</productname>
    @@ doc/src/sgml/installation.sgml: build-postgresql:
     +       </listitem>
     +      </varlistentry>
     +
    -       <varlistentry id="configure-option-with-systemd">
    -        <term><option>--with-systemd</option></term>
    +       <varlistentry id="configure-option-with-libxml">
    +        <term><option>--with-libxml</option></term>
             <listitem>
     @@ doc/src/sgml/installation.sgml: ninja install
            </listitem>
           </varlistentry>
      
    -+     <varlistentry id="configure-with-builtin-oauth">
    -+      <term><option>-Dbuiltin_oauth={ auto | <replaceable>LIBRARY</replaceable> | disabled }</option></term>
    ++     <varlistentry id="configure-with-libcurl-meson">
    ++      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
     +      <listitem>
     +       <para>
    -+        Build with support for OAuth 2.0 client flows.  The only
    -+        <replaceable>LIBRARY</replaceable> supported is <option>curl</option>.
    ++        Build with libcurl support for OAuth 2.0 client flows.
     +        This requires the <productname>curl</productname> package to be
     +        installed.  Building with this will check for the required header files
     +        and libraries to make sure that your <productname>curl</productname>
    @@ doc/src/sgml/installation.sgml: ninja install
     +      </listitem>
     +     </varlistentry>
     +
    -      <varlistentry id="configure-with-systemd-meson">
    -       <term><option>-Dsystemd={ auto | enabled | disabled }</option></term>
    +      <varlistentry id="configure-with-libxml-meson">
    +       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
            <listitem>
     
      ## doc/src/sgml/libpq.sgml ##
    @@ meson.build: endif
      
      
     +###############################################################
    -+# Library: OAuth (libcurl)
    ++# Library: libcurl
     +###############################################################
     +
    -+libcurl = not_found_dep
    -+oauth_library = 'none'
    -+oauthopt = get_option('builtin_oauth')
    -+
    -+if oauthopt == 'auto' and auto_features.disabled()
    -+  oauthopt = 'none'
    -+endif
    -+
    -+if oauthopt in ['auto', 'curl']
    ++libcurlopt = get_option('libcurl')
    ++if not libcurlopt.disabled()
     +  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
     +  # to explicitly set TLS 1.3 ciphersuites).
    -+  libcurl = dependency('libcurl', version: '>= 7.61.0',
    -+                       required: (oauthopt == 'curl'))
    ++  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
     +  if libcurl.found()
    -+    oauth_library = 'curl'
    -+    cdata.set('USE_BUILTIN_OAUTH', 1)
    -+    cdata.set('USE_OAUTH_CURL', 1)
    ++    cdata.set('USE_LIBCURL', 1)
     +  endif
    ++else
    ++  libcurl = not_found_dep
     +endif
     +
    -+if oauthopt == 'auto' and auto_features.enabled() and not libcurl.found()
    -+  error('no OAuth implementation library found')
    -+endif
     +
     +
      ###############################################################
    - # Library: Tcl (for pltcl)
    - #
    + # Library: libxml
    + ###############################################################
     @@ meson.build: libpq_deps += [
      
        gssapi,
    @@ meson.build: if meson.version().version_compare('>=0.57')
            'llvm': llvm,
     
      ## meson_options.txt ##
    -@@ meson_options.txt: option('bonjour', type: 'feature', value: 'auto',
    - option('bsd_auth', type: 'feature', value: 'auto',
    -   description: 'BSD Authentication support')
    +@@ meson_options.txt: option('icu', type: 'feature', value: 'auto',
    + option('ldap', type: 'feature', value: 'auto',
    +   description: 'LDAP support')
      
    -+option('builtin_oauth', type : 'combo', choices : ['auto', 'none', 'curl'],
    -+  value: 'auto',
    -+  description: 'use LIB for built-in OAuth 2.0 client flows (curl)')
    ++option('libcurl', type : 'feature', value: 'auto',
    ++  description: 'libcurl support for OAuth client flows')
     +
    - option('docs', type: 'feature', value: 'auto',
    -   description: 'Documentation in HTML and man page format')
    + option('libedit_preferred', type: 'boolean', value: false,
    +   description: 'Prefer BSD Libedit over GNU Readline')
      
     
      ## src/Makefile.global.in ##
    -@@ src/Makefile.global.in: with_ldap	= @with_ldap@
    +@@ src/Makefile.global.in: with_systemd	= @with_systemd@
    + with_gssapi	= @with_gssapi@
    + with_krb_srvnam	= @with_krb_srvnam@
    + with_ldap	= @with_ldap@
    ++with_libcurl	= @with_libcurl@
      with_libxml	= @with_libxml@
      with_libxslt	= @with_libxslt@
      with_llvm	= @with_llvm@
    -+with_builtin_oauth = @with_builtin_oauth@
    - with_system_tzdata = @with_system_tzdata@
    - with_uuid	= @with_uuid@
    - with_zlib	= @with_zlib@
     
      ## src/backend/libpq/Makefile ##
     @@ src/backend/libpq/Makefile: include $(top_builddir)/src/Makefile.global
    @@ src/backend/libpq/auth-oauth.c (new)
     +
     +/* Mechanism declaration */
     +const pg_be_sasl_mech pg_be_oauth_mech = {
    -+	oauth_get_mechanisms,
    -+	oauth_init,
    -+	oauth_exchange,
    ++	.get_mechanisms = oauth_get_mechanisms,
    ++	.init = oauth_init,
    ++	.exchange = oauth_exchange,
     +
    -+	PG_MAX_AUTH_TOKEN_LENGTH,
    ++	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
     +};
     +
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +		value = sep + 1;
     +		validate_kvpair(key, value);
     +
    -+		if (!strcmp(key, AUTH_KEY))
    ++		if (strcmp(key, AUTH_KEY) == 0)
     +		{
     +			if (auth)
     +				ereport(ERROR,
    @@ src/backend/libpq/auth-oauth.c (new)
     +	if (!(token = validate_token_format(auth)))
     +		return false;
     +
    ++	/*
    ++	 * Ensure that we have a validation library loaded, this should always be
    ++	 * the case and an error here is indicative of a bug.
    ++	 */
    ++	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
    ++		ereport(FATAL,
    ++				errcode(ERRCODE_INTERNAL_ERROR),
    ++				errmsg("validation of OAuth token requested without a validator loaded"));
    ++
     +	/* Call the validation function from the validator module */
     +	ret = ValidatorCallbacks->validate_cb(validator_module_state,
     +										  token, port->user_name);
    @@ src/backend/libpq/auth-oauth.c (new)
     +					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
     +
     +	ValidatorCallbacks = (*validator_init) ();
    ++	Assert(ValidatorCallbacks);
     +
     +	/* Allocate memory for validator library private state data */
     +	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
    @@ src/backend/libpq/auth-oauth.c (new)
     +	char	   *file_name = hbaline->sourcefile;
     +	char	   *rawstring;
     +	List	   *elemlist = NIL;
    -+	ListCell   *l;
     +
     +	*err_msg = NULL;
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +		goto done;
     +	}
     +
    -+	foreach(l, elemlist)
    ++	foreach_ptr(char, allowed, elemlist)
     +	{
    -+		char	   *allowed = lfirst(l);
    -+
     +		if (strcmp(allowed, hbaline->oauth_validator) == 0)
     +			goto done;
     +	}
    @@ src/include/pg_config.h.in
      #undef HAVE_LIBLDAP
      
     @@
    - /* Define to 1 to build with BSD Authentication support. (--with-bsd-auth) */
    - #undef USE_BSD_AUTH
    + /* Define to 1 to build with LDAP support. (--with-ldap) */
    + #undef USE_LDAP
      
    -+/* Define to 1 to build with OAuth 2.0 client flows. (--with-builtin-oauth) */
    -+#undef USE_BUILTIN_OAUTH
    ++/* Define to 1 to build with libcurl support for OAuth client flows.
    ++   (--with-libcurl) */
    ++#undef USE_LIBCURL
     +
    - /* Define to build with ICU support. (--with-icu) */
    - #undef USE_ICU
    - 
    -@@
    - /* Define to select named POSIX semaphores. */
    - #undef USE_NAMED_POSIX_SEMAPHORES
    - 
    -+/* Define to 1 to use libcurl for OAuth client flows. */
    -+#undef USE_OAUTH_CURL
    -+
    - /* Define to 1 to build with OpenSSL support. (--with-ssl=openssl) */
    - #undef USE_OPENSSL
    + /* Define to 1 to build with XML support. (--with-libxml) */
    + #undef USE_LIBXML
      
     
      ## src/interfaces/libpq/Makefile ##
    @@ src/interfaces/libpq/Makefile: OBJS += \
      	fe-secure-gssapi.o
      endif
      
    -+ifeq ($(with_builtin_oauth),curl)
    ++ifeq ($(with_libcurl),yes)
     +OBJS += fe-auth-oauth-curl.o
     +endif
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	{
     +		char	  **scalar;		/* for all scalar types */
     +		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
    -+	};
    ++	}			target;
     +
     +	bool		required;		/* REQUIRED field, or just OPTIONAL? */
     +};
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		}
     +	}
     +
    -+	free(name);
     +	return JSON_SUCCESS;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	struct oauth_parse *ctx = state;
     +
     +	--ctx->nested;
    ++	if (!ctx->nested)
    ++		Assert(!ctx->active);	/* all fields should be fully processed */
    ++
     +	return JSON_SUCCESS;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +oauth_json_scalar(void *state, char *token, JsonTokenType type)
     +{
     +	struct oauth_parse *ctx = state;
    -+	JsonParseErrorType result = JSON_SUCCESS;
     +
     +	if (!ctx->nested)
     +	{
     +		oauth_parse_set_error(ctx, "top-level element must be an object");
    -+		result = JSON_SEM_ACTION_FAILED;
    -+		goto cleanup;
    ++		return JSON_SEM_ACTION_FAILED;
     +	}
     +
     +	if (ctx->active)
     +	{
    -+		JsonTokenType expected;
    ++		const struct json_field *field = ctx->active;
    ++		JsonTokenType expected = field->type;
     +
    -+		/*
    -+		 * Make sure this matches what the active field expects. Arrays must
    -+		 * contain only strings with the current implementation.
    -+		 */
    -+		if (ctx->active->type == JSON_TOKEN_ARRAY_START)
    ++		/* Make sure this matches what the active field expects. */
    ++		if (expected == JSON_TOKEN_ARRAY_START)
    ++		{
    ++			/* Are we actually inside an array? */
    ++			if (ctx->nested < 2)
    ++			{
    ++				report_type_mismatch(ctx);
    ++				return JSON_SEM_ACTION_FAILED;
    ++			}
    ++
    ++			/* Currently, arrays can only contain strings. */
     +			expected = JSON_TOKEN_STRING;
    -+		else
    -+			expected = ctx->active->type;
    ++		}
     +
     +		if (type != expected)
     +		{
     +			report_type_mismatch(ctx);
    -+			result = JSON_SEM_ACTION_FAILED;
    -+			goto cleanup;
    ++			return JSON_SEM_ACTION_FAILED;
     +		}
     +
     +		/*
    -+		 * FIXME if the JSON field is duplicated, we'll leak the prior value.
    -+		 * Error out in that case instead.
    ++		 * We don't allow duplicate field names; error out if the target has
    ++		 * already been set.
     +		 */
    -+		if (ctx->active->type != JSON_TOKEN_ARRAY_START)
    ++		if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
    ++			|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
    ++		{
    ++			oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
    ++								  field->name);
    ++			return JSON_SEM_ACTION_FAILED;
    ++		}
    ++
    ++		if (field->type != JSON_TOKEN_ARRAY_START)
     +		{
     +			Assert(ctx->nested == 1);
     +
    -+			*ctx->active->scalar = token;
    ++			*field->target.scalar = strdup(token);
    ++			if (!*field->target.scalar)
    ++				return JSON_OUT_OF_MEMORY;
    ++
     +			ctx->active = NULL;
     +
    -+			return JSON_SUCCESS;	/* don't free the token */
    ++			return JSON_SUCCESS;
     +		}
    -+		else					/* ctx->target_array */
    ++		else
     +		{
     +			struct curl_slist *temp;
     +
     +			Assert(ctx->nested == 2);
     +
    -+			temp = curl_slist_append(*ctx->active->array, token);
    ++			/* Note that curl_slist_append() makes a copy of the token. */
    ++			temp = curl_slist_append(*field->target.array, token);
     +			if (!temp)
    -+			{
    -+				oauth_parse_set_error(ctx, "out of memory");
    -+				result = JSON_SEM_ACTION_FAILED;
    -+				goto cleanup;
    -+			}
    ++				return JSON_OUT_OF_MEMORY;
     +
    -+			*ctx->active->array = temp;
    -+
    -+			/*
    -+			 * Note that curl_slist_append() makes a copy of the token, so we
    -+			 * can free it below.
    -+			 */
    ++			*field->target.array = temp;
     +		}
     +	}
     +	else
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		/* otherwise we just ignore it */
     +	}
     +
    -+cleanup:
    -+	free(token);
    -+	return result;
    ++	return JSON_SUCCESS;
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		return false;
     +	}
     +
    ++	/*
    ++	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
    ++	 * that up front.
    ++	 */
    ++	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
    ++	{
    ++		actx_error(actx, "response is not valid UTF-8");
    ++		return false;
    ++	}
    ++
     +	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
    ++	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
     +
     +	ctx.errbuf = &actx->errbuf;
     +	ctx.fields = fields;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	/* Check all required fields. */
     +	while (fields->name)
     +	{
    -+		if (fields->required && !*fields->scalar && !*fields->array)
    ++		if (fields->required
    ++			&& !*fields->target.scalar
    ++			&& !*fields->target.array)
     +		{
     +			actx_error(actx, "field \"%s\" is missing", fields->name);
     +			goto cleanup;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	escaped = curl_easy_escape(NULL, s, 0);
     +	if (!escaped)
     +	{
    -+		markPQExpBufferBroken(buf);
    ++		termPQExpBuffer(buf);	/* mark the buffer broken */
     +		return;
     +	}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		}
     +	}
     +
    -+	free(name);
     +	return JSON_SUCCESS;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	if (!ctx->nested)
     +	{
     +		ctx->errmsg = libpq_gettext("top-level element must be an object");
    ++		return JSON_SEM_ACTION_FAILED;
     +	}
    -+	else if (ctx->target_field)
    ++
    ++	if (ctx->target_field)
     +	{
     +		Assert(ctx->nested == 1);
     +
    -+		if (type == JSON_TOKEN_STRING)
    ++		/*
    ++		 * We don't allow duplicate field names; error out if the target has
    ++		 * already been set.
    ++		 */
    ++		if (*ctx->target_field)
     +		{
    -+			*ctx->target_field = token;
    -+
    -+			ctx->target_field = NULL;
    -+			ctx->target_field_name = NULL;
    ++			oauth_json_set_error(ctx,
    ++								 libpq_gettext("field \"%s\" is duplicated"),
    ++								 ctx->target_field_name);
    ++			return JSON_SEM_ACTION_FAILED;
    ++		}
     +
    -+			return JSON_SUCCESS;	/* don't free the token we're using */
    ++		/* The only fields we support are strings. */
    ++		if (type != JSON_TOKEN_STRING)
    ++		{
    ++			oauth_json_set_error(ctx,
    ++								 libpq_gettext("field \"%s\" must be a string"),
    ++								 ctx->target_field_name);
    ++			return JSON_SEM_ACTION_FAILED;
     +		}
     +
    -+		oauth_json_set_error(ctx,
    -+							 libpq_gettext("field \"%s\" must be a string"),
    -+							 ctx->target_field_name);
    ++		*ctx->target_field = strdup(token);
    ++		if (!*ctx->target_field)
    ++			return JSON_OUT_OF_MEMORY;
    ++
    ++		ctx->target_field = NULL;
    ++		ctx->target_field_name = NULL;
    ++	}
    ++	else
    ++	{
    ++		/* otherwise we just ignore it */
     +	}
     +
    -+	free(token);
    -+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
    ++	return JSON_SUCCESS;
     +}
     +
     +#define HTTPS_SCHEME "https://"
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +}
     +
     +static bool
    -+handle_oauth_sasl_error(PGconn *conn, char *msg, int msglen)
    ++handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
     +{
     +	JsonLexContext lex = {0};
     +	JsonSemAction sem = {0};
     +	JsonParseErrorType err;
     +	struct json_ctx ctx = {0};
     +	char	   *errmsg = NULL;
    ++	bool		success = false;
    ++
    ++	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
     +
     +	/* Sanity check. */
     +	if (strlen(msg) != msglen)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		return false;
     +	}
     +
    ++	/*
    ++	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
    ++	 * that up front.
    ++	 */
    ++	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"server's error response is not valid UTF-8");
    ++		return false;
    ++	}
    ++
     +	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
    ++	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
     +
     +	initPQExpBuffer(&ctx.errbuf);
     +	sem.semstate = &ctx;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	freeJsonLexContext(&lex);
     +
     +	if (errmsg)
    -+		return false;
    ++		goto cleanup;
     +
     +	/* TODO: what if these override what the user already specified? */
     +	/* TODO: what if there's no discovery URI? */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
     +		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
     +		if (!discovery_issuer)
    -+			return false;		/* error message already set */
    ++			goto cleanup;		/* error message already set */
     +
     +		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
     +		{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +									conn->oauth_issuer_id);
     +
     +			free(discovery_issuer);
    -+			return false;
    ++			goto cleanup;
     +		}
     +
     +		free(discovery_issuer);
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			free(conn->oauth_discovery_uri);
     +
     +		conn->oauth_discovery_uri = ctx.discovery_uri;
    ++		ctx.discovery_uri = NULL;
     +	}
     +
     +	if (ctx.scope)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			free(conn->oauth_scope);
     +
     +		conn->oauth_scope = ctx.scope;
    ++		ctx.scope = NULL;
     +	}
     +	/* TODO: missing error scope should clear any existing connection scope */
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	{
     +		libpq_append_conn_error(conn,
     +								"server sent error response without a status");
    -+		return false;
    ++		goto cleanup;
     +	}
     +
     +	if (strcmp(ctx.status, "invalid_token") == 0)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +	/* TODO: include status in hard failure message */
     +
    -+	return true;
    ++	success = true;
    ++
    ++cleanup:
    ++	free(ctx.status);
    ++	free(ctx.scope);
    ++	free(ctx.discovery_uri);
    ++
    ++	return success;
     +}
     +
     +static void
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +	else
     +	{
    -+#if USE_BUILTIN_OAUTH
    ++#if USE_LIBCURL
     +		/*
     +		 * Hand off to our built-in OAuth flow.
     +		 *
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		conn->oauth_want_retry = PG_BOOL_NO;
     +
     +#else
    -+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth");
    ++		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-libcurl");
     +		goto fail;
     +
     +#endif
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
     +			 */
     +			*output = strdup(kvsep);
    ++			if (unlikely(!*output))
    ++			{
    ++				libpq_append_conn_error(conn, "out of memory");
    ++				return SASL_FAILED;
    ++			}
     +			*outputlen = strlen(*output);	/* == 1 */
     +
     +			state->state = FE_OAUTH_SERVER_ERROR;
    @@ src/interfaces/libpq/meson.build: if gssapi.found()
        )
      endif
      
    -+if oauth_library == 'curl'
    ++if libcurl.found()
     +  libpq_sources += files('fe-auth-oauth-curl.c')
     +endif
     +
    @@ src/interfaces/libpq/meson.build: if gssapi.found()
        kwargs: gen_export_kwargs,
      )
     
    - ## src/interfaces/libpq/pqexpbuffer.c ##
    -@@ src/interfaces/libpq/pqexpbuffer.c: static const char *const oom_buffer_ptr = oom_buffer;
    -  *
    -  * Put a PQExpBuffer in "broken" state if it isn't already.
    -  */
    --static void
    -+void
    - markPQExpBufferBroken(PQExpBuffer str)
    - {
    - 	if (str->data != oom_buffer)
    -
    - ## src/interfaces/libpq/pqexpbuffer.h ##
    -@@ src/interfaces/libpq/pqexpbuffer.h: extern void initPQExpBuffer(PQExpBuffer str);
    - extern void destroyPQExpBuffer(PQExpBuffer str);
    - extern void termPQExpBuffer(PQExpBuffer str);
    - 
    -+/*------------------------
    -+ * markPQExpBufferBroken
    -+ *		Put a PQExpBuffer in "broken" state if it isn't already.
    -+ */
    -+extern void markPQExpBufferBroken(PQExpBuffer str);
    -+
    - /*------------------------
    -  * resetPQExpBuffer
    -  *		Reset a PQExpBuffer to empty
    -
      ## src/makefiles/meson.build ##
    -@@ src/makefiles/meson.build: pgxs_kv = {
    -   'SUN_STUDIO_CC': 'no', # not supported so far
    - 
    -   # want the chosen option, rather than the library
    -+  'with_builtin_oauth' : oauth_library,
    -   'with_ssl' : ssl_library,
    -   'with_uuid': uuidopt,
    - 
     @@ src/makefiles/meson.build: pgxs_deps = {
        'gssapi': gssapi,
        'icu': icu,
    @@ src/test/modules/oauth_validator/Makefile (new)
     +include $(top_srcdir)/contrib/contrib-global.mk
     +
     +export PYTHON
    -+export with_builtin_oauth
    ++export with_libcurl
     +export with_python
     +
     +endif
    @@ src/test/modules/oauth_validator/fail_validator.c (new)
     +
     +#include "fmgr.h"
     +#include "libpq/oauth.h"
    -+#include "miscadmin.h"
    -+#include "utils/guc.h"
    -+#include "utils/memutils.h"
     +
     +PG_MODULE_MAGIC;
     +
    @@ src/test/modules/oauth_validator/meson.build (new)
     +    ],
     +    'env': {
     +      'PYTHON': python.path(),
    -+      'with_builtin_oauth': oauth_library,
    ++      'with_libcurl': libcurl.found() ? 'yes' : 'no',
     +      'with_python': 'yes',
     +    },
     +  },
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
     +}
     +
    -+if ($ENV{with_builtin_oauth} ne 'curl')
    ++if ($ENV{with_libcurl} ne 'yes')
     +{
     +	plan skip_all => 'client-side OAuth not supported by this build';
     +}
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +	$log_start,
     +	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
     +
    -+if ($ENV{with_builtin_oauth} ne 'curl')
    ++if ($ENV{with_libcurl} ne 'yes')
     +{
     +	# libpq should help users out if no OAuth support is built in.
     +	test(
     +		"fails without custom hook installed",
     +		flags => ["--no-hook"],
     +		expected_stderr =>
    -+		  qr/no custom OAuth flows are available, and libpq was not built using --with-builtin-oauth/
    ++		  qr/no custom OAuth flows are available, and libpq was not built using --with-libcurl/
     +	);
     +}
     +
    @@ src/tools/pgindent/typedefs.list: explain_get_index_name_hook_type
      fe_scram_state
      fe_scram_state_enum
      fetch_range_request
    -@@ src/tools/pgindent/typedefs.list: normal_rand_fctx
    - nsphash_hash
    +@@ src/tools/pgindent/typedefs.list: nsphash_hash
      ntile_context
    + nullingrel_info
      numeric
     +oauth_state
      object_access_hook_type
2:  3d169848db ! 2:  28cc3463aa DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +# The client tests need libpq to have been compiled with OAuth support; skip
     +# them otherwise.
     +pytestmark = pytest.mark.skipif(
    -+    os.getenv("with_builtin_oauth") == "none",
    -+    reason="OAuth client tests require --with-builtin-oauth support",
    ++    os.getenv("with_libcurl") != "yes",
    ++    reason="OAuth client tests require --with-libcurl support",
     +)
     +
     +if platform.system() == "Darwin":
    @@ src/test/python/client/test_oauth.py (new)
     +def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
     +    """
     +    Sends a failure response via the OAUTHBEARER mechanism, consumes the
    -+    client's dummy reponse, and issues a FATAL error to end the exchange.
    ++    client's dummy response, and issues a FATAL error to end the exchange.
     +
     +    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
     +    response. If provided, errmsg is used in the FATAL ErrorResponse.
    @@ src/test/python/client/test_oauth.py (new)
     +    pass
     +
     +
    ++class RawBytes(bytes):
    ++    """
    ++    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
    ++    implementations to issue invalid encodings.
    ++    """
    ++
    ++    pass
    ++
    ++
     +class OpenIDProvider(threading.Thread):
     +    """
     +    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
    @@ src/test/python/client/test_oauth.py (new)
     +            self.end_headers()
     +
     +            if resp is not None:
    -+                if not isinstance(resp, RawResponse):
    -+                    resp = json.dumps(resp)
    -+                resp = resp.encode("utf-8")
    ++                if not isinstance(resp, RawBytes):
    ++                    if not isinstance(resp, RawResponse):
    ++                        resp = json.dumps(resp)
    ++                    resp = resp.encode("utf-8")
     +                self.wfile.write(resp)
     +
     +            self.close_connection = True
    @@ src/test/python/client/test_oauth.py (new)
     +            id="bad JSON: invalid syntax",
     +        ),
     +        pytest.param(
    ++            b"\xFF\xFF\xFF\xFF",
    ++            "server's error response is not valid UTF-8",
    ++            id="bad JSON: invalid encoding",
    ++        ),
    ++        pytest.param(
     +            '"abcde"',
     +            "top-level element must be an object",
     +            id="bad JSON: top-level element is a string",
    @@ src/test/python/client/test_oauth.py (new)
     +            id="bad JSON: int openid-configuration member",
     +        ),
     +        pytest.param(
    ++            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
    ++            'field "openid-configuration" is duplicated',
    ++            id="bad JSON: duplicated field",
    ++        ),
    ++        pytest.param(
     +            '{ "status": "invalid_token", "scope": 1 }',
     +            'field "scope" must be a string',
     +            id="bad JSON: int scope member",
    @@ src/test/python/client/test_oauth.py (new)
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            initial = start_oauth_handshake(conn)
     +
    ++            if isinstance(response, str):
    ++                response = response.encode("utf-8")
    ++
     +            # Fail the SASL exchange with an invalid JSON response.
     +            pq3.send(
     +                conn,
     +                pq3.types.AuthnRequest,
     +                type=pq3.authn.SASLContinue,
    -+                body=response.encode("utf-8"),
    ++                body=response,
     +            )
     +
     +            # The client should disconnect, so the socket is closed here. (If
    @@ src/test/python/client/test_oauth.py (new)
     +            id="NULL bytes in document",
     +        ),
     +        pytest.param(
    ++            (200, RawBytes(b"blah\xFFblah")),
    ++            r"failed to parse OpenID discovery document: response is not valid UTF-8",
    ++            id="document is not UTF-8",
    ++        ),
    ++        pytest.param(
     +            (200, 123),
     +            r"failed to parse OpenID discovery document: top-level element must be an object",
     +            id="scalar at top level",
    @@ src/test/python/client/test_oauth.py (new)
     +        pytest.param(
     +            (200, {"grant_types_supported": 123}),
     +            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    -+            id="scalar grant types field",
    ++            id="numeric grant types field",
    ++        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                {
    ++                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
    ++                },
    ++            ),
    ++            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
    ++            id="string grant types field",
     +        ),
     +        pytest.param(
     +            (200, {"grant_types_supported": {}}),
    @@ src/test/python/client/test_oauth.py (new)
     +            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
     +            id="mismatched issuer identifier",
     +        ),
    ++        pytest.param(
    ++            (
    ++                200,
    ++                RawResponse(
    ++                    """{
    ++                        "issuer": "https://256.256.256.256/path",
    ++                        "token_endpoint": "https://256.256.256.256/token",
    ++                        "grant_types_supported": [
    ++                            "urn:ietf:params:oauth:grant-type:device_code"
    ++                        ],
    ++                        "device_authorization_endpoint": "https://256.256.256.256/dev",
    ++                        "device_authorization_endpoint": "https://256.256.256.256/dev"
    ++                    }"""
    ++                ),
    ++            ),
    ++            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
    ++            id="duplicated field",
    ++        ),
     +        #
     +        # Exercise HTTP-level failures by breaking the protocol. Note that the
     +        # error messages here are implementation-dependent.
    @@ src/test/python/meson.build (new)
     +subdir('server')
     +
     +pytest_env = {
    -+  'with_builtin_oauth': oauth_library,
    ++  'with_libcurl': libcurl.found() ? 'yes' : 'no',
     +
     +  # Point to the default database; the tests will create their own databases as
     +  # needed.
    @@ src/test/python/server/oauthtest.c (new)
     +	}
     +	else
     +	{
    -+		if (*expected_bearer && !strcmp(token, expected_bearer))
    ++		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
     +			res->authorized = true;
     +		if (set_authn_id)
     +			res->authn_id = authn_id;
Attachment: v38-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 785add801574a0d5cb60afabe9b5e9d9c5151487 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v38 1/2] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl (-Dlibcurl for Meson) during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq, by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module is responsible for:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, decide from the bearer token whether the
      client may use the requested role, in combination with the HBA
      option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)

= OAuth HBA Method =

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.

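For example, a hypothetical pg_hba.conf entry combining these options might
look like the following (the issuer, scope, and map values are placeholders):

```
# Authenticate via OAuth; map validator identities to roles with "oauthmap".
host  all  all  0.0.0.0/0  oauth  issuer="https://accounts.example.com" scope="openid email" map=oauthmap
```
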
Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   17 +-
 config/programs.m4                            |    2 -
 configure                                     |  213 ++
 configure.ac                                  |   32 +
 doc/src/sgml/client-auth.sgml                 |  177 ++
 doc/src/sgml/config.sgml                      |   21 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  134 +
 doc/src/sgml/oauth-validators.sgml            |  140 +
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   23 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  819 ++++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    7 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2459 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          |  947 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   43 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  103 +-
 src/interfaces/libpq/fe-auth.h                |    9 +-
 src/interfaces/libpq/fe-connect.c             |   89 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   88 +
 src/interfaces/libpq/libpq-int.h              |   16 +
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 .../modules/oauth_validator/fail_validator.c  |   41 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  157 ++
 .../modules/oauth_validator/t/001_server.pl   |  428 +++
 .../modules/oauth_validator/t/002_client.pl   |  114 +
 .../modules/oauth_validator/t/oauth_server.py |  370 +++
 src/test/modules/oauth_validator/validator.c  |  100 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/test/perl/PostgreSQL/Test/OAuthServer.pm  |   65 +
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   15 +
 58 files changed, 7012 insertions(+), 60 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c
 create mode 100644 src/test/perl/PostgreSQL/Test/OAuthServer.pm

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89..bb5b07db27 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         --buildtype=debug \
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
+        -Dlibcurl=enabled \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -219,6 +220,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -234,6 +236,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-zstd
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
+  -Dlibcurl=enabled
   -Dllvm=enabled
   -Duuid=e2fs
 
@@ -312,8 +315,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f..d4ff8c82af 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,8 +142,6 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
-
-
 # PGAC_CHECK_READLINE
 # -------------------
 # Check for the readline library and dependent libraries, either
diff --git a/configure b/configure
index ff59f1422d..71e18cc06b 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support for OAuth client flows
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,144 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support for OAuth client flows" >&5
+$as_echo_n "checking whether to build with libcurl support for OAuth client flows... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12207,6 +12356,59 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
+fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13955,6 +14157,17 @@ fi
 
 done
 
+fi
+
+if test "$with_libcurl" = yes; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 2181700964..137e72fa08 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,27 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support for OAuth client flows])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support for OAuth client flows],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support for OAuth client flows. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1315,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1588,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_libcurl" = yes; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85a..5faaaf3057 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,167 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+    The OAuth framework defines the following roles:
+    <itemizedlist>
+     <listitem>
+      <para>
+       Resource owner: The user or system who owns protected resources and can
+       grant access to them.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Client: The system which accesses the protected resources using access
+       tokens.  Applications using libpq are the clients in connecting to a
+       <productname>PostgreSQL</productname> cluster.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Authorization server: The system which receives requests from, and
+       issues access tokens to, the client after the authenticated resource
+       owner has given approval.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
+       Resource server: The system which hosts the protected resources which are
+       accessed by the client. The <productname>PostgreSQL</productname> cluster
+       being connected to is the resource server.
+      </para>
+     </listitem>
+
+    </itemizedlist>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The issuer identifier of the authorization server, as defined by its
+        discovery document, or a well-known URI pointing to that discovery
+        document. This parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a discovery document URI
+        will be constructed using the issuer identifier. By default, the URI
+        uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, the URI will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>trust_validator_authz</literal></term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping is skipped, and
+        the OAuth validator takes full responsibility for mapping end user
+        identities to database roles.  If the validator authorizes the token,
+        the server trusts that the user is allowed to connect under the
+        requested role, and the connection is allowed to proceed regardless of
+        the authentication status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>trust_validator_authz</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e0c8325a39..cc88c5009a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,27 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. For more information on implementing OAuth validators see
+        <xref linkend="oauth-validators" />. This parameter can only be set in
+        the <filename>postgresql.conf</filename> file.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index ebdb5b3bc2..3fca2910da 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1141,6 +1141,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2582,6 +2595,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 01f259fd0d..6cc3ec6b6a 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2345,6 +2345,96 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
+       </para>
+       <para>
+        As part of the standard authentication handshake, libpq will ask the
+        server for a <emphasis>discovery document:</emphasis> a URI providing a
+        set of OAuth configuration parameters. The server must provide a URI
+        that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        This standard handshake requires two separate network connections to the
+        server per authentication attempt. To skip asking the server for a
+        discovery document URI, you may set <literal>oauth_issuer</literal> to a
+        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
+        case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) libpq currently supports the following well-known endpoints:
+        <itemizedlist>
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
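As a sketch of how these parameters fit together (the host, issuer, client ID, and scope values below are all hypothetical), a connection using the built-in OAuth flow could be started with:

```
$ psql 'host=db.example.org oauth_issuer=https://issuer.example.org oauth_client_id=my-client-id oauth_scope=openid'
```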
+
     </variablelist>
    </para>
   </sect2>
@@ -9972,6 +10062,50 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   By default, libpq has built-in support for obtaining an OAuth token when
+   the <productname>PostgreSQL</productname> server requests one during
+   authentication. Applications may customize this behavior, for example to
+   provide tokens through their own flow, by installing a custom hook.
+  </para>
+
+  <para>
+   <variablelist>
+    <varlistentry id="libpq-PQsetAuthDataHook">
+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       Sets the value of <literal>PGauthDataHook</literal>.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+      </para>
+
+      <para>
+       If <replaceable>hook</replaceable> is set to a null pointer instead of
+       a function pointer, the default hook will be installed.
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry id="libpq-PQgetAuthDataHook">
+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+     <listitem>
+      <para>
+       Retrieves the current value of <literal>PGauthDataHook</literal>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </para>
+
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..83ea576445
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,140 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+ </para>
+ <para>
+  An OAuth validator module must consist of at least an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To provide
+   the validator callbacks and to indicate that the library is an OAuth
+   validator module, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. The return value of
+   this function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains everything
+   the server needs to perform token validation using the module. The returned
+   pointer must remain valid for the lifetime of the backend, which is
+   typically achieved by defining it as a <literal>static const</literal>
+   variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the
+   others are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks, which the server calls as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f..ae4732df65 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
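Following the `PG_TEST_EXTRA` pattern shown at the top of this section, the suite could be enabled with, for example:

```
make check-world PG_TEST_EXTRA='oauth'
```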
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 451c3f6d85..5aa053e729 100644
--- a/meson.build
+++ b/meson.build
@@ -848,6 +848,24 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+  endif
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3037,6 +3055,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3705,6 +3727,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index 3893519639..a3d49e2261 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support for OAuth client flows')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 0f38d712d1..339aa6ffa0 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..11e6ba90f6
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,819 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+#define KVSEP 0x01
+#define AUTH_KEY "auth"
+#define BEARER_SCHEME "Bearer "
+
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
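As a concrete illustration of the challenge built above, with a hypothetical issuer of <literal>https://issuer.example.org</literal> (no HBA-provided <literal>.well-known</literal> URI) and a scope of <literal>openid</literal>, the client would receive:

```json
{ "status": "invalid_token", "openid-configuration": "https://issuer.example.org/.well-known/openid-configuration", "scope": "openid" }
```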
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	if (!ret->authorized)
+	{
+		status = false;
+		goto cleanup;
+	}
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Free the validation result from the validator module once we're done
+	 * with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
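
As a quick illustration of the selection rule implemented in check_oauth_validator() above (library names here are hypothetical):

```
# postgresql.conf: the server-wide allowlist of validator libraries.
# With exactly one entry, oauth HBA lines may omit validator=; with
# several entries, each oauth line must name one of them explicitly.
oauth_validator_libraries = 'my_validator, other_validator'
```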
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c91606..0cf3e31c9f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 3104b871cf..56b51479bb 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "trust_validator_authz"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with trust_validator_authz";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "trust_validator_authz") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512..c85527fb01 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
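
To sketch how the options parsed in hba.c combine (issuer and map names are placeholders), oauth lines in pg_hba.conf might look like:

```
# Authentication only: the validator identifies the user, and the
# connection role is checked against a pg_ident usermap.
host all all 0.0.0.0/0 oauth issuer="https://oauth.example.org" scope="openid" map=oauthmap

# The validator is the authorization authority; the usermap is skipped.
host all all 0.0.0.0/0 oauth issuer="https://oauth.example.org" scope="openid" trust_validator_authz=1
```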
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad2..6f985e7582 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4813,6 +4814,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index a2ac7575ca..f066d49161 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
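
For context, the mechanism name above corresponds to the wire format in RFC 7628: the client's initial SASL response carries the bearer token in kvpairs delimited by 0x01 (Ctrl-A) bytes, written here as \x01, with the token itself elided:

```
n,,\x01auth=Bearer <access-token>\x01\x01
```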
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..22f6ab9f1d 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxTokenSize Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82..fb333a1578 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
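
When oauth_skip_usermap is not set, the authn_id reported by the validator goes through the usual usermap machinery, e.g. (map name and domain are placeholders):

```
# pg_ident.conf: map validator-provided identities to database roles.
# MAPNAME   SYSTEM-USERNAME          PG-USERNAME
oauthmap    /^(.*)@example\.com$     \1
```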
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..4fcdda7430
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index ab0f8cc2b4..154d9c0f4a 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -672,6 +675,10 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support for OAuth client flows.
+   (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc..5feec8738c 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b5..eb8f9d65a1 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..b5ffa0bd5d
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2459 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+typedef enum
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+} OAuthStep;
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	OAuthStep	step;			/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate scalar fields; error out if the target has
+		 * already been set. Note that this check must not fire for the second
+		 * and later elements of an array field: the target list is non-NULL
+		 * as soon as the first element has been appended, and those elements
+		 * would otherwise be misreported as duplicates.
+		 */
+		if (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar)
+		{
+			oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+								  field->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length limited comparison and not compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either. Both entries share one target, so the
+		 * required-field check is satisfied as long as either spelling is
+		 * present in the response.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available on this platform (epoll or kqueue is required)");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. Note
+		 * that the callback produces output only while CURLOPT_VERBOSE is
+		 * enabled, which is set immediately below.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of a single chunk
+ * of data is defined by CURL_MAX_WRITE_SIZE, which is 16kB by default (and
+ * can only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* If the response would exceed the maximum size, abort the transfer. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error to abort the transfer if we ran out of memory while
+	 * accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports the device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3.2, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the
+ * token or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5.1, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that use 403 for error
+	 * returns, in violation of the specification. For now we stick to the
+	 * specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds. RFC 8628, Sec. 3.5.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if the hook declines to
+ * handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PQpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* FALLTHROUGH */
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..1b40df9497
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,947 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->state = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the token pointer will be ignored and the initial
+ * response will instead contain a request for the server's required OAuth
+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* We must have a token. */
+		if (!token)
+		{
+			/*
+			 * Either programmer error, or something went badly wrong during
+			 * the asynchronous fetch.
+			 *
+			 * TODO: users shouldn't see this; what action should they take if
+			 * they do?
+			 */
+			libpq_append_conn_error(conn,
+									"no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules state this must be
+	 * at the beginning of the path component, but OIDC defined it at the end
+	 * instead, so we have to search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	/* TODO: what if these override what the user already specified? */
+	/* TODO: what if there's no discovery URI? */
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+		ctx.discovery_uri = NULL;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+		ctx.scope = NULL;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PQoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PQoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = strdup(request->token);
+		if (!state->token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PQoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PQoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			state->token = strdup(request.token);
+			if (!state->token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/*
+		 * Hand off to our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-libcurl");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->state)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->state = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI from which to request a
+				 * token, we ask the server for one explicitly. This doesn't
+				 * require any asynchronous work.
+				 */
+				discover = true;
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, discover, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->state = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->state = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..ba4d33c79c
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,43 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+typedef enum
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+} fe_oauth_state_enum;
+
+typedef struct
+{
+	fe_oauth_state_enum state;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+#endif							/* FE_AUTH_OAUTH_H */
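For context on what oauth_exchange() emits in FE_OAUTH_REQUESTING_TOKEN: the OAUTHBEARER initial client response follows the RFC 7628 layout, a gs2 header followed by ^A-delimited key/value pairs. A minimal sketch of the happy path (`bearer_initial_response` is hypothetical; the patch's client_initial_response() also handles the tokenless discovery case):

```c
#include <stdio.h>
#include <stdlib.h>

/* Build the RFC 7628 Sec. 3.1 client response for a bearer token:
 * gs2 header "n,,", then kvsep (0x01), "auth=Bearer <token>", kvsep,
 * and a terminating kvsep. Returns a malloc'd string or NULL on OOM. */
static char *
bearer_initial_response(const char *token)
{
	const char *fmt = "n,,\001auth=Bearer %s\001\001";
	int			len = snprintf(NULL, 0, fmt, token);
	char	   *resp = malloc(len + 1);

	if (resp)
		snprintf(resp, len + 1, fmt, token);
	return resp;
}
```

After a server error, the client's only legal follow-up is the single-kvsep dummy message ("\001"), which is what the FE_OAUTH_BEARER_SENT branch sends before moving to FE_OAUTH_SERVER_ERROR.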
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..d260b60c0e 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,6 +40,7 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
@@ -430,7 +431,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +449,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +536,13 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +586,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +672,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +702,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1024,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1193,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1210,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1540,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
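To make the new hook contract concrete: a client that already has a cached token can intercept PQAUTHDATA_OAUTH_BEARER_TOKEN and skip the built-in flow entirely. The sketch below mocks the libpq-fe.h additions locally so it stands alone; the type definitions and the token string are illustrative stand-ins, not the real headers:

```c
#include <stddef.h>

/* Mock declarations mimicking this patch's libpq-fe.h additions. */
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN,
} PGAuthData;

typedef struct PGconn PGconn;	/* opaque for this sketch */

typedef struct
{
	const char *openid_configuration;
	const char *scope;
	char	   *token;			/* set by the hook when it has one */
	/* (the real struct also carries async/cleanup callbacks) */
} PQoauthBearerRequest;

/*
 * A hook that short-circuits the built-in flow by handing libpq a cached
 * bearer token. Returning 1 means "handled"; 0 defers to
 * PQdefaultAuthDataHook; a negative value fails the connection.
 */
static int
my_auth_data_hook(PGAuthData type, PGconn *conn, void *data)
{
	(void) conn;

	if (type == PQAUTHDATA_OAUTH_BEARER_TOKEN)
	{
		PQoauthBearerRequest *req = data;

		req->token = "my-cached-token";	/* hypothetical cached token */
		return 1;
	}

	return 0;					/* let libpq handle everything else */
}
```

With the real API, an application would install this via PQsetAuthDataHook(my_auth_data_hook) before connecting; setup_token_request() then strdup()s the provided token into the SASL state, so the hook may free or reuse its own copy afterwards.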
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..3e2bc1333f 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
@@ -29,4 +33,7 @@ extern char *pg_fe_scram_build_secret(const char *password,
 									  int iterations,
 									  const char **errstr);
 
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
 #endif							/* FE_AUTH_H */
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index aaf87e8e88..259e9bbc5e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -365,6 +365,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +645,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2663,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3680,6 +3699,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3835,6 +3855,19 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3868,7 +3901,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3905,6 +3948,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4586,6 +4664,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4703,6 +4782,12 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7227,6 +7312,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..3c6c7fd23b 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -28,6 +28,10 @@ extern "C"
  */
 #include "postgres_ext.h"
 
+#ifdef WIN32
+#include <winsock2.h>			/* for SOCKET */
+#endif
+
 /*
  * These symbols may be used in compile-time #ifdef tests for the availability
  * of v14-and-newer libpq features.
@@ -59,6 +63,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +109,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +192,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGAuthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +732,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PQpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PQpromptOAuthDevice;
+
+/* for _PQoauthBearerRequest.async() */
+#ifdef WIN32
+#define SOCKTYPE SOCKET
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PQoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PQoauthBearerRequest *request,
+										SOCKTYPE *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PQoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..df2bd1f389 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -369,6 +369,8 @@ typedef struct pg_conn_host
 								 * found in password file. */
 } pg_conn_host;
 
+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
+
 /*
  * PGconn stores all the state data associated with a single connection
  * to a backend.
@@ -432,6 +434,16 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth 2.0 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +518,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d1..dc6f3ecab8 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index aba7411a1b..d84743990a 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14..bdfd5f1f8d 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index c829b61953..bd13e4afbd 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..f297ed5c96
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 0000000000..c438ed4d17
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,41 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks that
+ *	  always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..4b78c90557
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 0000000000..b9278a2930
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,157 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Verify OAuth hook functionality in libpq
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGAuthData type, PGconn *conn, void *data);
+
+static void
+usage(char *argv[])
+{
+	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	fprintf(stderr, "recognized flags:\n");
+	fprintf(stderr, " -h, --help				show this message\n");
+	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
+	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
+	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
+}
+
+static bool no_hook = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+static int
+handle_auth_data(PGAuthData type, PGconn *conn, void *data)
+{
+	PQoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..ca75bfaebd
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,428 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use PostgreSQL::Test::OAuthServer;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = PostgreSQL::Test::OAuthServer->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+		"connect",
+		expected_stderr =>
+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check(
+		"user $user: validator receives correct parameters",
+		$log_start,
+		log_like => [
+			qr/oauth_validator: token="9243959234", role="$user"/,
+			qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		]);
+	$node->log_check(
+		"user $user: validator sets authenticated identity",
+		$log_start,
+		log_like =>
+		  [ qr/connection authenticated: identity="test" method=oauth/, ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+		"connect",
+		expected_stderr =>
+		  qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check(
+		"user $user: validator receives correct parameters",
+		$log_start,
+		log_like => [
+			qr/oauth_validator: token="9243959234-alt", role="$user"/,
+			qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+		]);
+	$node->log_check(
+		"user $user: validator sets authenticated identity",
+		$log_start,
+		log_like =>
+		  [ qr/connection authenticated: identity="testalt" method=oauth/, ]);
+	$log_start = $log_end;
+}
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
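(For reviewers: the connstr() helper above smuggles test instructions to the mock authorization server as Base64-encoded JSON stuffed into oauth_client_id. A standalone Python sketch of that encoding, with a matching decoder that illustrates what oauth_server.py presumably does on its end — the function names here are illustrative, not part of the patch:)

```python
import base64
import json


def encode_magic_client_id(**params):
    """Pack test instructions into a Base64(JSON) oauth_client_id,
    mirroring the Perl connstr() helper (encode_base64 with no EOL)."""
    payload = json.dumps(params).encode()
    return base64.b64encode(payload).decode()


def decode_magic_client_id(client_id):
    """Recover the instruction dict from a magic oauth_client_id."""
    return json.loads(base64.b64decode(client_id))


if __name__ == "__main__":
    cid = encode_magic_client_id(stage="token", retries=1)
    print(cid)
    print(decode_magic_client_id(cid))
```

Note that Perl's encode_base64($json, "") suppresses line breaks, so the single-line output of b64encode above matches what the tests put on the connection string.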
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "user=test dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope=''";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr oauth_client_id=f02c6361-0635",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+		"validator is used for $user",
+		expected_stderr =>
+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_start = $node->wait_for_log(qr/connection authorized/, $log_start);
+}
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
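As an aside, the two discovery URL shapes exercised above (the OIDC-style suffix for the `test` user vs. the IETF-style `/.well-known/oauth-authorization-server/alternate` path for `testalt`) can be derived from an issuer like so. This is only an illustrative sketch of the two spellings, assuming OIDC Discovery's suffix convention and RFC 8414's path-insertion rule; it is not code from the patch:

```python
from urllib.parse import urlparse

def discovery_urls(issuer: str) -> tuple[str, str]:
    """Return the (OIDC-style, RFC 8414-style) discovery URLs for an issuer."""
    # OIDC Discovery: append the well-known suffix to the issuer path.
    oidc = issuer.rstrip("/") + "/.well-known/openid-configuration"

    # RFC 8414: insert the well-known segment between host and issuer path.
    u = urlparse(issuer)
    ietf = f"{u.scheme}://{u.netloc}/.well-known/oauth-authorization-server{u.path}"
    return oidc, ietf
```

For an issuer of `https://example.com/alternate`, the second form yields the `/.well-known/oauth-authorization-server/alternate` path that the testalt connection string spells out by hand.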
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 0000000000..23ee89b590
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,114 @@
+
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	diag "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built using --with-libcurl/
+	);
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..ae7ea7af6d
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,370 @@
+#! /usr/bin/env python3
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires-in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
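The interval/retry bookkeeping in token() above mirrors what a compliant RFC 8628 client is expected to do: wait at least `interval` seconds between `/token` polls, keep retrying on `authorization_pending`, and back off further on `slow_down`. A rough client-side sketch of that loop (not libpq's actual implementation; `post` is a stand-in for whatever issues the HTTP request):

```python
import time

def poll_for_token(post, device_code, interval):
    """Poll the token endpoint per RFC 8628, waiting `interval` seconds
    between attempts. `post(path, device_code)` must return a
    (status, json_dict) tuple."""
    while True:
        time.sleep(interval)
        status, resp = post("/token", device_code)
        if status == 200:
            return resp["access_token"]
        if resp.get("error") == "authorization_pending":
            continue  # user hasn't approved yet; keep polling
        if resp.get("error") == "slow_down":
            interval += 5  # RFC 8628 mandates increasing the interval by 5s
            continue
        raise RuntimeError(resp.get("error", "unknown error"))
```

Clients that poll faster than this trip the `delay > min_delay` assertion in token() above, which is the point of the "retries" test parameter.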
+
+def main():
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
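For reference, the `/param` issuer expects its test parameters to arrive smuggled through the otherwise-opaque client_id field as Base64-encoded JSON (see do_POST() above). A client-side sketch of that encoding, with hypothetical parameter values chosen purely for illustration:

```python
import base64
import json

# Hypothetical test parameters: have the mock server answer the first two
# /token polls with a retry code before issuing the real token.
params = {"stage": "token", "retries": 2, "interval": 1}

# Encode as JSON, then Base64, and send the result as the client_id.
client_id = base64.b64encode(json.dumps(params).encode()).decode()

# The server reverses this with json.loads(base64.b64decode(client_id)).
decoded = json.loads(base64.b64decode(client_id))
assert decoded == params
```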
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..dbba326bc4
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,100 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 508e5e3917..8357272d67 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2513,6 +2513,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2556,7 +2561,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/test/perl/PostgreSQL/Test/OAuthServer.pm b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
new file mode 100644
index 0000000000..a13240cd01
--- /dev/null
+++ b/src/test/perl/PostgreSQL/Test/OAuthServer.pm
@@ -0,0 +1,65 @@
+#!/usr/bin/perl
+
+package PostgreSQL::Test::OAuthServer;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Socket;
+use IO::Select;
+use Test::More;
+
+local *server_socket;
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	read($read_fh, $port, 7) // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+1;
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e..362b20a94f 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 2d4c870423..23459b41f1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -153,6 +153,7 @@ ArrayMetaState
 ArraySubWorkspace
 ArrayToken
 ArrayType
+AsyncAuthFunc
 AsyncQueueControl
 AsyncQueueEntry
 AsyncRequest
@@ -369,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1718,6 +1722,8 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthStep
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1783,6 +1789,7 @@ PFN
 PGAlignedBlock
 PGAlignedXLogBlock
 PGAsyncStatusType
+PGAuthData
 PGCALL2
 PGChecksummablePage
 PGContextVisibility
@@ -1945,11 +1952,14 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
 PQnoticeReceiver
+PQoauthBearerRequest
 PQprintOpt
+PQpromptOAuthDevice
 PQsslKeyPassHook_OpenSSL_type
 PREDICATELOCK
 PREDICATELOCKTAG
@@ -3070,6 +3080,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3463,6 +3475,8 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
+fe_oauth_state_enum
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3667,6 +3681,7 @@ nsphash_hash
 ntile_context
 nullingrel_info
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

Attachment: v38-0002-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 28cc3463aaf37fd941bc51f606e5d6baff62e856 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v38 2/2] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  195 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2495 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 ++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6275 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index bb5b07db27..dbc83df82f 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -319,6 +319,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -403,8 +404,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 5aa053e729..898145cc92 100644
--- a/meson.build
+++ b/meson.build
@@ -3368,6 +3368,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3534,6 +3537,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7f..c7fce098eb 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..9caa3a56d4
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to the local test server using psycopg2. Once the
+    opening handshake completes, the connection is immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
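(Aside: the `opener` trick used above to create the key file with restrictive permissions can be seen in isolation below. Because os.open applies the process umask, the guarantee is that no group/other bits are set, which is what the server side requires.)

```python
import functools
import os
import stat
import tempfile

# Create a file whose permissions never exceed 0600, even transiently;
# passing a custom opener lets open() hand the mode straight to os.open().
path = os.path.join(tempfile.mkdtemp(), "key.pem")
with open(path, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
    f.write(b"dummy key material")

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode & 0o077 == 0  # no group/other access, regardless of umask
```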
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
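(Aside: as RFC 5802 Section 2.2 notes, Hi() is PBKDF2 with HMAC-SHA-256 as the PRF, so the hand-rolled implementation above can be cross-checked against hashlib, here with the same salt and iteration count that test_scram below uses.)

```python
import hashlib
import hmac

def hmac_256(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def h_i(data, salt, i):
    # U1 = HMAC(str, salt || INT(1)); Un = HMAC(str, Un-1); Hi = U1 ^ ... ^ Ui
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc

# Hi() is PBKDF2-HMAC-SHA-256 by another name.
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 2
)
```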
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..0472c29fbb
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2495 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the value may itself contain "="
+    assert key == b"auth"
+
+    return value
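(Aside: for reference, the initial client response that get_auth_value() picks apart looks like this on the wire, per RFC 7628 Section 3.1 -- a GS2 header, one auth kvpair, and an empty kvpair terminator, all joined by ^A. A standalone sketch with a made-up token:)

```python
def build_initial_response(token):
    # GS2 header "n,," (no channel binding, no authzid), then the auth
    # kvpair, then an empty kvpair terminating the message.
    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"

msg = build_initial_response(b"sometoken")
parts = msg.split(b"\x01")
assert parts == [b"n,,", b"auth=Bearer sometoken", b"", b""]
assert parts[1].split(b"=", 1)[1] == b"Bearer sometoken"
```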
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
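(Aside: the sasl_resp dictionary passed to fail_oauth_handshake() follows the error-body format of RFC 7628 Section 3.2.2. A typical shape looks like the following; the discovery URL and scope here are made-up example values.)

```python
import json

# "status" is required; "scope" and "openid-configuration" are optional
# hints the server may send back to the client.
sasl_resp = {
    "status": "invalid_token",
    "openid-configuration": "https://example.com/.well-known/openid-configuration",
    "scope": "openid",
}
body = json.dumps(sasl_resp).encode("utf-8")
assert json.loads(body)["status"] == "invalid_token"
```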
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
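(Aside: a quick illustration of the parse_qs() behavior the handler relies on -- '+' decodes to a space, blank values are kept, and strict_parsing rejects degenerate input that lax parsing would silently drop:)

```python
from urllib.parse import parse_qs

params = parse_qs(
    "grant_type=device_code&scope=openid+email",
    keep_blank_values=True,
    strict_parsing=True,
)
assert params == {"grant_type": ["device_code"], "scope": ["openid email"]}

# A bare separator is malformed; strict_parsing turns it into an error.
try:
    parse_qs("&", strict_parsing=True)
    raised = False
except ValueError:
    raised = True
assert raised
```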
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PQPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PQOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PQOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PQOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
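The two-step definition of PQOAuthBearerRequest is the usual ctypes idiom for a structure whose fields refer back to the structure itself: the class must exist before POINTER() can reference it. A self-contained sketch of the same pattern, using a hypothetical Node type rather than anything from the patch:

```python
import ctypes

class Node(ctypes.Structure):
    # _fields_ is assigned after the class exists, so a field can
    # hold a pointer to the structure's own type.
    pass

Node._fields_ = [
    ("value", ctypes.c_int),
    ("next", ctypes.POINTER(Node)),
]

# None is accepted for a NULL pointer field.
head = Node(1, ctypes.pointer(Node(2, None)))
assert head.value == 1
assert head.next.contents.value == 2
```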
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+            self.impl = None
+
+    cb = _cb()
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # return value when the test doesn't provide an impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PQPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PQOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response until the retry count is reached.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                fail_oauth_handshake(
+                    conn,
+                    {
+                        "status": "invalid_token",
+                        "openid-configuration": discovery_uri,
+                    },
+                )
+
+        # Expect the client to connect again.
+        sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
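The parametrization above covers both placements of the .well-known segment relative to the issuer's path. A rough standalone sketch of how the two spellings are derived from an issuer with a path component (helper name hypothetical, standard library only):

```python
from urllib.parse import urlparse, urlunparse

def well_known_uri(issuer, suffix, style):
    """Build a discovery URI from an issuer identifier.

    IETF style (RFC 8414) inserts /.well-known/<suffix> before the
    issuer's path; OIDC style appends it after the path.
    """
    parts = urlparse(issuer)
    if style == "ietf":
        path = "/.well-known/" + suffix + parts.path
    else:
        path = parts.path + "/.well-known/" + suffix
    return urlunparse(parts._replace(path=path))

assert (
    well_known_uri("https://auth.example/alt", "openid-configuration", "ietf")
    == "https://auth.example/.well-known/openid-configuration/alt"
)
assert (
    well_known_uri("https://auth.example/alt", "openid-configuration", "oidc")
    == "https://auth.example/alt/.well-known/openid-configuration"
)
```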
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+                pq3.send(
+                    conn,
+                    pq3.types.AuthnRequest,
+                    type=pq3.authn.SASLContinue,
+                    body=json.dumps(resp).encode("utf-8"),
+                )
+
+                # FIXME: the client disconnects at this point; it'd be nicer if
+                # it completed the exchange.
+
+            # The client should not reconnect.
+
+    else:
+        expect_disconnected_handshake(sock)
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
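The check_client_authn helper below expects each credential to be form-urlencoded before HTTP Basic encoding (RFC 6749, Appendix B). A minimal sketch of building and decoding such a header, with arbitrary example credentials:

```python
import base64
from urllib.parse import quote_plus

client_id, secret = "my client", 'p+ss"word'  # arbitrary example values

# RFC 6749 Appendix B: form-urlencode each credential first, so that ':'
# and other reserved characters inside credentials stay unambiguous.
creds = f"{quote_plus(client_id)}:{quote_plus(secret)}"
header = "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")

# A server reverses the outer Basic encoding and splits on the first ':'.
decoded = base64.b64decode(header.split()[1]).decode("utf-8")
username, password = decoded.split(":", 1)
assert (username, password) == (quote_plus(client_id), quote_plus(secret))
```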
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1  # RFC 8628 default is 5s
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response until the retry count is reached.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
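The interval bookkeeping in the token endpoint above follows RFC 8628: the client polls every 5 seconds when the authorization response omits "interval", and adds 5 seconds on every slow_down error. As a standalone sketch (helper name hypothetical):

```python
def next_interval(current, error_code):
    # RFC 8628 sec. 3.5: on slow_down, the client must add 5 seconds to its
    # current polling interval; authorization_pending leaves it unchanged.
    return current + 5 if error_code == "slow_down" else current

interval = 5  # RFC 8628 sec. 3.2: default when "interval" is omitted
interval = next_interval(interval, "authorization_pending")
assert interval == 5
interval = next_interval(interval, "slow_down")
assert interval == 10
```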
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "async callback was called before the timer fired"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PQOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
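The self-pipe wakeup used by get_token() above is a standard trick for feeding timer-driven events into a poll-style loop: the loop blocks on the read end of a pipe, and whoever wants to wake it writes a byte to the write end. A minimal standalone sketch of the same pattern, independent of libpq:

```python
import os
import select
import threading

# A pipe whose read end is handed to the event loop; writing a byte to
# the write end wakes the loop, just as wakeup() does in the test above.
readfd, writefd = os.pipe()

# Fire the "timer" from a background thread after 50 ms.
threading.Timer(0.05, os.write, args=(writefd, b"\0")).start()

# The event loop blocks on the read end until the timer trips.
ready, _, _ = select.select([readfd], [], [], 5)
assert readfd in ready

os.read(readfd, 1)  # drain the wakeup byte, as get_token() does
os.close(readfd)
os.close(writefd)
```

In the test, libpq plays the role of the select() loop: the callback hands it the pipe's read end via p_altsock and returns PGRES_POLLING_READING.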
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. It's not very
+    efficient, but it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
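To illustrate the helper above (with hypothetical patterns): the combined regex matches whenever any one alternative does, which is what lets a single pytest `match=` pattern accept several implementation-dependent error messages. The one-line copy below is equivalent to the loop version:

```python
import re

def alt_patterns(*patterns):
    # Equivalent one-liner copy of the helper above, for illustration.
    return "|".join(f"({p})" for p in patterns)

pat = alt_patterns("foo", r"bar\d+")
assert pat == r"(foo)|(bar\d+)"

# Any single alternative matching is enough...
assert re.search(pat, "xxbar42xx")
# ...and nothing matches when no alternative does.
assert not re.search(pat, "baz")
```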
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
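The exact `type(bad_value) == ok_type` comparison in the skip logic above (rather than isinstance()) is deliberate: bool is a subclass of int in Python, so an isinstance() check would wrongly classify False as a "correct" value for an int field and skip an interesting test case. A quick demonstration:

```python
# bool subclasses int, so isinstance() cannot tell them apart...
assert isinstance(False, int)
assert issubclass(bool, int)

# ...but an exact type comparison can, which keeps False as an
# "interesting" bad value for integer fields like "interval".
assert type(False) != int
assert type(False) == bool
assert type(4) == int
```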
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
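The retries handling above mirrors RFC 8628's polling model: the client repeats the token request while the provider answers authorization_pending, and bails out on any other error (or succeeds on a 200). A minimal sketch of that client-side loop, with canned responses standing in for the token endpoint (a real client would POST to the endpoint and honor the advertised polling interval):

```python
def poll_for_token(responses):
    """Drive the RFC 8628 polling loop over a list of canned responses."""
    for status, body in responses:
        if status == 200:
            return body["access_token"]
        if body.get("error") == "authorization_pending":
            continue  # user hasn't approved yet; keep polling
        # Any other error ends the flow immediately, which is what the
        # final_sent assertion in the test above verifies.
        raise RuntimeError(body.get("error", "unknown error"))
    raise RuntimeError("device code expired before authorization")

responses = [
    (400, {"error": "authorization_pending"}),
    (400, {"error": "authorization_pending"}),
    (200, {"access_token": "secret", "token_type": "bearer"}),
]
assert poll_for_token(responses) == "secret"
```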
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network if a test goes
+# wrong, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id="some-id",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (equivalent to INT_MAX in limits.h on typical platforms)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
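The overflow scenario above targets the client's RFC 8628 polling logic: on a slow_down error the poll interval grows by five seconds, which can overflow a C int if the server advertised a huge interval to begin with. As a rough sketch (a hypothetical helper, not the actual libpq code), the overflow-safe check looks like:

```python
INT_MAX = 2**31 - 1  # the limit the C client guards against

def next_poll_interval(interval, *, slow_down=False):
    """Return the next RFC 8628 poll interval, refusing to overflow.

    Raises ValueError where the C client reports
    "slow_down interval overflow".
    """
    if slow_down:
        if interval > INT_MAX - 5:
            raise ValueError("slow_down interval overflow")
        interval += 5
    return interval
```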
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": to_http(openid_provider.discovery_uri),
+            }
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=json.dumps(resp).encode("utf-8"),
+            )
+
+            # FIXME: the client disconnects at this point; it'd be nicer if
+            # it completed the exchange.
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
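test_oauth_refuses_http verifies that discovery URIs are rejected unless they use HTTPS (with PGOAUTHDEBUG acting as an escape hatch for tests). The scheme rule itself is simple; a stdlib-only sketch of it (illustrative, not the C client's code):

```python
import urllib.parse

def check_discovery_uri(uri, *, debug=False):
    """Reject non-HTTPS discovery URIs, mirroring the rule under test."""
    scheme = urllib.parse.urlparse(uri).scheme
    if scheme == "https":
        return
    if scheme == "http" and debug:
        return  # PGOAUTHDEBUG permits plain HTTP during testing
    raise ValueError(f'OAuth discovery URI "{uri}" must use HTTPS')
```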
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to request a
+    temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..e137df852e
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
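To make the packing concrete: protocol(3, 0) yields 196608 (0x00030000), and the SSLRequest pseudo-version (1234, 5679) used later in tls_handshake() packs to the well-known 80877103. A standalone round trip (unpack_protocol is an illustrative helper, not part of this module):

```python
def protocol(major, minor):
    """Pack a protocol version as it appears on the wire (same math as above)."""
    return (major << 16) | minor

def unpack_protocol(value):
    """Split a packed wire value back into (major, minor)."""
    return value >> 16, value & 0xFFFF

# The v3 startup version and the SSLRequest magic number:
assert protocol(3, 0) == 196608
assert protocol(1234, 5679) == 80877103
assert unpack_protocol(80877103) == (1234, 5679)
```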
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
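KeyValueAdapter flattens a parameter dict into the startup packet's list of NUL-terminated strings, ending with an empty string. Without construct, the transformation looks roughly like this (hypothetical helper names):

```python
def flatten_params(params):
    """Flatten startup parameters into the wire's string list (sketch)."""
    out = []
    for k, v in params.items():
        out.append(k.encode() if isinstance(k, str) else k)
        out.append(v.encode() if isinstance(v, str) else v)
    out.append(b"")  # empty string terminates the list
    return out

def to_wire(strings):
    """NUL-terminate each string, as StringList does on the wire."""
    return b"".join(s + b"\x00" for s in strings)
```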
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
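The translate table built above maps every unprintable or non-ASCII byte to '.', which is what keeps the right-hand column of the hexdump printable. The same approach can be checked in isolation:

```python
def hexdump_table():
    """Build a translate table mapping unprintable/non-ASCII bytes to '.'."""
    bad = bytearray(i for i in range(128) if not chr(i).isprintable())
    bad += bytes(range(128, 256))
    return bytes.maketrans(bytes(bad), b"." * len(bad))
```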
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (the 1234,5679 pseudo-version).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
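For reference, the regular-message framing that the Pq3 construct above implements is a one-byte type, a big-endian int32 length that counts itself plus the payload, then the payload. A minimal stdlib-only round trip of that framing (hypothetical helpers, not part of pq3.py):

```python
import struct

def frame(msg_type, payload):
    """Frame a v3 message: type byte + int32 length (incl. itself) + payload."""
    return msg_type + struct.pack(">i", len(payload) + 4) + payload

def unframe(buf):
    """Split one framed message back into (type, payload)."""
    msg_type = buf[:1]
    (length,) = struct.unpack(">i", buf[1:5])
    return msg_type, buf[5:5 + length - 4]
```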
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..42af80c73e
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..415748b9a6
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..ea31ad4f87
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the server provided by postgres_instance is
+    running on the local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    # Note the trailing newline: prepend_file() uses writelines(), which does
+    # not add one, and the map line must not run into the original file content.
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1" + "\n"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
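Likewise, the SASLInitialResponse encoding checked above can be sketched in plain `struct` (helper name is illustrative): a NUL-terminated mechanism name, then a signed int32 initial-response length, where -1 means "no initial response", followed by the response bytes:

```python
import struct

def build_sasl_initial_response(name, data=None):
    """Build a SASLInitialResponse body: NUL-terminated mechanism name,
    then a signed int32 length (-1 when there is no initial response)
    followed by the initial-response bytes."""
    out = name + b"\x00"
    if data is None:
        out += struct.pack("!i", -1)  # no initial response
    else:
        out += struct.pack("!i", len(data)) + data
    return out
```

Note that an empty response (`data=b""`) is encoded distinctly from an absent one (`data=None`), mirroring the "empty response" and "no initial response" cases above.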
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
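For reference, the packing that `pq3.protocol` is expected to perform (per the two vectors above, including the magic SSLRequest code 1234/5679) is simply the major version in the high 16 bits and the minor version in the low 16 bits:

```python
import struct

def protocol_code(major, minor):
    # Major version in the high 16 bits, minor in the low 16 bits of a
    # 32-bit big-endian integer, e.g. (3, 0) -> 0x00030000 and the
    # SSLRequest magic (1234, 5679) -> 0x04D2162F.
    return (major << 16) | minor
```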
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
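A note for reviewers unfamiliar with `construct`: the `Plaintext` record struct above corresponds to the following plain-stdlib parse (function and table names are illustrative only) — a one-byte content type, a two-byte legacy record version, and a two-byte length prefixing the fragment:

```python
import struct

# Content-type values from the ContentType enum above (RFC 8446).
CONTENT_TYPES = {
    20: "change_cipher_spec",
    21: "alert",
    22: "handshake",
    23: "application_data",
}

def parse_record_header(raw):
    """Split a TLS record into (type, legacy_record_version, fragment),
    mirroring the Plaintext struct: uint8 type, uint16 version, uint16
    length, then `length` bytes of fragment."""
    ctype, version, length = struct.unpack("!BHH", raw[:5])
    fragment = raw[5:5 + length]
    return CONTENT_TYPES.get(ctype, "invalid"), version, fragment
```

For example, a record starting `16 03 01` is a handshake fragment carried with legacy version 0x0301.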
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. pytest and pytest-tap are always
+# needed, regardless of the test's own requirements.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba..ffdf760d79 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1

#168Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#167)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Dec 5, 2024 at 10:29 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Next up, the many-many documentation requests, now that the fuzzers
can run while I write.

v39 adds a great deal of documentation for implementers of custom
client flows and validators, and addresses the following upthread
feedback:
- the trust_validator_authz HBA option has been renamed to
delegate_ident_mapping
- delegate_ident_mapping is now tested as part of the oauth_validator suite
- typedefs for AsyncAuthFunc, OAuthStep, and fe_oauth_state_enum have
been removed (and the last has been renamed `enum fe_oauth_step`)
- pg_oauth_mech has been moved to fe-auth-oauth.h
- PostgreSQL::Test::OAuthServer has been moved into the
oauth_validator folder as OAuth::Server
- pgperlcritic now passes

Of Peter's notes, I think just the Windows testing comments and a
better explanation of the MAX_OAUTH_RESPONSE_SIZE remain.

On Fri, Nov 8, 2024 at 1:21 AM Peter Eisentraut <peter@eisentraut.org> wrote:

* src/interfaces/libpq/libpq-fe.h

The naming scheme of types and functions in this file is clearly
obscure and has grown randomly over time. But at least my intuition
is that the preferred way is

types start with PG
function start with PQ

and the next letter is usually lower case. (PQconnectdb, PQhost,
PGconn, PQresult)

Okay, I think I've corrected this (`struct PQxxx` are now `struct
PGxxx`, PGAuthData is now PGauthData). To summarize the new API:
- PGauthData is an enum containing PQAUTHDATA_* constants
- PGpromptOAuthDevice and PGoauthBearerRequest are type-specific
callback structures
- the PQauthDataHook and all of its related types and API start with
PQ, to parallel the PQsetSSLKeyPassHook_OpenSSL API

Thanks,
--Jacob

Attachments:

since-v38.diff.txt (text/plain; charset=US-ASCII)
1:  785add80157 ! 1:  3dc642d68c8 Add OAUTHBEARER SASL mechanism
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +
     +    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
     +    is built, see <xref linkend="installation"/> for more information.
    ++   </para>
     +
    -+    <itemizedlist>
    -+     <listitem>
    -+      <para>
    -+       Resource owner: The user or system who owns protected resources and can
    -+       grant access to them.
    -+      </para>
    -+     </listitem>
    -+     <listitem>
    -+      <para>
    -+       Client: The system which accesses the protected resources using access
    -+       tokens.  Applications using libpq are the clients in connecting to a
    -+       <productname>PostgreSQL</productname> cluster.
    -+      </para>
    -+     </listitem>
    -+     <listitem>
    -+      <para>
    -+       Authorization server: The system which receives requests from, and
    -+       issues access tokens to, the client after the authenticated resource
    -+       owner has given approval.
    -+      </para>
    -+     </listitem>
    ++   <para>
    ++    This documentation uses the following terminology when discussing the OAuth
    ++    ecosystem:
     +
    -+     <listitem>
    -+      <para>
    -+       Resource server: The system which hosts the protected resources which are
    -+       accessed by the client. The <productname>PostgreSQL</productname> cluster
    -+       being connected to is the resource server.
    -+      </para>
    -+     </listitem>
    ++    <variablelist>
     +
    -+    </itemizedlist>
    ++     <varlistentry>
    ++      <term>Resource Owner (or End User)</term>
    ++      <listitem>
    ++       <para>
    ++        The user or system who owns protected resources and can grant access to
    ++        them. This documentation also uses the term <emphasis>end user</emphasis>
    ++        when the resource owner is a person. When you use
    ++        <application>psql</application> to connect to the database using OAuth,
    ++        you are the resource owner/end user.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry>
    ++      <term>Client</term>
    ++      <listitem>
    ++       <para>
    ++        The system which accesses the protected resources using access
    ++        tokens. Applications using libpq, such as <application>psql</application>,
    ++        are the OAuth clients when connecting to a
    ++        <productname>PostgreSQL</productname> cluster.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry>
    ++      <term>Resource Server</term>
    ++      <listitem>
    ++       <para>
    ++        The system which hosts the protected resources which are
    ++        accessed by the client. The <productname>PostgreSQL</productname>
    ++        cluster being connected to is the resource server.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry>
    ++      <term>Provider</term>
    ++      <listitem>
    ++       <para>
    ++        The organization, product vendor, or other entity which develops and/or
    ++        administers the OAuth servers and clients for a given application.
    ++        Different providers typically choose different implementation details
    ++        for their OAuth systems; a client of one provider is not generally
    ++        guaranteed to have access to the servers of another.
    ++       </para>
    ++       <para>
    ++        This use of the term "provider" is not standard, but it seems to be in
    ++        wide use colloquially. (It should not be confused with OpenID's similar
    ++        term "Identity Provider". While the implementation of OAuth in
    ++        <productname>PostgreSQL</productname> is intended to be interoperable
    ++        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
    ++        and does not require its use.)
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry>
    ++      <term>Authorization Server</term>
    ++      <listitem>
    ++       <para>
    ++        The system which receives requests from, and issues access tokens to,
    ++        the client after the authenticated resource owner has given approval.
    ++        <productname>PostgreSQL</productname> does not provide an authorization
    ++        server; it's obtained from the OAuth provider.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry>
    ++      <term>Issuer</term>
    ++      <listitem>
    ++       <para>
    ++        An identifier for an authorization server, printed as an
    ++        <literal>https://</literal> URL, which provides a trusted "namespace"
    ++        for OAuth clients and applications. The issuer identifier allows a
    ++        single authorization server to talk to the clients of mutually
    ++        untrusting entities, as long as they maintain separate issuers.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++    </variablelist>
    ++
    ++    <note>
    ++     <para>
    ++      For small deployments, there may not be a meaningful distinction between
    ++      the "provider", "authorization server", and "issuer". However, for more
    ++      complicated setups, there may be a one-to-many (or many-to-many)
    ++      relationship: a provider may rent out multiple issuer identifiers to
    ++      separate tenants, then provide multiple authorization servers, possibly
    ++      with different supported feature sets, to interact with their clients.
    ++     </para>
    ++    </note>
     +   </para>
     +
     +   <para>
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
     +    which are a type of access token used with OAuth 2.0 where the token is an
     +    opaque string.  The format of the access token is implementation specific
    -+    and is chosen by each authentication server.
    ++    and is chosen by each authorization server.
     +   </para>
     +
     +   <para>
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +     </varlistentry>
     +
     +     <varlistentry>
    -+      <term><literal>trust_validator_authz</literal></term>
    ++      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
    ++       <literal>delegate_ident_mapping</literal>
    ++      </term>
     +      <listitem>
     +       <para>
     +        An advanced option which is not intended for common use.
     +       </para>
     +       <para>
    -+        When set to <literal>1</literal>, standard user mapping is skipped, and
    -+        the OAuth validator takes full responsibility for mapping end user
    -+        identities to database roles.  If the validator authorizes the token,
    -+        the server trusts that the user is allowed to connect under the
    -+        requested role, and the connection is allowed to proceed regardless of
    -+        the authentication status of the user.
    ++        When set to <literal>1</literal>, standard user mapping with
    ++        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
    ++        takes full responsibility for mapping end user identities to database
    ++        roles.  If the validator authorizes the token, the server trusts that
    ++        the user is allowed to connect under the requested role, and the
    ++        connection is allowed to proceed regardless of the authentication
    ++        status of the user.
     +       </para>
     +       <para>
     +        This parameter is incompatible with <literal>map</literal>.
     +       </para>
     +       <warning>
     +        <para>
    -+         <literal>trust_validator_authz</literal> provides additional
    ++         <literal>delegate_ident_mapping</literal> provides additional
     +         flexibility in the design of the authentication system, but it also
     +         requires careful implementation of the OAuth validator, which must
     +         determine whether the provided token carries sufficient end-user
    @@ doc/src/sgml/config.sgml: include_dir 'conf.d'
     +        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
     +        must explicitly set a <literal>validator</literal> chosen from this
     +        list. If set to an empty string (the default), OAuth connections will be
    -+        refused. For more information on implementing OAuth validators see
    -+        <xref linkend="oauth-validators" />. This parameter can only be set in
    -+        the <filename>postgresql.conf</filename> file.
    ++        refused. This parameter can only be set in the
    ++        <filename>postgresql.conf</filename> file.
    ++       </para>
    ++       <para>
    ++        Validator modules must be implemented/obtained separately;
    ++        <productname>PostgreSQL</productname> does not ship with any default
    ++        implementations. For more information on implementing OAuth validators,
    ++        see <xref linkend="oauth-validators" />.
     +       </para>
     +      </listitem>
     +     </varlistentry>
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
     +       </para>
     +       <para>
    -+        As part of the standard authentication handshake, libpq will ask the
    -+        server for a <emphasis>discovery document:</emphasis> a URI providing a
    -+        set of OAuth configuration parameters. The server must provide a URI
    -+        that is directly constructed from the components of the
    ++        As part of the standard authentication handshake, <application>libpq</application>
    ++        will ask the server for a <emphasis>discovery document:</emphasis> a URI
    ++        providing a set of OAuth configuration parameters. The server must
    ++        provide a URI that is directly constructed from the components of the
     +        <literal>oauth_issuer</literal>, and this value must exactly match the
     +        issuer identifier that is declared in the discovery document itself, or
     +        the connection will fail. This is required to prevent a class of "mix-up
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
     +        client will not have a chance to ask the server for a correct scope
     +        setting, and the default scopes for a token may not be sufficient to
    -+        connect.) libpq currently supports the following well-known endpoints:
    -+        <itemizedlist>
    ++        connect.) <application>libpq</application> currently supports the
    ++        following well-known endpoints:
    ++        <itemizedlist spacing="compact">
     +         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
     +         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
     +        </itemizedlist>
    @@ doc/src/sgml/libpq.sgml: void PQinitSSL(int do_ssl);
     +   TODO
     +  </para>
     +
    -+  <para>
    -+   <variablelist>
    -+    <varlistentry id="libpq-PQsetAuthDataHook">
    -+     <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
    ++  <sect2 id="libpq-oauth-authdata-hooks">
    ++   <title>Authdata Hooks</title>
     +
    -+     <listitem>
    -+      <para>
    -+       TODO
    ++   <para>
    ++    The behavior of the OAuth flow may be modified or replaced by a client using
    ++    the following hook API:
    ++
    ++    <variablelist>
    ++     <varlistentry id="libpq-PQsetAuthDataHook">
    ++      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
    ++
    ++      <listitem>
    ++       <para>
    ++        Sets the <symbol>PGauthDataHook</symbol>, overriding
    ++        <application>libpq</application>'s handling of one or more aspects of
    ++        its OAuth client flow.
     +<synopsis>
     +void PQsetAuthDataHook(PQauthDataHook_type hook);
     +</synopsis>
    -+      </para>
    ++        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
    ++        default handler will be reinstalled. Otherwise, the application passes
    ++        a pointer to a callback function with the signature:
    ++<programlisting>
    ++int hook_fn(PGauthData type, PGconn *conn, void *data);
    ++</programlisting>
++        which <application>libpq</application> will call when action is
    ++        required of the application. <replaceable>type</replaceable> describes
    ++        the request being made, <replaceable>conn</replaceable> is the
    ++        connection handle being authenticated, and <replaceable>data</replaceable>
    ++        points to request-specific metadata. The contents of this pointer are
    ++        determined by <replaceable>type</replaceable>; see
    ++        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
    ++        list.
    ++       </para>
    ++       <para>
    ++        Hooks can be chained together to allow cooperative and/or fallback
    ++        behavior. In general, a hook implementation should examine the incoming
    ++        <replaceable>type</replaceable> (and, potentially, the request metadata
    ++        and/or the settings for the particular <replaceable>conn</replaceable>
    ++        in use) to decide whether or not to handle a specific piece of authdata.
    ++        If not, it should delegate to the previous hook in the chain
    ++        (retrievable via <function>PQgetAuthDataHook</function>).
    ++       </para>
    ++       <para>
    ++        Success is indicated by returning an integer greater than zero.
    ++        Returning a negative integer signals an error condition and abandons the
    ++        connection attempt. (A zero value is reserved for the default
    ++        implementation.)
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++
    ++     <varlistentry id="libpq-PQgetAuthDataHook">
    ++      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
    ++
    ++      <listitem>
    ++       <para>
    ++        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
    ++<synopsis>
    ++PQauthDataHook_type PQgetAuthDataHook(void);
    ++</synopsis>
    ++        At initialization time (before the first call to
    ++        <function>PQsetAuthDataHook</function>), this function will return
    ++        <symbol>PQdefaultAuthDataHook</symbol>.
    ++       </para>
    ++      </listitem>
    ++     </varlistentry>
    ++    </variablelist>
    ++   </para>
    ++
    ++   <sect3 id="libpq-oauth-authdata-hooks-types">
    ++    <title>Hook Types</title>
    ++    <para>
    ++     The following <symbol>PGauthData</symbol> types and their corresponding
    ++     <replaceable>data</replaceable> structures are defined:
    ++
    ++     <variablelist>
    ++      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
    ++       <term>
    ++        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
    ++        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
    ++       </term>
    ++       <listitem>
    ++        <para>
    ++         Replaces the default user prompt during the builtin device
    ++         authorization client flow. <replaceable>data</replaceable> points to
    ++         an instance of <symbol>PGpromptOAuthDevice</symbol>:
    ++<synopsis>
    ++typedef struct _PGpromptOAuthDevice
    ++{
    ++    const char *verification_uri;   /* verification URI to visit */
    ++    const char *user_code;          /* user code to enter */
    ++} PGpromptOAuthDevice;
    ++</synopsis>
    ++        </para>
    ++        <para>
    ++         The OAuth Device Authorization flow included in <application>libpq</application>
    ++         requires the end user to visit a URL with a browser, then enter a code
    ++         which permits <application>libpq</application> to connect to the server
    ++         on their behalf. The default prompt simply prints the
    ++         <literal>verification_uri</literal> and <literal>user_code</literal>
    ++         on standard error. Replacement implementations may display this
    ++         information using any preferred method, for example with a GUI.
    ++        </para>
    ++        <para>
    ++         This callback is only invoked during the builtin device
    ++         authorization flow. If the application installs a
    ++         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
    ++         flow</link>, this authdata type will not be used.
    ++        </para>
    ++       </listitem>
    ++      </varlistentry>
    ++
    ++      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
    ++       <term>
    ++        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
    ++        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
    ++       </term>
    ++       <listitem>
    ++        <para>
    ++         Replaces the entire OAuth flow with a custom implementation. The hook
    ++         should either directly return a Bearer token for the current
    ++         user/issuer/scope combination, if one is available without blocking, or
    ++         else set up an asynchronous callback to retrieve one.
    ++        </para>
    ++        <para>
    ++         <replaceable>data</replaceable> points to an instance
    ++         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
    ++         by the implementation:
    ++<synopsis>
    ++typedef struct _PGoauthBearerRequest
    ++{
    ++    /* Hook inputs (constant across all calls) */
    ++    const char *const openid_configuration; /* OIDC discovery URI */
    ++    const char *const scope;                /* required scope(s), or NULL */
     +
    ++    /* Hook outputs */
    ++
    ++    /* Callback implementing a custom asynchronous OAuth flow. */
    ++    PostgresPollingStatusType (*async) (PGconn *conn,
    ++                                        struct _PGoauthBearerRequest *request,
    ++                                        SOCKTYPE *altsock);
    ++
    ++    /* Callback to clean up custom allocations. */
    ++    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
    ++
    ++    char       *token;   /* acquired Bearer token */
    ++    void       *user;    /* hook-defined allocated data */
    ++} PGoauthBearerRequest;
    ++</synopsis>
    ++        </para>
    ++        <para>
    ++         Two pieces of information are provided to the hook by
    ++         <application>libpq</application>:
    ++         <replaceable>openid_configuration</replaceable> contains the URL of an
    ++         OAuth discovery document describing the authorization server's
    ++         supported flows, and <replaceable>scope</replaceable> contains a
    ++         (possibly empty) space-separated list of OAuth scopes which are
    ++         required to access the server. Either or both may be
    ++         <literal>NULL</literal> to indicate that the information was not
    ++         discoverable. (In this case, implementations may be able to establish
    ++         the requirements using some other preconfigured knowledge, or they may
    ++         choose to fail.)
    ++        </para>
    ++        <para>
    ++         The final output of the hook is <replaceable>token</replaceable>, which
    ++         must point to a valid Bearer token for use on the connection. (This
    ++         token should be issued by the
    ++         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
    ++         scopes, or the connection will be rejected by the server's validator
    ++         module.) The allocated token string must remain valid until
    ++         <application>libpq</application> is finished connecting; the hook
    ++         should set a <replaceable>cleanup</replaceable> callback which will be
    ++         called when <application>libpq</application> no longer requires it.
    ++        </para>
    ++        <para>
    ++         If an implementation cannot immediately produce a
    ++         <replaceable>token</replaceable> during the initial call to the hook,
    ++         it should set the <replaceable>async</replaceable> callback to handle
    ++         nonblocking communication with the authorization server.
    ++         <footnote>
    ++          <para>
    ++           Performing blocking operations during the
    ++           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
    ++           interfere with nonblocking connection APIs such as
    ++           <function>PQconnectPoll</function> and prevent concurrent connections
    ++           from making progress. Applications which only ever use the
    ++           synchronous connection primitives, such as
    ++           <function>PQconnectdb</function>, may synchronously retrieve a token
    ++           during the hook instead of implementing the
    ++           <replaceable>async</replaceable> callback, but they will necessarily
    ++           be limited to one connection at a time.
    ++          </para>
    ++         </footnote>
    ++         This will be called to begin the flow immediately upon return from the
    ++         hook. When the callback cannot make further progress without blocking,
    ++         it should return either <symbol>PGRES_POLLING_READING</symbol> or
    ++         <symbol>PGRES_POLLING_WRITING</symbol> after setting
    ++         <literal>*altsock</literal> to the file descriptor that will be marked
    ++         ready to read/write when progress can be made again. (This descriptor
    ++         is then provided to the top-level polling loop via
    ++         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
    ++         after setting <replaceable>token</replaceable> when the flow is
    ++         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
    ++        </para>
    ++        <para>
    ++         Implementations may wish to store additional data for bookkeeping
    ++         across calls to the <replaceable>async</replaceable> and
    ++         <replaceable>cleanup</replaceable> callbacks. The
    ++         <replaceable>user</replaceable> pointer is provided for this purpose;
    ++         <application>libpq</application> will not touch its contents and the
    ++         application may use it at its convenience. (Remember to free any
    ++         allocations during token cleanup.)
    ++        </para>
    ++       </listitem>
    ++      </varlistentry>
    ++     </variablelist>
    ++    </para>
    ++   </sect3>
    ++  </sect2>
    ++
    ++  <sect2 id="libpq-oauth-debugging">
    ++   <title>Debugging and Developer Settings</title>
    ++
    ++   <para>
    ++    A "dangerous debugging mode" may be enabled by setting the environment
    ++    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
    ++    for ease of local development and testing only. It does several things that
    ++    you will not want a production system to do:
    ++
    ++    <itemizedlist spacing="compact">
    ++     <listitem>
     +      <para>
    -+       If <replaceable>hook</replaceable> is set to a null pointer instead of
    -+       a function pointer, the default hook will be installed.
    ++       permits the use of unencrypted HTTP during the OAuth provider exchange
     +      </para>
     +     </listitem>
    -+    </varlistentry>
    -+
    -+    <varlistentry id="libpq-PQgetAuthDataHook">
    -+     <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
    -+
     +     <listitem>
     +      <para>
    -+       Retrieves the current value of <literal>PGauthDataHook</literal>.
    -+<synopsis>
    -+PQauthDataHook_type PQgetAuthDataHook(void);
    -+</synopsis>
    ++       allows the system's trusted CA list to be completely replaced using the
    ++       <envar>PGOAUTHCAFILE</envar> environment variable
     +      </para>
     +     </listitem>
    -+    </varlistentry>
    -+   </variablelist>
    -+  </para>
    -+
    ++     <listitem>
    ++      <para>
    ++       sprays HTTP traffic (containing several critical secrets) to standard
    ++       error during the OAuth flow
    ++      </para>
    ++     </listitem>
    ++     <listitem>
    ++      <para>
    ++       permits the use of zero-second retry intervals, which can cause the
    ++       client to busy-loop and pointlessly consume CPU
    ++      </para>
    ++     </listitem>
    ++    </itemizedlist>
    ++   </para>
    ++   <warning>
    ++    <para>
    ++     Do not share the logged OAuth flow traffic with third parties. It
    ++     contains secrets that can be used to attack your clients and servers.
    ++    </para>
    ++   </warning>
    ++  </sect2>
     + </sect1>
     +
      
    @@ doc/src/sgml/oauth-validators.sgml (new)
     + <para>
     +  <productname>PostgreSQL</productname> provides infrastructure for creating
     +  custom modules to perform server-side validation of OAuth bearer tokens.
    ++  Because OAuth implementations vary so wildly, and bearer token validation is
    ++  heavily dependent on the issuing party, the server cannot check the token
    ++  itself; validator modules provide the glue between the server and the OAuth
    ++  provider in use.
     + </para>
     + <para>
    -+  OAuth validation modules must at least consist of an initialization function
    ++  OAuth validator modules must at least consist of an initialization function
     +  (see <xref linkend="oauth-validator-init"/>) and the required callback for
     +  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
     + </para>
    ++ <warning>
    ++  <para>
    ++   Since a misbehaving validator might let unauthorized users into the database,
    ++   correct implementation is critical. See
    ++   <xref linkend="oauth-validator-design"/> for design considerations.
    ++  </para>
    ++ </warning>
    ++
    ++ <sect1 id="oauth-validator-design">
    ++  <title>Safely Designing a Validator Module</title>
    ++  <warning>
    ++   <para>
    ++    Read and understand the entirety of this section before implementing a
    ++    validator module. A malfunctioning validator is potentially worse than no
    ++    authentication at all, both because of the false sense of security it
    ++    provides, and because it may contribute to attacks against other pieces of
    ++    an OAuth ecosystem.
    ++   </para>
    ++  </warning>
    ++
    ++  <sect2 id="oauth-validator-design-responsibilities">
    ++   <title>Validator Responsibilities</title>
    ++   <para>
    ++    A validator performs three separate tasks for each authentication
    ++    attempt, described below.
    ++   </para>
    ++   <variablelist>
    ++    <varlistentry>
    ++     <term>Validate the Token</term>
    ++     <listitem>
    ++      <para>
    ++       The validator must first ensure that the presented token is in fact a
    ++       valid Bearer token for use in client authentication. The correct way to
    ++       do this depends on the provider, but it generally involves either
    ++       cryptographic operations to prove that the token was created by a trusted
    ++       party (offline validation), or the presentation of the token to that
    ++       trusted party so that it can perform validation for you (online
    ++       validation).
    ++      </para>
    ++      <para>
    ++       Online validation, usually implemented via
    ++       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
    ++       Introspection</ulink>, requires fewer steps of a validator module and
    ++       allows central revocation of a token in the event that it is stolen
    ++       or misissued. However, it does require the module to make at least one
    ++       network call per authentication attempt (all of which must complete
    ++       within the configured <xref linkend="guc-authentication-timeout"/>).
    ++       Additionally, your provider may not provide introspection endpoints for
    ++       use by external resource servers.
    ++      </para>
    ++      <para>
    ++       Offline validation is much more involved, typically requiring a validator
    ++       to maintain a list of trusted signing keys for a provider and then
    ++       check the token's cryptographic signature along with its contents.
    ++       Implementations must follow the provider's instructions to the letter,
    ++       including any verification of issuer ("where is this token from?"),
    ++       audience ("who is this token for?"), and validity period ("when can this
    ++       token be used?"). Since there is no communication between the module and
    ++       the provider, tokens cannot be centrally revoked using this method;
    ++       offline validator implementations may wish to place restrictions on the
    ++       maximum length of a token's validity period.
    ++      </para>
    ++      <para>
    ++       If the token cannot be validated, the module should immediately fail.
    ++       Further authentication/authorization is pointless if the bearer token
    ++       wasn't issued by a trusted party.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++    <varlistentry>
    ++     <term>Authorize the Client</term>
    ++     <listitem>
    ++      <para>
    ++       Next the validator must ensure that the end user has given the client
    ++       permission to access the server on their behalf. This generally involves
    ++       checking the scopes that have been assigned to the token, to make sure
    ++       that they cover database access for the current HBA parameters.
    ++      </para>
    ++      <para>
    ++       The purpose of this step is to prevent an OAuth client from obtaining a
    ++       token under false pretenses. If the validator requires all tokens to
    ++       carry scopes that cover database access, the provider should then loudly
    ++       prompt the user to grant that access during the flow. This gives them the
    ++       opportunity to reject the request if the client isn't supposed to be
    ++       using their credentials to connect to databases.
    ++      </para>
    ++      <para>
    ++       While it is possible to establish client authorization without explicit
    ++       scopes by using out-of-band knowledge of the deployed architecture, doing
    ++       so removes the user from the loop, which prevents them from catching
    ++       deployment mistakes and allows any such mistakes to be exploited
    ++       silently. Access to the database must be tightly restricted to only
    ++       trusted clients
    ++       <footnote>
    ++        <para>
    ++         That is, "trusted" in the sense that the OAuth client and the
    ++         <productname>PostgreSQL</productname> server are controlled by the same
    ++         entity. Notably, the Device Authorization client flow supported by
    ++         libpq does not usually meet this bar, since it's designed for use by
    ++         public/untrusted clients.
    ++        </para>
    ++       </footnote>
    ++       if users are not prompted for additional scopes.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++    <varlistentry>
    ++     <term>Authenticate the End User</term>
    ++     <listitem>
    ++      <para>
    ++       Finally, the validator should determine a user identifier for the token,
    ++       either by asking the provider for this information or by extracting it
    ++       from the token itself, and return that identifier to the server (which
    ++       will then make a final authorization decision using the HBA
    ++       configuration). This identifier will be available within the session via
    ++       <link linkend="functions-info-session-table"><function>system_user</function></link>
    ++       and recorded in the server logs if <xref linkend="guc-log-connections"/>
    ++       is enabled.
    ++      </para>
    ++      <para>
    ++       Different providers may record a variety of different authentication
    ++       information for an end user, typically referred to as
    ++       <emphasis>claims</emphasis>. Providers usually document which of these
    ++       claims are trustworthy enough to use for authorization decisions and
    ++       which are not. (For instance, it would probably not be wise to use an
    ++       end user's full name as the identifier for authentication, since many
    ++       providers allow users to change their display names arbitrarily.)
    ++       Ultimately, the choice of which claim (or combination of claims) to use
    ++       comes down to the provider implementation and application requirements.
    ++      </para>
    ++      <para>
    ++       Note that anonymous/pseudonymous login is possible as well, by enabling
    ++       usermap delegation; see
    ++       <xref linkend="oauth-validator-design-usermap-delegation"/>.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++   </variablelist>
    ++  </sect2>
    ++
    ++  <sect2 id="oauth-validator-design-guidelines">
    ++   <title>General Coding Guidelines</title>
    ++   <para>
    ++    The following guidelines apply to validator implementations in general.
    ++   </para>
    ++   <variablelist>
    ++    <varlistentry>
    ++     <term>Token Confidentiality</term>
    ++     <listitem>
    ++      <para>
    ++       Modules should not write tokens, or pieces of tokens, into the server
    ++       log. This is true even if the module considers the token invalid; an
    ++       attacker who confuses a client into communicating with the wrong provider
    ++       should not be able to retrieve that (otherwise valid) token from the
    ++       disk.
    ++      </para>
    ++      <para>
    ++       Implementations that send tokens over the network (for example, to
    ++       perform online token validation with a provider) must authenticate the
    ++       peer and ensure that strong transport security is in use.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++    <varlistentry>
    ++     <term>Logging</term>
    ++     <listitem>
    ++      <para>
    ++       Modules may use the same <link linkend="error-message-reporting">logging
    ++       facilities</link> as standard extensions; however, the rules for emitting
    ++       log entries to the client are subtly different during the authentication
    ++       phase of the connection. Generally speaking, modules should log
    ++       verification problems at the <symbol>COMMERROR</symbol> level and return
    ++       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
    ++       to unwind the stack, to avoid leaking information to unauthenticated
    ++       clients.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++    <varlistentry>
    ++     <term>Interruptibility</term>
    ++     <listitem>
    ++      <para>
    ++       Modules must remain interruptible by signals so that the server can
    ++       correctly handle authentication timeouts and shutdown signals from
    ++       <application>pg_ctl</application>. For example, a module receiving
    ++       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
    ++       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
    ++       The same should be done during any long-running loops. Failure to follow
    ++       this guidance may result in hung sessions.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++    <varlistentry>
    ++     <term>Testing</term>
    ++     <listitem>
    ++      <para>
    ++       A full treatment of OAuth system testing is well beyond the scope of
    ++       this documentation, but note that implementers should consider negative
    ++       testing to be mandatory. It's trivial to design a module that lets
    ++       authorized users in; the whole point of the system is to keep
    ++       unauthorized users out.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++   </variablelist>
    ++  </sect2>
    ++
    ++  <sect2 id="oauth-validator-design-usermap-delegation">
    ++   <title>Authorizing Users (Usermap Delegation)</title>
    ++   <para>
    ++    The standard deliverable of a validation module is the user identifier,
    ++    which the server will then compare to any configured
    ++    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
    ++    mappings</link> and determine whether the end user is authorized to connect.
    ++    However, OAuth is itself an authorization framework, and tokens may carry
    ++    information about user privileges. For example, a token may be associated
    ++    with the organizational groups that a user belongs to, or list the roles
    ++    that a user may assume, and duplicating that knowledge into local usermaps
    ++    for every server may not be desirable.
    ++   </para>
    ++   <para>
    ++    To bypass username mapping entirely, and have the validator module assume
    ++    the additional responsibility of authorizing user connections, the HBA may
    ++    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
    ++    The module may then use token scopes or an equivalent method to decide
    ++    whether the user is allowed to connect under their desired role. The user
    ++    identifier will still be recorded by the server, but it plays no part in
    ++    determining whether to continue the connection.
    ++   </para>
    ++   <para>
    ++    Using this scheme, authentication itself is optional. As long as the module
    ++    reports that the connection is authorized, login will continue even if there
    ++    is no recorded user identifier at all. This makes it possible to implement
    ++    anonymous or pseudonymous access to the database, where the third-party
    ++    provider performs all necessary authentication but does not provide any
    ++    user-identifying information to the server. (Some providers may create an
    ++    anonymized ID number that can be recorded instead, for later auditing.)
    ++   </para>
    ++   <para>
    ++    Usermap delegation provides the most architectural flexibility, but it turns
    ++    the validator module into a single point of failure for connection
    ++    authorization. Use with caution.
    ++   </para>
    ++  </sect2>
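
As a concrete illustration, usermap delegation might be enabled with an HBA line like the following. This is a hypothetical sketch: the <literal>validator</literal> and <literal>delegate_ident_mapping</literal> option names come from this patch, while the module name and connection parameters are placeholders.

```
# pg_hba.conf -- the "my_validator" module makes the entire authorization
# decision, so no pg_ident.conf usermap is consulted.
host  all  all  samenet  oauth  validator=my_validator  delegate_ident_mapping=1
```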
    ++ </sect1>
     +
     + <sect1 id="oauth-validator-init">
     +  <title>Initialization Functions</title>
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +   validator module a function named
     +   <function>_PG_oauth_validator_module_init</function> must be provided. The
     +   return value of the function must be a pointer to a struct of type
    -+   <structname>OAuthValidatorCallbacks</structname> which contains all that
    -+   libpq need to perform token validation using the module. The returned
    ++   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
    ++   the module's token validation functions. The returned
     +   pointer must be of server lifetime, which is typically achieved by defining
     +   it as a <literal>static const</literal> variable in global scope.
     +<programlisting>
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +  <title>OAuth Validator Callbacks</title>
     +  <para>
     +   OAuth validator modules implement their functionality by defining a set of
    -+   callbacks, libpq will call them as required to process the authentication
    -+   request from the user.
    ++   callbacks. The server will call them as required to process the
    ++   authentication request from the user.
     +  </para>
     +
     +  <sect2 id="oauth-validator-callback-startup">
    @@ src/backend/libpq/auth-oauth.c (new)
     +	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
     +};
     +
    -+
    ++/* Valid states for the oauth_exchange() machine. */
     +typedef enum
     +{
     +	OAUTH_STATE_INIT = 0,
    @@ src/backend/libpq/auth-oauth.c (new)
     +	OAUTH_STATE_FINISHED,
     +} oauth_state;
     +
    ++/* Mechanism callback state. */
     +struct oauth_ctx
     +{
     +	oauth_state state;
    @@ src/backend/libpq/auth-oauth.c (new)
     +static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
     +static bool validate(Port *port, const char *auth);
     +
    -+#define KVSEP 0x01
    -+#define AUTH_KEY "auth"
    -+#define BEARER_SCHEME "Bearer "
    ++/* Constants seen in an OAUTHBEARER client initial response. */
    ++#define KVSEP 0x01				/* separator byte for key/value pairs */
    ++#define AUTH_KEY "auth"			/* key containing the Authorization header */
    ++#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
     +
    ++/*
    ++ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
    ++ *
    ++ * For a full description of the API, see libpq/sasl.h.
    ++ */
     +static void
     +oauth_get_mechanisms(Port *port, StringInfo buf)
     +{
    @@ src/backend/libpq/auth-oauth.c (new)
     +	appendStringInfoChar(buf, '\0');
     +}
     +
    ++/*
    ++ * Initializes mechanism state and loads the configured validator module.
    ++ *
    ++ * For a full description of the API, see libpq/sasl.h.
    ++ */
     +static void *
     +oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
     +{
    @@ src/backend/libpq/auth-oauth.c (new)
     +	return ctx;
     +}
     +
    ++/*
    ++ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
    ++ * apart the client initial response and validates the Bearer token. It also
    ++ * handles the dummy error response for a failed handshake, as described in
    ++ * Sec. 3.2.3.
    ++ *
    ++ * For a full description of the API, see libpq/sasl.h.
    ++ */
     +static int
     +oauth_exchange(void *opaq, const char *input, int inputlen,
     +			   char **output, int *outputlen, const char **logdetail)
    @@ src/backend/libpq/auth-oauth.c (new)
     +	return NULL;
     +}
     +
    ++/*
    ++ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
    ++ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
    ++ * discovery document, which the client may use to conduct its OAuth flow.
    ++ */
     +static void
     +generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
     +{
    @@ src/backend/libpq/auth-oauth.c (new)
     +	return token;
     +}
     +
    ++/*
    ++ * Checks that the "auth" kvpair in the client response contains a syntactically
    ++ * valid Bearer token, then passes it along to the loaded validator module for
    ++ * authorization. Returns true if validation succeeds.
    ++ */
     +static bool
     +validate(Port *port, const char *auth)
     +{
    @@ src/backend/libpq/auth-oauth.c (new)
     +	before_shmem_exit(shutdown_validator_library, 0);
     +}
     +
    ++/*
    ++ * Call the validator module's shutdown callback, if one is provided. This is
    ++ * invoked via before_shmem_exit().
    ++ */
     +static void
     +shutdown_validator_library(int code, Datum arg)
     +{
    @@ src/backend/libpq/hba.c: parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
     +					errcode(ERRCODE_CONFIG_FILE_ERROR),
     +			/* translator: strings are replaced with hba options */
     +					errmsg("%s cannot be used in combination with %s",
    -+						   "map", "trust_validator_authz"),
    ++						   "map", "delegate_ident_mapping"),
     +					errcontext("line %d of configuration file \"%s\"",
     +							   line_num, file_name));
    -+			*err_msg = "map cannot be used in combination with trust_validator_authz";
    ++			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
     +			return NULL;
     +		}
     +	}
    @@ src/backend/libpq/hba.c: parse_hba_auth_opt(char *name, char *val, HbaLine *hbal
     +		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
     +		hbaline->oauth_validator = pstrdup(val);
     +	}
    -+	else if (strcmp(name, "trust_validator_authz") == 0)
    ++	else if (strcmp(name, "delegate_ident_mapping") == 0)
     +	{
    -+		REQUIRE_AUTH_OPTION(uaOAuth, "trust_validator_authz", "oauth");
    ++		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
     +		if (strcmp(val, "1") == 0)
     +			hbaline->oauth_skip_usermap = true;
     +		else
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +/*-------------------------------------------------------------------------
     + *
     + * fe-auth-oauth-curl.c
    -+ *	   The libcurl implementation of OAuth/OIDC authentication.
    ++ *	   The libcurl implementation of OAuth/OIDC authentication, using the
    ++ *	   OAuth Device Authorization Grant (RFC 8628).
     + *
     + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
     + * Portions Copyright (c) 1994, Regents of the University of California
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + */
     +
     +/* States for the overall async machine. */
    -+typedef enum
    ++enum OAuthStep
     +{
     +	OAUTH_STEP_INIT = 0,
     +	OAUTH_STEP_DISCOVERY,
     +	OAUTH_STEP_DEVICE_AUTHORIZATION,
     +	OAUTH_STEP_TOKEN_REQUEST,
     +	OAUTH_STEP_WAIT_INTERVAL,
    -+} OAuthStep;
    ++};
     +
     +/*
     + * The async_ctx holds onto state that needs to persist across multiple calls
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + */
     +struct async_ctx
     +{
    -+	OAuthStep	step;			/* where are we in the flow? */
    ++	enum OAuthStep step;		/* where are we in the flow? */
     +
     +#ifdef HAVE_SYS_EPOLL_H
     +	int			timerfd;		/* a timerfd for signaling async timeouts */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * JSON Parser Definitions
     + */
     +
    ++/*
    ++ * Parses authorization server metadata. Fields are defined by OIDC Discovery
    ++ * 1.0 and RFC 8414.
    ++ */
     +static bool
     +parse_provider(struct async_ctx *actx, struct provider *provider)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return parsed;
     +}
     +
    ++/*
    ++ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
    ++ */
     +static bool
     +parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +}
     +
    ++/*
    ++ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
    ++ * uses the error response defined in RFC 6749, Sec. 5.2).
    ++ */
     +static bool
     +parse_token_error(struct async_ctx *actx, struct token_error *err)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
     +}
     +
    ++/*
    ++ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
    ++ * success response defined in RFC 6749, Sec. 5.1).
    ++ */
     +static bool
     +parse_access_token(struct async_ctx *actx, struct token *tok)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	/*
     +	 * authorization_pending and slow_down are the only acceptable errors;
    -+	 * anything else and we bail.
    ++	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
     +	 */
     +	err = &tok.err;
     +	if (strcmp(err->error, "authorization_pending") != 0 &&
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	/*
     +	 * A slow_down error requires us to permanently increase our retry
    -+	 * interval by five seconds. RFC 8628, Sec. 3.5.
    ++	 * interval by five seconds.
     +	 */
     +	if (strcmp(err->error, "slow_down") == 0)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +prompt_user(struct async_ctx *actx, PGconn *conn)
     +{
     +	int			res;
    -+	PQpromptOAuthDevice prompt = {
    ++	PGpromptOAuthDevice prompt = {
     +		.verification_uri = actx->authz.verification_uri,
     +		.user_code = actx->authz.user_code,
     +		/* TODO: optional fields */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +/*-------------------------------------------------------------------------
     + *
     + * fe-auth-oauth.c
    -+ *	   The front-end (client) implementation of OAuth/OIDC authentication.
    ++ *	   The front-end (client) implementation of OAuth/OIDC authentication
    ++ *	   using the SASL OAUTHBEARER mechanism.
     + *
     + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
     + * Portions Copyright (c) 1994, Regents of the University of California
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	oauth_free,
     +};
     +
    ++/*
    ++ * Initializes mechanism state for OAUTHBEARER.
    ++ *
    ++ * For a full description of the API, see libpq/fe-auth-sasl.h.
    ++ */
     +static void *
     +oauth_init(PGconn *conn, const char *password,
     +		   const char *sasl_mechanism)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	if (!state)
     +		return NULL;
     +
    -+	state->state = FE_OAUTH_INIT;
    ++	state->step = FE_OAUTH_INIT;
     +	state->conn = conn;
     +
     +	return state;
     +}
     +
    ++/*
    ++ * Frees the state allocated by oauth_init().
    ++ */
    ++static void
    ++oauth_free(void *opaq)
    ++{
    ++	fe_oauth_state *state = opaq;
    ++
    ++	free(state->token);
    ++	if (state->async_ctx)
    ++		state->free_async_ctx(state->conn, state->async_ctx);
    ++
    ++	free(state);
    ++}
    ++
     +#define kvsep "\x01"
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return response;
     +}
     +
    ++/*
    ++ * JSON Parser (for the OAUTHBEARER error result)
    ++ */
    ++
    ++/* Relevant JSON fields in the error result object. */
     +#define ERROR_STATUS_FIELD "status"
     +#define ERROR_SCOPE_FIELD "scope"
     +#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return issuer;
     +}
     +
    ++/*
    ++ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
    ++ * stores any discovered openid_configuration and scope settings for the
    ++ * connection. conn->oauth_want_retry will be set if the error status is
    ++ * suitable for a second attempt.
    ++ */
     +static bool
     +handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
     +{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return success;
     +}
     +
    -+static void
    -+free_request(PGconn *conn, void *vreq)
    -+{
    -+	PQoauthBearerRequest *request = vreq;
    -+
    -+	if (request->cleanup)
    -+		request->cleanup(conn, request);
    -+
    -+	free(request);
    -+}
    -+
    ++/*
    ++ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
    ++ * Delegates the retrieval of the token to the application's async callback.
    ++ *
    ++ * This will be called multiple times as needed; the application is responsible
    ++ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
    ++ * statuses for use by PQconnectPoll().
    ++ */
     +static PostgresPollingStatusType
     +run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
     +{
     +	fe_oauth_state *state = conn->sasl_state;
    -+	PQoauthBearerRequest *request = state->async_ctx;
    ++	PGoauthBearerRequest *request = state->async_ctx;
     +	PostgresPollingStatusType status;
     +
     +	if (!request->async)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return status;
     +}
     +
    ++/*
    ++ * Cleanup callback for the user flow. Delegates most of its job to the
    ++ * user-provided cleanup implementation.
    ++ */
    ++static void
    ++free_request(PGconn *conn, void *vreq)
    ++{
    ++	PGoauthBearerRequest *request = vreq;
    ++
    ++	if (request->cleanup)
    ++		request->cleanup(conn, request);
    ++
    ++	free(request);
    ++}
    ++
    ++/*
    ++ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
    ++ * token for presentation to the server.
    ++ *
    ++ * If the application has registered a custom flow handler using
    ++ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
    ++ * if it has one cached for immediate use), or set up for a series of
    ++ * asynchronous callbacks which will be managed by run_user_oauth_flow().
    ++ *
    ++ * If the default handler is used instead, a Device Authorization flow is used
    ++ * for the connection if support has been compiled in. (See
    ++ * fe-auth-oauth-curl.c for implementation details.)
    ++ *
    ++ * If neither a custom handler nor the builtin flow is available, the connection
    ++ * fails here.
    ++ */
     +static bool
     +setup_token_request(PGconn *conn, fe_oauth_state *state)
     +{
     +	int			res;
    -+	PQoauthBearerRequest request = {
    ++	PGoauthBearerRequest request = {
     +		.openid_configuration = conn->oauth_discovery_uri,
     +		.scope = conn->oauth_scope,
     +	};
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
     +	if (res > 0)
     +	{
    -+		PQoauthBearerRequest *request_copy;
    ++		PGoauthBearerRequest *request_copy;
     +
     +		if (request.token)
     +		{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return true;
     +}
     +
    ++/*
    ++ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
    ++ *
    ++ * If the necessary OAuth parameters are set up on the connection, this will run
    ++ * the client flow asynchronously and present the resulting token to the server.
    ++ * Otherwise, an empty discovery response will be sent and any parameters sent
    ++ * back by the server will be stored for a second attempt.
    ++ *
    ++ * For a full description of the API, see libpq/sasl.h.
    ++ */
     +static SASLStatus
     +oauth_exchange(void *opaq, bool final,
     +			   char *input, int inputlen,
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	*output = NULL;
     +	*outputlen = 0;
     +
    -+	switch (state->state)
    ++	switch (state->step)
     +	{
     +		case FE_OAUTH_INIT:
     +			/* We begin in the initial response phase. */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +					 */
     +					Assert(conn->async_auth);	/* should have been set
     +												 * already */
    -+					state->state = FE_OAUTH_REQUESTING_TOKEN;
    ++					state->step = FE_OAUTH_REQUESTING_TOKEN;
     +					return SASL_ASYNC;
     +				}
     +			}
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +				return SASL_FAILED;
     +
     +			*outputlen = strlen(*output);
    -+			state->state = FE_OAUTH_BEARER_SENT;
    ++			state->step = FE_OAUTH_BEARER_SENT;
     +
     +			return SASL_CONTINUE;
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			}
     +			*outputlen = strlen(*output);	/* == 1 */
     +
    -+			state->state = FE_OAUTH_SERVER_ERROR;
    ++			state->step = FE_OAUTH_SERVER_ERROR;
     +			return SASL_CONTINUE;
     +
     +		case FE_OAUTH_SERVER_ERROR:
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	return false;
     +}
     +
    -+static void
    -+oauth_free(void *opaq)
    -+{
    -+	fe_oauth_state *state = opaq;
    -+
    -+	free(state->token);
    -+	if (state->async_ctx)
    -+		state->free_async_ctx(state->conn, state->async_ctx);
    -+
    -+	free(state);
    -+}
    -+
     +/*
     + * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
     + */
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +#include "libpq-int.h"
     +
     +
    -+typedef enum
    ++enum fe_oauth_step
     +{
     +	FE_OAUTH_INIT,
     +	FE_OAUTH_REQUESTING_TOKEN,
     +	FE_OAUTH_BEARER_SENT,
     +	FE_OAUTH_SERVER_ERROR,
    -+} fe_oauth_state_enum;
    ++};
     +
     +typedef struct
     +{
    -+	fe_oauth_state_enum state;
    ++	enum fe_oauth_step step;
     +
     +	PGconn	   *conn;
     +	char	   *token;
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
     +extern bool oauth_unsafe_debugging_enabled(void);
     +
    ++/* Mechanisms in fe-auth-oauth.c */
    ++extern const pg_fe_sasl_mech pg_oauth_mech;
    ++
     +#endif							/* FE_AUTH_OAUTH_H */
     
      ## src/interfaces/libpq/fe-auth-sasl.h ##
    @@ src/interfaces/libpq/fe-auth.c
      #include "common/scram-common.h"
      #include "fe-auth.h"
      #include "fe-auth-sasl.h"
    ++#include "fe-auth-oauth.h"
    + #include "libpq-fe.h"
    + 
    + #ifdef ENABLE_GSS
     @@ src/interfaces/libpq/fe-auth.c: pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
       * Initialize SASL authentication exchange.
       */
    @@ src/interfaces/libpq/fe-auth.c: PQchangePassword(PGconn *conn, const char *user,
     +}
     +
     +int
    -+PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data)
    ++PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
     +{
     +	return 0;					/* handle nothing */
     +}
    @@ src/interfaces/libpq/fe-auth.h
      extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
      extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
      
    -@@ src/interfaces/libpq/fe-auth.h: extern char *pg_fe_scram_build_secret(const char *password,
    - 									  int iterations,
    - 									  const char **errstr);
    - 
    -+/* Mechanisms in fe-auth-oauth.c */
    -+extern const pg_fe_sasl_mech pg_oauth_mech;
    -+
    - #endif							/* FE_AUTH_H */
     
      ## src/interfaces/libpq/fe-connect.c ##
    +@@
    + #include "common/scram-common.h"
    + #include "common/string.h"
    + #include "fe-auth.h"
    ++#include "fe-auth-oauth.h"
    + #include "libpq-fe.h"
    + #include "libpq-int.h"
    + #include "mb/pg_wchar.h"
     @@ src/interfaces/libpq/fe-connect.c: static const internalPQconninfoOption PQconninfoOptions[] = {
      		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
      	offsetof(struct pg_conn, load_balance_hosts)},
    @@ src/interfaces/libpq/libpq-fe.h: typedef enum
     +	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
     +									 * URL */
     +	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
    -+} PGAuthData;
    ++} PGauthData;
     +
      /* PGconn encapsulates a connection to the backend.
       * The contents of this struct are not supposed to be known to applications.
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
      
      /* === in fe-auth.c === */
      
    -+typedef struct _PQpromptOAuthDevice
    ++typedef struct _PGpromptOAuthDevice
     +{
     +	const char *verification_uri;	/* verification URI to visit */
     +	const char *user_code;		/* user code to enter */
    -+} PQpromptOAuthDevice;
    ++} PGpromptOAuthDevice;
     +
    -+/* for _PQoauthBearerRequest.async() */
    ++/* for PGoauthBearerRequest.async() */
     +#ifdef WIN32
     +#define SOCKTYPE SOCKET
     +#else
     +#define SOCKTYPE int
     +#endif
     +
    -+typedef struct _PQoauthBearerRequest
    ++typedef struct _PGoauthBearerRequest
     +{
     +	/* Hook inputs (constant across all calls) */
     +	const char *const openid_configuration; /* OIDC discovery URI */
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 * request->token must be set by the hook.
     +	 */
     +	PostgresPollingStatusType (*async) (PGconn *conn,
    -+										struct _PQoauthBearerRequest *request,
    ++										struct _PGoauthBearerRequest *request,
     +										SOCKTYPE *altsock);
     +
     +	/*
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 * This is technically optional, but highly recommended, because there is
     +	 * no other indication as to when it is safe to free the token.
     +	 */
    -+	void		(*cleanup) (PGconn *conn, struct _PQoauthBearerRequest *request);
    ++	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
     +
     +	/*
     +	 * The hook should set this to the Bearer token contents for the
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 * the cleanup callback.
     +	 */
     +	void	   *user;
    -+} PQoauthBearerRequest;
    ++} PGoauthBearerRequest;
     +
     +#undef SOCKTYPE
     +
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
      extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
      extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
      
    -+typedef int (*PQauthDataHook_type) (PGAuthData type, PGconn *conn, void *data);
    ++typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
     +extern void PQsetAuthDataHook(PQauthDataHook_type hook);
     +extern PQauthDataHook_type PQgetAuthDataHook(void);
    -+extern int	PQdefaultAuthDataHook(PGAuthData type, PGconn *conn, void *data);
    ++extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
     +
      /* === in encnames.c === */
      
      extern int	pg_char_to_encoding(const char *name);
     
      ## src/interfaces/libpq/libpq-int.h ##
    -@@ src/interfaces/libpq/libpq-int.h: typedef struct pg_conn_host
    - 								 * found in password file. */
    - } pg_conn_host;
    - 
    -+typedef PostgresPollingStatusType (*AsyncAuthFunc) (PGconn *conn, pgsocket *altsock);
    -+
    - /*
    -  * PGconn stores all the state data associated with a single connection
    -  * to a backend.
     @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      								 * cancel request, instead of being a normal
      								 * connection that's used for queries */
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      										 * know which auth response we're
      										 * sending */
      
    -+	AsyncAuthFunc async_auth;	/* callback for external async authentication */
    ++	/* Callback for external async authentication */
    ++	PostgresPollingStatusType (*async_auth) (PGconn *conn, pgsocket *altsock);
     +	pgsocket	altsock;		/* alternative socket for client to poll */
     +
     +
    @@ src/test/modules/oauth_validator/Makefile (new)
     +
     +endif
     
    + ## src/test/modules/oauth_validator/README (new) ##
    +@@
    ++Test programs and libraries for OAuth
    ++-------------------------------------
    ++
    ++This folder contains tests for the client- and server-side OAuth
    ++implementations. Most tests are run end-to-end to test both simultaneously. The
    ++tests in t/001_server use a mock OAuth authorization server, implemented jointly
    ++by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
    ++Authorization flow. The tests in t/002_client exercise custom OAuth flows and
    ++don't need an authorization server.
    ++
    ++Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
    ++since localhost HTTP servers will be started. A Python installation is required
    ++to run the mock authorization server.
    +
      ## src/test/modules/oauth_validator/fail_validator.c (new) ##
     @@
     +/*-------------------------------------------------------------------------
    @@ src/test/modules/oauth_validator/fail_validator.c (new)
     +										 const char *token,
     +										 const char *role);
     +
    ++/* Callback implementations (we only need the main one) */
     +static const OAuthValidatorCallbacks validator_callbacks = {
     +	.validate_cb = fail_token,
     +};
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +/*-------------------------------------------------------------------------
     + *
     + * oauth_hook_client.c
    -+ *		Verify OAuth hook functionality in libpq
    ++ *		Test driver for t/002_client.pl, which verifies OAuth hook
    ++ *		functionality in libpq.
     + *
     + * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
     + * Portions Copyright (c) 1994, Regents of the University of California
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +#include "getopt_long.h"
     +#include "libpq-fe.h"
     +
    -+static int	handle_auth_data(PGAuthData type, PGconn *conn, void *data);
    ++static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
     +
     +static void
     +usage(char *argv[])
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +	return 0;
     +}
     +
    ++/*
    ++ * PQauthDataHook implementation. Replaces the default client flow by handling
    ++ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
    ++ */
     +static int
    -+handle_auth_data(PGAuthData type, PGconn *conn, void *data)
    ++handle_auth_data(PGauthData type, PGconn *conn, void *data)
     +{
    -+	PQoauthBearerRequest *req = data;
    ++	PGoauthBearerRequest *req = data;
     +
     +	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
     +		return 0;
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
      ## src/test/modules/oauth_validator/t/001_server.pl (new) ##
     @@
     +
    ++#
    ++# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
    ++# setup.
    ++#
     +# Copyright (c) 2021-2024, PostgreSQL Global Development Group
    ++#
     +
     +use strict;
     +use warnings FATAL => 'all';
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +use MIME::Base64 qw(encode_base64);
     +use PostgreSQL::Test::Cluster;
     +use PostgreSQL::Test::Utils;
    -+use PostgreSQL::Test::OAuthServer;
     +use Test::More;
     +
    ++use FindBin;
    ++use lib $FindBin::RealBin;
    ++
    ++use OAuth::Server;
    ++
     +if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
     +{
     +	plan skip_all =>
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +# Save a background connection for later configuration changes.
     +my $bgconn = $node->background_psql('postgres');
     +
    -+my $webserver = PostgreSQL::Test::OAuthServer->new();
    ++my $webserver = OAuth::Server->new();
     +$webserver->run();
     +
     +END
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
     +# that case as well.)
     +$common_connstr =
    -+  "user=test dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope=''";
    ++  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
     +
     +$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
     +$node->reload;
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +  $node->wait_for_log(qr/reloading configuration files/, $log_start);
     +
     +if ($node->connect_fails(
    -+		"$common_connstr oauth_client_id=f02c6361-0635",
    ++		"$common_connstr user=test",
     +		"validator must set authn_id",
     +		expected_stderr => qr/OAuth bearer authentication failed/))
     +{
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	$log_start = $log_end;
     +}
     +
    ++#
    ++# Test user mapping.
    ++#
    ++
    ++# Allow "user@example.com" to log in under the test role.
    ++unlink($node->data_dir . '/pg_ident.conf');
    ++$node->append_conf(
    ++	'pg_ident.conf', qq{
    ++oauthmap	user\@example.com	test
    ++});
    ++
    ++# test and testalt use the map; testparam uses ident delegation.
    ++unlink($node->data_dir . '/pg_hba.conf');
    ++$node->append_conf(
    ++	'pg_hba.conf', qq{
    ++local all test      oauth issuer="$issuer" scope="" map=oauthmap
    ++local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
    ++local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
    ++});
    ++
    ++# To start, have the validator use the role names as authn IDs.
    ++$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
    ++
    ++$node->reload;
    ++$log_start =
    ++  $node->wait_for_log(qr/reloading configuration files/, $log_start);
    ++
    ++# The test and testalt roles should no longer map correctly.
    ++$node->connect_fails(
    ++	"$common_connstr user=test",
    ++	"mismatched username map (test)",
    ++	expected_stderr => qr/OAuth bearer authentication failed/);
    ++$node->connect_fails(
    ++	"$common_connstr user=testalt",
    ++	"mismatched username map (testalt)",
    ++	expected_stderr => qr/OAuth bearer authentication failed/);
    ++
    ++# Have the validator identify the end user as user@example.com.
    ++$bgconn->query_safe(
    ++	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
    ++$node->reload;
    ++$log_start =
    ++  $node->wait_for_log(qr/reloading configuration files/, $log_start);
    ++
    ++# Now the test role can be logged into. (testalt still can't be mapped.)
    ++$node->connect_ok(
    ++	"$common_connstr user=test",
    ++	"matched username map (test)",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
    ++$node->connect_fails(
    ++	"$common_connstr user=testalt",
    ++	"mismatched username map (testalt)",
    ++	expected_stderr => qr/OAuth bearer authentication failed/);
    ++
    ++# testparam ignores the map entirely.
    ++$node->connect_ok(
    ++	"$common_connstr user=testparam",
    ++	"delegated ident (testparam)",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
    ++
     +$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
     +$node->reload;
     +$log_start =
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     
      ## src/test/modules/oauth_validator/t/002_client.pl (new) ##
     @@
    -+
    ++#
    ++# Exercises the API for custom OAuth client flows, using the oauth_hook_client
    ++# test driver.
    ++#
     +# Copyright (c) 2021-2024, PostgreSQL Global Development Group
    ++#
     +
     +use strict;
     +use warnings FATAL => 'all';
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +	}
     +
     +	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
    -+	diag "running '" . join("' '", @cmd) . "'";
    ++	note "running '" . join("' '", @cmd) . "'";
     +
     +	my ($stdout, $stderr) = run_command(\@cmd);
     +
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +
     +done_testing();
     
    + ## src/test/modules/oauth_validator/t/OAuth/Server.pm (new) ##
    +@@
    ++
    ++# Copyright (c) 2024, PostgreSQL Global Development Group
    ++
    ++=pod
    ++
    ++=head1 NAME
    ++
    ++OAuth::Server - runs a mock OAuth authorization server for testing
    ++
    ++=head1 SYNOPSIS
    ++
    ++  use OAuth::Server;
    ++
    ++  my $server = OAuth::Server->new();
    ++  $server->run;
    ++
    ++  my $port = $server->port;
    ++  my $issuer = "http://localhost:$port";
    ++
    ++  # test against $issuer...
    ++
    ++  $server->stop;
    ++
    ++=head1 DESCRIPTION
    ++
++This is a glue API between the Perl tests and the Python authorization server
    ++daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
    ++in its standard library, so the implementation was ported from Perl.)
    ++
    ++This authorization server does not use TLS (it implements a nonstandard, unsafe
    ++issuer at "http://localhost:<port>"), so libpq in particular will need to set
    ++PGOAUTHDEBUG=UNSAFE to be able to talk to it.
    ++
    ++=cut
    ++
    ++package OAuth::Server;
    ++
    ++use warnings;
    ++use strict;
    ++use Scalar::Util;
    ++use Test::More;
    ++
    ++=pod
    ++
    ++=head1 METHODS
    ++
    ++=over
    ++
++=item OAuth::Server->new()
    ++
    ++Create a new OAuth Server object.
    ++
    ++=cut
    ++
    ++sub new
    ++{
    ++	my $class = shift;
    ++
    ++	my $self = {};
    ++	bless($self, $class);
    ++
    ++	return $self;
    ++}
    ++
    ++=pod
    ++
    ++=item $server->port()
    ++
    ++Returns the port in use by the server.
    ++
    ++=cut
    ++
    ++sub port
    ++{
    ++	my $self = shift;
    ++
    ++	return $self->{'port'};
    ++}
    ++
    ++=pod
    ++
    ++=item $server->run()
    ++
    ++Runs the authorization server daemon in t/oauth_server.py.
    ++
    ++=cut
    ++
    ++sub run
    ++{
    ++	my $self = shift;
    ++	my $port;
    ++
    ++	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
    ++	  or die "failed to start OAuth server: $!";
    ++
    ++	# Get the port number from the daemon. It closes stdout afterwards; that way
    ++	# we can slurp in the entire contents here rather than worrying about the
    ++	# number of bytes to read.
    ++	$port = do { local $/ = undef; <$read_fh> }
    ++	  // die "failed to read port number: $!";
    ++	chomp $port;
    ++	die "server did not advertise a valid port"
    ++	  unless Scalar::Util::looks_like_number($port);
    ++
    ++	$self->{'pid'} = $pid;
    ++	$self->{'port'} = $port;
    ++	$self->{'child'} = $read_fh;
    ++
    ++	note("OAuth provider (PID $pid) is listening on port $port\n");
    ++}
    ++
    ++=pod
    ++
    ++=item $server->stop()
    ++
    ++Sends SIGTERM to the authorization server and waits for it to exit.
    ++
    ++=cut
    ++
    ++sub stop
    ++{
    ++	my $self = shift;
    ++
    ++	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
    ++
    ++	kill(15, $self->{'pid'});
    ++	$self->{'pid'} = undef;
    ++
    ++	# Closing the popen() handle waits for the process to exit.
    ++	close($self->{'child'});
    ++	$self->{'child'} = undef;
    ++}
    ++
    ++=pod
    ++
    ++=back
    ++
    ++=cut
    ++
    ++1;
    +
      ## src/test/modules/oauth_validator/t/oauth_server.py (new) ##
     @@
     +#! /usr/bin/env python3
    ++#
    ++# A mock OAuth authorization server, designed to be invoked from
    ++# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
    ++# so that the Perl tests can contact it) and runs as a daemon until it is
    ++# signaled.
    ++#
     +
     +import base64
     +import http.server
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +
     +
     +class OAuthHandler(http.server.BaseHTTPRequestHandler):
    ++    """
    ++    Core implementation of the authorization server. The API is
    ++    inheritance-based, with entry points at do_GET() and do_POST(). See the
    ++    documentation for BaseHTTPRequestHandler.
    ++    """
    ++
     +    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
     +
     +    def _check_issuer(self):
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +
     +
     +def main():
    ++    """
    ++    Starts the authorization server on localhost. The ephemeral port in use will
    ++    be printed to stdout.
    ++    """
    ++
     +    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
     +
     +    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +    port = s.socket.getsockname()[1]
     +    print(port)
     +
    ++    # stdout is closed to allow the parent to just "read to the end".
     +    stdout = sys.stdout.fileno()
     +    sys.stdout.close()
     +    os.close(stdout)
    @@ src/test/modules/oauth_validator/validator.c (new)
     +											 const char *token,
     +											 const char *role);
     +
    ++/* Callback implementations (exercise all three) */
     +static const OAuthValidatorCallbacks validator_callbacks = {
     +	.startup_cb = validator_startup,
     +	.shutdown_cb = validator_shutdown,
    @@ src/test/modules/oauth_validator/validator.c (new)
     +
     +static char *authn_id = NULL;
     +
    ++/*---
    ++ * Extension entry point. Sets up GUCs for use by tests:
    ++ *
    ++ * - oauth_validator.authn_id	Sets the user identifier to return during token
    ++ *								validation. Defaults to the username in the
    ++ *								startup packet.
    ++ */
     +void
     +_PG_init(void)
     +{
    @@ src/test/modules/oauth_validator/validator.c (new)
     +	MarkGUCPrefixReserved("oauth_validator");
     +}
     +
    ++/*
    ++ * Validator module entry point.
    ++ */
     +const OAuthValidatorCallbacks *
     +_PG_oauth_validator_module_init(void)
     +{
    @@ src/test/modules/oauth_validator/validator.c (new)
     +
     +#define PRIVATE_COOKIE ((void *) 13579)
     +
    ++/*
    ++ * Startup callback, to set up private data for the validator.
    ++ */
     +static void
     +validator_startup(ValidatorModuleState *state)
     +{
     +	state->private_data = PRIVATE_COOKIE;
     +}
     +
    ++/*
    ++ * Shutdown callback, to tear down the validator.
    ++ */
     +static void
     +validator_shutdown(ValidatorModuleState *state)
     +{
    @@ src/test/modules/oauth_validator/validator.c (new)
     +			 state->private_data);
     +}
     +
    ++/*
    ++ * Validator implementation. Logs the incoming data and authorizes the token;
    ++ * the behavior can be modified via the module's GUC settings.
    ++ */
     +static ValidatorModuleResult *
     +validate_token(ValidatorModuleState *state, const char *token, const char *role)
     +{
    @@ src/test/perl/PostgreSQL/Test/Cluster.pm: sub connect_ok
      	$self->log_check($test_name, $log_location, %params);
      }
     
    - ## src/test/perl/PostgreSQL/Test/OAuthServer.pm (new) ##
    -@@
    -+#!/usr/bin/perl
    -+
    -+package PostgreSQL::Test::OAuthServer;
    -+
    -+use warnings;
    -+use strict;
    -+use Scalar::Util;
    -+use Socket;
    -+use IO::Select;
    -+use Test::More;
    -+
    -+local *server_socket;
    -+
    -+sub new
    -+{
    -+	my $class = shift;
    -+
    -+	my $self = {};
    -+	bless($self, $class);
    -+
    -+	return $self;
    -+}
    -+
    -+sub port
    -+{
    -+	my $self = shift;
    -+
    -+	return $self->{'port'};
    -+}
    -+
    -+sub run
    -+{
    -+	my $self = shift;
    -+	my $port;
    -+
    -+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
    -+	  or die "failed to start OAuth server: $!";
    -+
    -+	read($read_fh, $port, 7) // die "failed to read port number: $!";
    -+	chomp $port;
    -+	die "server did not advertise a valid port"
    -+	  unless Scalar::Util::looks_like_number($port);
    -+
    -+	$self->{'pid'} = $pid;
    -+	$self->{'port'} = $port;
    -+	$self->{'child'} = $read_fh;
    -+
    -+	note("OAuth provider (PID $pid) is listening on port $port\n");
    -+}
    -+
    -+sub stop
    -+{
    -+	my $self = shift;
    -+
    -+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
    -+
    -+	kill(15, $self->{'pid'});
    -+	$self->{'pid'} = undef;
    -+
    -+	# Closing the popen() handle waits for the process to exit.
    -+	close($self->{'child'});
    -+	$self->{'child'} = undef;
    -+}
    -+
    -+1;
    -
      ## src/tools/pgindent/pgindent ##
     @@ src/tools/pgindent/pgindent: sub pre_indent
      	# Protect wrapping in CATALOG()
    @@ src/tools/pgindent/pgindent: sub post_indent
      	# Undo change of dash-protected block comments
     
      ## src/tools/pgindent/typedefs.list ##
    -@@ src/tools/pgindent/typedefs.list: ArrayMetaState
    - ArraySubWorkspace
    - ArrayToken
    - ArrayType
    -+AsyncAuthFunc
    - AsyncQueueControl
    - AsyncQueueEntry
    - AsyncRequest
     @@ src/tools/pgindent/typedefs.list: CState
      CTECycleClause
      CTEMaterialize
    @@ src/tools/pgindent/typedefs.list: NumericDigit
      NumericSortSupport
      NumericSumAccum
      NumericVar
    -+OAuthStep
     +OAuthValidatorCallbacks
      OM_uint32
      OP
      OSAPerGroupState
    -@@ src/tools/pgindent/typedefs.list: PFN
    - PGAlignedBlock
    - PGAlignedXLogBlock
    - PGAsyncStatusType
    -+PGAuthData
    - PGCALL2
    - PGChecksummablePage
    - PGContextVisibility
    +@@ src/tools/pgindent/typedefs.list: PGVerbosity
    + PG_Locale_Strategy
    + PG_Lock_Status
    + PG_init_t
    ++PGauthData
    + PGcancel
    + PGcancelConn
    + PGcmdQueueEntry
    +@@ src/tools/pgindent/typedefs.list: PGconn
    + PGdataValue
    + PGlobjfuncs
    + PGnotify
    ++PGoauthBearerRequest
    + PGpipelineStatus
    ++PGpromptOAuthDevice
    + PGresAttDesc
    + PGresAttValue
    + PGresParamDesc
     @@ src/tools/pgindent/typedefs.list: PQArgBlock
      PQEnvironmentOption
      PQExpBuffer
    @@ src/tools/pgindent/typedefs.list: PQArgBlock
      PQcommMethods
      PQconninfoOption
      PQnoticeProcessor
    - PQnoticeReceiver
    -+PQoauthBearerRequest
    - PQprintOpt
    -+PQpromptOAuthDevice
    - PQsslKeyPassHook_OpenSSL_type
    - PREDICATELOCK
    - PREDICATELOCKTAG
     @@ src/tools/pgindent/typedefs.list: VacuumRelation
      VacuumStmt
      ValidIOData
    @@ src/tools/pgindent/typedefs.list: explain_get_index_name_hook_type
      fasthash_state
      fd_set
     +fe_oauth_state
    -+fe_oauth_state_enum
      fe_scram_state
      fe_scram_state_enum
      fetch_range_request
2:  28cc3463aaf ! 2:  566d90d30a7 DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +PGRES_POLLING_OK = 3
     +
     +
    -+class PQPromptOAuthDevice(ctypes.Structure):
    ++class PGPromptOAuthDevice(ctypes.Structure):
     +    _fields_ = [
     +        ("verification_uri", ctypes.c_char_p),
     +        ("user_code", ctypes.c_char_p),
     +    ]
     +
     +
    -+class PQOAuthBearerRequest(ctypes.Structure):
    ++class PGOAuthBearerRequest(ctypes.Structure):
     +    pass
     +
     +
    -+PQOAuthBearerRequest._fields_ = [
    ++PGOAuthBearerRequest._fields_ = [
     +    ("openid_configuration", ctypes.c_char_p),
     +    ("scope", ctypes.c_char_p),
     +    (
    @@ src/test/python/client/test_oauth.py (new)
     +        ctypes.CFUNCTYPE(
     +            ctypes.c_int,
     +            ctypes.c_void_p,
    -+            ctypes.POINTER(PQOAuthBearerRequest),
    ++            ctypes.POINTER(PGOAuthBearerRequest),
     +            ctypes.POINTER(ctypes.c_int),
     +        ),
     +    ),
     +    (
     +        "cleanup",
    -+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest)),
    ++        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
     +    ),
     +    ("token", ctypes.c_char_p),
     +    ("user", ctypes.c_void_p),
    @@ src/test/python/client/test_oauth.py (new)
     +        handle_by_default = 0  # does an implementation have to be provided?
     +
     +        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
    -+            cls = PQPromptOAuthDevice
    ++            cls = PGPromptOAuthDevice
     +            handle_by_default = 1
     +        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
    -+            cls = PQOAuthBearerRequest
    ++            cls = PGOAuthBearerRequest
     +        else:
     +            return 0
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    @ctypes.CFUNCTYPE(
     +        ctypes.c_int,
     +        ctypes.c_void_p,
    -+        ctypes.POINTER(PQOAuthBearerRequest),
    ++        ctypes.POINTER(PGOAuthBearerRequest),
     +        ctypes.POINTER(ctypes.c_int),
     +    )
     +    def get_token_wrapper(pgconn, p_request, p_altsock):
    @@ src/test/python/client/test_oauth.py (new)
     +            logging.error("Exception during async callback:\n" + traceback.format_exc())
     +            return PGRES_POLLING_FAILED
     +
    -+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PQOAuthBearerRequest))
    ++    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
     +    def cleanup(pgconn, p_request):
     +        """
     +        Should be called exactly once per connection.
    @@ src/test/python/server/test_oauth.py (new)
     +    ctx = Context()
     +    hba_lines = [
     +        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
    -+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" trust_validator_authz=1\n',
    ++        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
     +        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
     +    ]
     +    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
v39-0001-Add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v39-0001-Add-OAUTHBEARER-SASL-mechanism.patchDownload
From 3dc642d68c864c1e3e65cfb6923344dc1eab2dee Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v39 1/2] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows; see
below).

The client implementation requires libcurl and its development headers.
Pass `curl` to --with-oauth/-Doauth during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied (see below).

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

= Debug Mode =

A "dangerous debugging mode" may be enabled in libpq by setting the
environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
that you will not want in a production system:

- permits the use of plaintext HTTP in the OAuth provider exchange
- sprays HTTP traffic, containing several critical secrets, to stderr
- permits the use of zero-second retry intervals, which can DoS the
  client

= PQauthDataHook =

Clients may override two pieces of OAuth handling using the new
PQsetAuthDataHook():

- PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
  standard error when using the builtin device authorization flow

- PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
  custom asynchronous implementation

In general, a hook implementation should examine the incoming `type` to
decide whether or not to handle a specific piece of authdata; if not, it
should delegate to the previous hook in the chain (retrievable via
PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
follow the authdata-specific instructions. Returning an integer < 0
signals an error condition and abandons the connection attempt.

== PQAUTHDATA_PROMPT_OAUTH_DEVICE ==

The hook should display the device prompt (URL + code) using whatever
method it prefers.

== PQAUTHDATA_OAUTH_BEARER_TOKEN ==

The hook should either directly return a Bearer token for the current
user/issuer/scope combination, if one is available without blocking, or
else set up an asynchronous callback to retrieve one. See the
documentation for PQoauthBearerRequest.
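
As a sketch of the hook chaining described above (not compilable
against stock libpq; the hook-pointer typedef, here assumed to be
PQauthDataHook_type, and the exact hook signature live in the patched
libpq-fe.h -- the struct and constant names below are taken from the
patch itself):

```c
#include <stdio.h>
#include <libpq-fe.h>

static PQauthDataHook_type prev_hook;	/* assumed typedef name */

static int
my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		/* Replace the default stderr prompt with our own UI. */
		PGpromptOAuthDevice *prompt = data;

		fprintf(stderr, "Open %s and enter the code %s\n",
				prompt->verification_uri, prompt->user_code);
		return 1;				/* > 0: handled successfully */
	}

	/* Not ours; delegate to the previous hook in the chain. */
	return prev_hook ? prev_hook(type, conn, data) : 0;
}

static void
install_hook(void)
{
	prev_hook = PQgetAuthDataHook();
	PQsetAuthDataHook(my_auth_data_hook);
}
```

A hook replacing PQAUTHDATA_OAUTH_BEARER_TOKEN would follow the same
shape, filling in the PGoauthBearerRequest members instead.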

= Server-Side Validation =

Because OAuth implementations vary so wildly, and bearer token
validation is heavily dependent on the issuing party, authn/z is done by
communicating with an external validator module using callbacks.
The module must do the following:

1. Validate the bearer token. The correct way to do this depends on the
   issuer, but it generally involves either cryptographic operations to
   prove that the token was issued by a trusted party, or the
   presentation of the bearer token to some other party so that _it_ can
   perform validation.

   The module MUST maintain confidentiality of the bearer token, since
   in most cases it can be used just like a password. (There are ways to
   cryptographically bind tokens to client certificates, but they are
   way beyond the scope of this commit message.)

   If the token cannot be validated, the authorized member of the
   ValidatorModuleResult struct is used to indicate failure.
   Further authentication/authorization is pointless if
   the bearer token wasn't issued by someone you trust.

2. Authenticate the user, authorize the user, or both:

   a. To authenticate the user, use the bearer token to retrieve some
      trusted identifier string for the end user. The exact process for
      this is, again, issuer-dependent. The module will return the
      authenticated identity in the authn_id member.

   b. To authorize the user, return an authorization decision for use in
      combination with the HBA option trust_validator_authz=1 (see below).

      The hard part is in determining whether the given token truly
      authorizes the client to use the given role, which must
      unfortunately be left as an exercise to the reader.

      This obviously requires some care, as a poorly implemented token
      validator may silently open the entire database to anyone with a
      bearer token. But it may be a more portable approach, since OAuth
      is designed as an authorization framework, not an authentication
      framework. For example, the user's bearer token could carry an
      "allow_superuser_access" claim, which would authorize pseudonymous
      database access as any role. It's then up to the OAuth system
      administrators to ensure that allow_superuser_access is doled out
      only to the proper users.

   c. It's possible that the user can be successfully authenticated but
      isn't authorized to connect. In this case, the validator module may
      return the authenticated ID while setting the authorized member to
      false. (This can make it easier to see what's going on in the
      Postgres logs.)
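
A minimal sketch of that validator shape follows. ValidatorModuleResult
and its authorized/authn_id members come from the description above; the
callback signature and the helpers marked "hypothetical" are assumptions,
so consult src/include/libpq/oauth.h and
src/test/modules/oauth_validator/validator.c for the real interface:

```c
#include "postgres.h"
#include "libpq/oauth.h"

static bool
validate_token(const char *token, const char *role,
			   ValidatorModuleResult *res)
{
	/* 1. Validate the bearer token (issuer-specific). */
	if (!issuer_check(token))	/* hypothetical helper */
	{
		res->authorized = false;	/* untrusted token: stop here */
		return true;
	}

	/* 2a. Authentication: report a trusted identifier for pg_ident. */
	res->authn_id = pstrdup(issuer_subject(token));	/* hypothetical */

	/* 2b. Authorization: consulted with trust_validator_authz=1. */
	res->authorized = issuer_allows_role(token, role);	/* hypothetical */

	return true;
}
```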

= OAuth HBA Method =

The oauth method supports the following HBA options (but note that two
of them are not optional, since we have no way of choosing sensible
defaults):

  issuer: Required. The URL of the OAuth issuing party, which the client
          must contact to receive a bearer token.

          Some real-world examples as of the time of writing:
          - https://accounts.google.com
          - https://login.microsoft.com/[tenant-id]/v2.0

  scope:  Required. The OAuth scope(s) required for the server to
          authenticate and/or authorize the user. This is heavily
          deployment-specific, but a simple example is "openid email".

  map:    Optional. Specify a standard PostgreSQL user map; this works
          the same as with other auth methods such as peer. If a map is
          not specified, the user ID returned by the token validator
          must exactly match the role that's being requested (but see
          trust_validator_authz, below).

  trust_validator_authz:
          Optional. When set to 1, this allows the token validator to
          take full control of the authorization process. Standard user
          mapping is skipped: if the validator command succeeds, the
          client is allowed to connect under its desired role and no
          further checks are done.
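
Putting the options together, HBA entries for the two modes (the issuer,
scope, and database names are placeholders; the lines mirror the test
suite's configuration) might look like:

```
# Authentication: validator's authn_id is mapped through pg_ident
host mydb all samehost oauth issuer="https://accounts.example.com" scope="openid email" map=oauth

# Pseudonymous authorization: the validator decides, no user mapping
host mydb all samehost oauth issuer="https://accounts.example.com" scope="openid email" trust_validator_authz=1
```

with a pg_ident.conf map for the first entry such as:

    oauth   /^(.*)@example\.com$   \1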

Several TODOs:
- don't retry forever if the server won't accept our token
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fix intermittent failure in the cleanup callback tests (race
  condition?)
- support require_auth
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- allow passing the configured issuer to the oauth_validator_command, to
  deal with multi-issuer setups
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   17 +-
 config/programs.m4                            |    2 -
 configure                                     |  213 ++
 configure.ac                                  |   32 +
 doc/src/sgml/client-auth.sgml                 |  250 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  363 +++
 doc/src/sgml/oauth-validators.sgml            |  388 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   23 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  853 ++++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    7 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2475 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1005 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   46 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  104 +-
 src/interfaces/libpq/fe-auth.h                |    6 +-
 src/interfaces/libpq/fe-connect.c             |   90 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   88 +
 src/interfaces/libpq/libpq-int.h              |   15 +
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  162 ++
 .../modules/oauth_validator/t/001_server.pl   |  499 ++++
 .../modules/oauth_validator/t/002_client.pl   |  118 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  388 +++
 src/test/modules/oauth_validator/validator.c  |  121 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   12 +
 59 files changed, 7881 insertions(+), 60 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89d..bb5b07db275 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         --buildtype=debug \
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
+        -Dlibcurl=enabled \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -219,6 +220,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -234,6 +236,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-zstd
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
+  -Dlibcurl=enabled
   -Dllvm=enabled
   -Duuid=e2fs
 
@@ -312,8 +315,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f5..d4ff8c82afc 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,8 +142,6 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
-
-
 # PGAC_CHECK_READLINE
 # -------------------
 # Check for the readline library and dependent libraries, either
diff --git a/configure b/configure
index 518c33b73a9..0e812880c20 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support for OAuth client flows
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,144 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support for OAuth client flows" >&5
+$as_echo_n "checking whether to build with libcurl support for OAuth client flows... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12207,6 +12356,59 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
+fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13955,6 +14157,17 @@ fi
 
 done
 
+fi
+
+if test "$with_libcurl" = yes; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 247ae97fa4c..4850a543292 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,27 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support for OAuth client flows])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support for OAuth client flows],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support for OAuth client flows. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1315,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1588,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_libcurl" = yes; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..7ce02481eea 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,240 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it's obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation-specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The issuer identifier of the authorization server, as defined by its
+        discovery document, or a well-known URI pointing to that discovery
+        document. This parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a discovery document URI
+        will be constructed using the issuer identifier. By default, the URI
+        uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, the URI will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between identities provided by the OAuth validator
+        and database user names.  See <xref linkend="auth-username-maps"/> for
+        details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
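+
+    For example, a <filename>pg_hba.conf</filename> entry using OAuth might
+    look like the following (the issuer and scope values are hypothetical
+    placeholders; use the values published by your provider):
+<programlisting>
+# TYPE  DATABASE  USER  ADDRESS   METHOD
+host    all       all   samehost  oauth issuer="https://issuer.example.com" scope="openid postgres"
+</programlisting>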
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e0c8325a39c..1993752ce76 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
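+       <para>
+        For example, to load a single (hypothetical) validator module named
+        <literal>my_validator</literal>:
+<programlisting>
+oauth_validator_libraries = 'my_validator'
+</programlisting>
+       </para>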
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index ebdb5b3bc2d..3fca2910dad 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1141,6 +1141,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2582,6 +2595,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 01f259fd0dc..c5fd4355f8e 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2345,6 +2345,97 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URI
+        providing a set of OAuth configuration parameters. The server must
+        provide a URI that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        This standard handshake requires two separate network connections to the
+        server per authentication attempt. To skip asking the server for a
+        discovery document URI, you may set <literal>oauth_issuer</literal> to a
+        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
+        case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
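+       <para>
+        For example, a connection using the builtin flow might look like this
+        (the issuer and client ID are hypothetical placeholders):
+<programlisting>
+psql 'host=example.org oauth_issuer=https://issuer.example.com oauth_client_id=f02c6361-0635'
+</programlisting>
+       </para>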
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9972,6 +10063,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements support for the OAuth 2.0 Device
+   Authorization client flow, which it uses by default to obtain a bearer token
+   when a server requests OAuth authentication. Applications may replace or
+   augment pieces of this behavior using the hook API described below.
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
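+
+   <para>
+    As a brief sketch, a hook which replaces only the device prompt (via a
+    hypothetical application-defined <function>my_prompt</function> function)
+    and delegates all other authdata requests to the previously installed
+    handler might look like this:
+<programlisting>
+static PQauthDataHook_type prev_hook;
+
+static int
+my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
+{
+    if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
+        return my_prompt(conn, (PGpromptOAuthDevice *) data);
+
+    /* Everything else goes to the previous hook in the chain. */
+    return prev_hook(type, conn, data);
+}
+
+/* During application startup: */
+prev_hook = PQgetAuthDataHook();
+PQsetAuthDataHook(my_auth_data_hook);
+</programlisting>
+   </para>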
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URI */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
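+        <para>
+         As a minimal sketch, a hook which already has a valid token on hand
+         (the token value below is a placeholder) can fill in the request
+         synchronously and register a cleanup callback:
+<programlisting>
+static PQauthDataHook_type prev_hook;    /* saved before PQsetAuthDataHook() */
+
+static void
+cleanup_token(PGconn *conn, PGoauthBearerRequest *request)
+{
+    free(request-&gt;token);
+    request-&gt;token = NULL;
+}
+
+static int
+my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
+{
+    if (type == PQAUTHDATA_OAUTH_BEARER_TOKEN)
+    {
+        PGoauthBearerRequest *request = (PGoauthBearerRequest *) data;
+
+        request-&gt;token = strdup("placeholder-token");
+        request-&gt;cleanup = cleanup_token;
+        return 1;               /* success */
+    }
+
+    return prev_hook(type, conn, data);
+}
+</programlisting>
+        </para>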
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..3e17805e53f
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,388 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    Regardless of the implementation strategy chosen, a validator module must
+    perform the following actions:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should prominently
+       prompt the user to grant that access during the flow. This gives the
+       user the opportunity to reject the request if the client isn't supposed
+       to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
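The offline checks described above can be sketched in plain C. The following is a simplified illustration only: the claims struct and `check_claims()` are hypothetical names, and a real validator must first verify the token's cryptographic signature against the provider's published signing keys before trusting any of these fields.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <time.h>

/*
 * Hypothetical decoded-token claims. These fields may only be trusted
 * after the token's signature has been verified.
 */
struct token_claims
{
	const char *issuer;			/* "where is this token from?" */
	const char *audience;		/* "who is this token for?" */
	time_t		not_before;		/* start of the validity period */
	time_t		expires_at;		/* end of the validity period */
};

/*
 * Checks the three claim categories named in the text above: issuer,
 * audience, and validity period. Any mismatch fails the token.
 */
static bool
check_claims(const struct token_claims *c,
			 const char *trusted_issuer,
			 const char *my_audience,
			 time_t now)
{
	if (strcmp(c->issuer, trusted_issuer) != 0)
		return false;			/* token is from the wrong provider */
	if (strcmp(c->audience, my_audience) != 0)
		return false;			/* token was issued for someone else */
	if (now < c->not_before || now >= c->expires_at)
		return false;			/* outside the validity period */
	return true;
}
```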
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    The following guidelines apply to the implementation of any validator
+    module, regardless of the token validation method in use.
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in hung sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but note that implementers should consider negative
+       testing to be mandatory. It's trivial to design a module that lets
+       authorized users in; the whole point of the system is to keep
+       unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
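As a concrete illustration of the interruptibility guideline, here is a sketch of a retry loop in plain C. `check_interrupts()` is a stand-in for the server's `CHECK_FOR_INTERRUPTS()`, which in a real module either returns normally or aborts the authentication attempt; `flaky_op()` is a toy operation used only to exercise the loop.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for CHECK_FOR_INTERRUPTS(); no interrupt is pending here. */
static bool
check_interrupts(void)
{
	return true;
}

/*
 * Retries an operation that can fail transiently with EINTR/EAGAIN,
 * checking for interrupts before every retry so that authentication
 * timeouts and shutdown requests are honored.
 */
static int
retry_interruptible(int (*op) (void))
{
	for (;;)
	{
		int			rc = op();

		if (rc >= 0)
			return rc;			/* success */
		if (errno != EINTR && errno != EAGAIN)
			return -1;			/* a real failure; give up */
		if (!check_interrupts())
			return -1;			/* interrupted; fail the attempt */
	}
}

/* A toy operation that fails twice with EINTR before succeeding. */
static int	flaky_calls = 0;

static int
flaky_op(void)
{
	if (++flaky_calls < 3)
	{
		errno = EINTR;
		return -1;
	}
	return 42;
}
```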
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validator module is the user identifier,
+    which the server will then compare against any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> to determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
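To make the delegated-authorization decision concrete, here is a minimal sketch in plain C of a scope check a validator might perform under this scheme. The `pg:role:<name>` scope naming is purely hypothetical; each provider defines its own scope vocabulary.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Checks whether a space-separated OAuth scope list contains a scope
 * granting the requested role, using the hypothetical naming scheme
 * "pg:role:<name>".
 */
static bool
role_allowed_by_scopes(const char *scopes, const char *role)
{
	char		expected[64];
	const char *p = scopes;
	size_t		len;

	if (snprintf(expected, sizeof(expected), "pg:role:%s", role)
		>= (int) sizeof(expected))
		return false;			/* role name too long to match safely */

	len = strlen(expected);
	while ((p = strstr(p, expected)) != NULL)
	{
		/* Match only at scope boundaries, not inside a longer scope. */
		bool		at_start = (p == scopes || p[-1] == ' ');
		bool		at_end = (p[len] == '\0' || p[len] == ' ');

		if (at_start && at_end)
			return true;
		p += len;
	}
	return false;
}
```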
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To provide
+   the validator callbacks, and to indicate that the library is an OAuth
+   validator module, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. This function must
+   return a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers
+   to the module's token validation functions. The returned pointer must have
+   server lifetime, which is typically achieved by defining it as a
+   <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    needs to keep state, it can store it in
+    <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
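Putting the callback descriptions together, a minimal validator module might look like the following sketch. The typedefs are reproduced here as stand-ins for the server's <filename>libpq/oauth.h</filename> header (a real module includes that header instead), and `malloc()` stands in for `palloc()`. The validator fails closed, rejecting every token, which is the safe starting point for a real implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Stand-ins for the server's declarations in libpq/oauth.h. */
typedef struct ValidatorModuleState
{
	void	   *private_data;
} ValidatorModuleState;

typedef struct ValidatorModuleResult
{
	bool		authorized;
	char	   *authn_id;
} ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
													   const char *token,
													   const char *role);

typedef struct OAuthValidatorCallbacks
{
	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

/*
 * Fails closed: every token is rejected until real validation logic is
 * added. malloc() stands in for the server's palloc() in this sketch.
 */
static ValidatorModuleResult *
skeleton_validate(ValidatorModuleState *state, const char *token,
				  const char *role)
{
	ValidatorModuleResult *res = malloc(sizeof(ValidatorModuleResult));

	(void) state;
	(void) token;
	(void) role;

	if (res == NULL)
		return NULL;			/* NULL signals an internal error */

	res->authorized = false;
	res->authn_id = NULL;
	return res;
}

/* startup_cb and shutdown_cb are optional and omitted here. */
static const OAuthValidatorCallbacks callbacks = {
	.validate_cb = skeleton_validate,
};

/* Static const storage gives the returned pointer server lifetime. */
const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &callbacks;
}
```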
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f7..ae4732df656 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index e5ce437a5c7..246210ad8a7 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,24 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+  endif
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3030,6 +3048,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3698,6 +3720,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index 38935196394..a3d49e2261c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support for OAuth client flows')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index eac3d001211..5771983af93 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..ad3161b9ce8
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,853 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
+ * it's pointed out in RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ *
+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)) != 0)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a syntactically valid token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that a validation library is loaded; this should always be the
+	 * case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	if (!ret->authorized)
+	{
+		status = false;
+		goto cleanup;
+	}
+
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c916060..0cf3e31c9fe 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 3104b871cf1..f35ba634662 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512c..c85527fb018 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad20..6f985e75824 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4813,6 +4814,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index a2ac7575ca7..f066d491614 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf65..22f6ab9f1d8 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82a..fb333a15782 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..d65b6b06396 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -663,6 +666,10 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support for OAuth client flows.
+   (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc7..5feec8738c5 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 5d8213e0b57..eb8f9d65a17 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -205,3 +205,6 @@ PQcancelFinish            202
 PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
+PQsetAuthDataHook         206
+PQgetAuthDataHook         207
+PQdefaultAuthDataHook     208
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..dade373b8c4
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2475 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+			|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+		{
+			oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+								  field->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Compare the prefix only; the full header value may continue past the
+	 * media type with optional parameters, which are handled below.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
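
The prefix-then-parameters rule implemented above can be exercised in isolation. A simplified standalone sketch (the function name `content_type_matches` is hypothetical, and POSIX strncasecmp() stands in for pg_strncasecmp()):

```c
#include <strings.h>			/* strncasecmp */
#include <string.h>

/*
 * Hypothetical standalone version of the check in check_content_type():
 * accept an exact media-type match, optionally followed by whitespace and a
 * semicolon introducing parameters. Returns 1 on match, 0 otherwise.
 */
static int
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);
	size_t		i;

	/* Length-limited comparison: the header may carry parameters. */
	if (strncasecmp(content_type, type, type_len) != 0)
		return 0;

	/* Exact match. */
	if (content_type[type_len] == '\0')
		return 1;

	/* Only optional whitespace and then a semicolon may follow. */
	for (i = type_len; content_type[i]; ++i)
	{
		if (content_type[i] == ';')
			return 1;
		if (content_type[i] != ' ' && content_type[i] != '\t')
			return 0;
	}

	return 0;
}
```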
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, and its grammar is
+	 * stricter than what sscanf()'s %lf accepts, so the conversion should
+	 * not fail.
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
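
The clamping behavior described in the comment above can be sketched without the JSON plumbing. A hypothetical standalone version (`clamp_interval` is not part of the patch; the cast-based rounding replaces ceil() to avoid a libm dependency, and large values are clamped before the cast):

```c
#include <limits.h>

/*
 * Hypothetical sketch of the clamping in parse_interval(): round fractional
 * intervals up to the next second, clamp the result to [1, INT_MAX], and let
 * debug mode lower the floor to zero.
 */
static int
clamp_interval(double interval, int debugging)
{
	int			seconds;

	/* Clamp very large values before casting, to avoid overflow. */
	if (interval >= (double) INT_MAX)
		return INT_MAX;

	/* Zero or negative intervals would cause a hot polling loop. */
	if (interval <= 0.0)
		return debugging ? 0 : 1;

	/* Round fractional intervals up to the next whole second. */
	seconds = (int) interval;
	if ((double) seconds < interval)
		seconds++;

	return seconds;
}
```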
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "no multiplexer implementation is available for this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
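
The millisecond-to-itimerspec conversion in the epoll branch above can be checked in isolation. A minimal sketch (the helper name `timeout_to_parts` is hypothetical):

```c
/*
 * Hypothetical helper mirroring the timeout conversion in set_timer(): split
 * a millisecond timeout into the seconds/nanoseconds pair expected by
 * timerfd_settime(). A zero timeout is bumped to one nanosecond, since a
 * fully zeroed itimerspec would disarm the timer instead of firing it
 * immediately.
 */
static void
timeout_to_parts(long timeout_ms, long *sec, long *nsec)
{
	if (timeout_ms == 0)
	{
		*sec = 0;
		*nsec = 1;
		return;
	}

	*sec = timeout_ms / 1000;
	*nsec = (timeout_ms % 1000) * 1000000;
}
```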
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves.
+	 *
+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
+	 * CURLOPT_SOCKOPTFUNCTION maybe...
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback to print libcurl's verbose debug output. The
+		 * callback only takes effect once CURLOPT_VERBOSE is enabled, so
+		 * install the callback first to avoid missing any output.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of any single
+ * chunk passed to this callback is defined by CURL_MAX_WRITE_SIZE, which
+ * defaults to 16kB (and can only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* If the accumulated response would exceed the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error, aborting the transfer, if we ran out of memory while
+	 * accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports the device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that use 403 for
+	 * error returns, in violation of the specification. For now we stick to
+	 * the spec, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The top-level, nonblocking entry point for the libcurl implementation. This
+ * will be called several times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* FALLTHROUGH */
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..cb290c5c113
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1005 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the token pointer will be ignored and the initial
+ * response will instead contain a request for the server's required OAuth
+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* We must have a token. */
+		if (!token)
+		{
+			/*
+			 * Either programmer error, or something went badly wrong during
+			 * the asynchronous fetch.
+			 *
+			 * TODO: users shouldn't see this; what action should they take if
+			 * they do?
+			 */
+			libpq_append_conn_error(conn,
+									"no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules state this must be
+	 * at the beginning of the path component, but OIDC defined it at the end
+	 * instead, so we have to search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of
+		 * the path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection. conn->oauth_want_retry will be set if the error status is
+ * suitable for a second attempt.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	/* TODO: what if these override what the user already specified? */
+	/* TODO: what if there's no discovery URI? */
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+		ctx.discovery_uri = NULL;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+		ctx.scope = NULL;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = strdup(request->token);
+		if (!state->token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+/*
+ * Cleanup callback for the user flow. Delegates most of its job to the
+ * user-provided cleanup implementation.
+ */
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PGoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			state->token = strdup(request.token);
+			if (!state->token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/*
+		 * Hand off to our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-libcurl");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->step = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * Without a discovery URI we can't request a token ourselves,
+				 * so we ask the server for one explicitly. This doesn't
+				 * require any asynchronous work.
+				 */
+				discover = true;
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, discover, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->step = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..f74aba80cea
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564f..b47011d077d 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d97..da168eb2f5d 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e943..6289b8b60a7 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -430,7 +432,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +450,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +587,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +673,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +703,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1025,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1194,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1211,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1541,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c5086882..1003fff042c 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index aaf87e8e885..c5811a362da 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -27,6 +27,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -365,6 +366,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -628,6 +646,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2645,6 +2664,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3680,6 +3700,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3835,6 +3856,19 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3868,7 +3902,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3905,6 +3949,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4586,6 +4665,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4703,6 +4783,12 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7227,6 +7313,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d1..e2ba483ea86 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c4..4e1376052c4 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -28,6 +28,10 @@ extern "C"
  */
 #include "postgres_ext.h"
 
+#ifdef WIN32
+#include <winsock2.h>			/* for SOCKET */
+#endif
+
 /*
  * These symbols may be used in compile-time #ifdef tests for the availability
  * of v14-and-newer libpq features.
@@ -59,6 +63,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +109,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +192,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -717,10 +732,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef WIN32
+#define SOCKTYPE SOCKET
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE *altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd4..20ad524c4dc 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -432,6 +432,16 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -506,6 +516,11 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callback for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn, pgsocket *altsock);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d18..dc6f3ecab89 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index aba7411a1be..d84743990a6 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index c829b619530..bd13e4afbd6 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks; its
+ *	  validator always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..70227a78150
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,162 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+
+static void
+usage(char *argv[])
+{
+	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	fprintf(stderr, "recognized flags:\n");
+	fprintf(stderr, " -h, --help				show this message\n");
+	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
+	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
+	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
+}
+
+static bool no_hook = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..a1347af4772
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,499 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+		"connect",
+		expected_stderr =>
+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check(
+		"user $user: validator receives correct parameters",
+		$log_start,
+		log_like => [
+			qr/oauth_validator: token="9243959234", role="$user"/,
+			qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		]);
+	$node->log_check(
+		"user $user: validator sets authenticated identity",
+		$log_start,
+		log_like =>
+		  [ qr/connection authenticated: identity="test" method=oauth/, ]);
+	$log_start = $log_end;
+}
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+		"connect",
+		expected_stderr =>
+		  qr@Visit https://example\.org/ and enter the code: postgresuser@))
+{
+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
+	$node->log_check(
+		"user $user: validator receives correct parameters",
+		$log_start,
+		log_like => [
+			qr/oauth_validator: token="9243959234-alt", role="$user"/,
+			qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
+		]);
+	$node->log_check(
+		"user $user: validator sets authenticated identity",
+		$log_start,
+		log_like =>
+		  [ qr/connection authenticated: identity="testalt" method=oauth/, ]);
+	$log_start = $log_end;
+}
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+if ($node->connect_fails(
+		"$common_connstr user=test",
+		"validator must set authn_id",
+		expected_stderr => qr/OAuth bearer authentication failed/))
+{
+	$log_end =
+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
+		$log_start);
+
+	$node->log_check(
+		"validator must set authn_id: breadcrumbs are logged",
+		$log_start,
+		log_like => [
+			qr/connection authenticated: identity=""/,
+			qr/DETAIL:\s+Validator provided no identity/,
+			qr/FATAL:\s+OAuth bearer authentication failed/,
+		]);
+
+	$log_start = $log_end;
+}
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+if ($node->connect_ok(
+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+		"validator is used for $user",
+		expected_stderr =>
+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
+{
+	$log_start = $node->wait_for_log(qr/connection authorized/, $log_start);
+}
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
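[Not part of the patch — an editorial illustration.] The "magic instruction" scheme implemented by connstr() above, and decoded by the mock server in t/oauth_server.py, is just JSON Base64-encoded into oauth_client_id. A minimal standalone Python sketch of both sides (names here are hypothetical, chosen to mirror the test helpers):

```python
import base64
import json

def encode_client_id(**params):
    """Mirrors the Perl connstr() helper: JSON, then Base64 with no newlines."""
    return base64.b64encode(json.dumps(params).encode()).decode()

def decode_client_id(client_id):
    """Mirrors the server side (do_POST): Base64-decode, then parse as JSON."""
    return json.loads(base64.b64decode(client_id))

# Round-trip the same kind of instruction the tests send:
encoded = encode_client_id(stage="token", retries=2)
assert decode_client_id(encoded) == {"stage": "token", "retries": 2}
```

Because the encoded string contains only Base64 characters, it needs no further escaping inside the connection string.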
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..90608e55656
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,118 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built using --with-libcurl/
+	);
+}
+
+done_testing();
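[Editorial aside, not part of the patch.] The retry tests above depend on the mock server remembering state across token requests for each client. oauth_server.py (below) does this with a defaultdict keyed by client_id, so a fresh _TokenState object appears the first time an unknown client is looked up; a short sketch of that behavior:

```python
from collections import defaultdict

class _TokenState:
    # Class attributes act as per-instance defaults.
    retries = 0
    min_delay = None
    last_try = None

# The factory (the class itself) is called on first access to a missing key.
token_state = defaultdict(_TokenState)

token_state["client-a"].retries += 1  # implicitly creates the entry
token_state["client-a"].retries += 1
assert token_state["client-a"].retries == 2
assert token_state["client-b"].retries == 0  # independent, freshly created
```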
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
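[Editorial aside, not part of the patch.] The port handshake that OAuth::Server->run() performs can be demonstrated outside the test suite: the child prints its port and closes stdout, so the parent reads to EOF instead of guessing a byte count, and the child keeps running afterwards. A standalone Python sketch (the port number 54321 is a stand-in for the real ephemeral port):

```python
import subprocess
import sys

# Child: advertise the "port", close fd 1 so the parent sees EOF, then keep
# running briefly, as the real daemon does.
child_src = (
    "import os, sys, time\n"
    "print(54321, flush=True)\n"
    "os.close(sys.stdout.fileno())\n"
    "time.sleep(0.2)\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", child_src], stdout=subprocess.PIPE, text=True
)
port = proc.stdout.read().strip()  # returns at EOF, before the child exits
proc.wait()
assert port == "54321"
```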
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..3f9d21aa4e7
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,388 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..748c179f666
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,121 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+static char *authn_id = NULL;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token;
+ * the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = true;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
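+
+/*
+ * For reference, an HBA setup exercising this module might look like the
+ * following (illustrative values only; the exact option spellings are
+ * assumptions based on this patch series, which exposes issuer and scope
+ * via MyProcPort->hba above):
+ *
+ *     local   all   all   oauth   issuer="https://example.com" scope="openid"
+ *
+ * with the module loaded through the server's OAuth validator library
+ * setting.
+ */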
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 508e5e3917a..8357272d678 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2513,6 +2513,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2556,7 +2561,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e4..362b20a94f7 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index ce33e55bf1d..c2541fdd500 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -369,6 +369,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1719,6 +1722,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1827,6 +1831,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1834,7 +1839,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1946,6 +1953,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3071,6 +3079,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3464,6 +3474,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3668,6 +3679,7 @@ nsphash_hash
 ntile_context
 nullingrel_info
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1
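
As an aside for reviewers: the retry logic in the mock token endpoint above enforces the RFC 8628 polling contract, in which the client must wait at least the advertised `interval` between token requests and keep retrying while the server reports a pending authorization (typically the `authorization_pending` error code). A rough client-side sketch of that loop, for orientation only (`poll_for_token` and `request_token` are illustrative names, with `request_token` standing in for the real HTTP POST to the token endpoint):

```python
import time


def poll_for_token(request_token, interval=5, max_attempts=10):
    """Polls a token endpoint until the device authorization completes.

    request_token is any callable returning the decoded JSON response;
    interval is the server-advertised minimum delay between requests.
    """
    for _ in range(max_attempts):
        resp = request_token()

        if resp.get("error") == "authorization_pending":
            time.sleep(interval)  # honor the server's minimum polling delay
            continue

        if "error" in resp:
            raise RuntimeError(resp["error"])

        return resp["access_token"]

    raise TimeoutError("device authorization did not complete in time")
```

The mock server asserts exactly this behavior by tracking `last_try` and `min_delay` across token requests.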

[Attachment: v39-0002-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)]
From 566d90d30a750c6beee3ce91d9689f13f34c95e3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v39 2/2] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  195 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2495 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 ++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6275 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index bb5b07db275..dbc83df82fc 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -319,6 +319,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -403,8 +404,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against the 32-bit libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 246210ad8a7..71a3d7d56f6 100644
--- a/meson.build
+++ b/meson.build
@@ -3361,6 +3361,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3527,6 +3530,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7fd..c7fce098eb1 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..9caa3a56d44
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
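An aside on the helpers above: Hi(str, salt, i) from RFC 5802 Section 2.2 is exactly PBKDF2 with HMAC-SHA-256 and a full-length output, so the implementation can be cross-checked against `hashlib.pbkdf2_hmac`. A standalone sketch using only the standard library (independent of the `cryptography` package used by the test file):

```python
import hashlib
import hmac as hmaclib


def hmac_256(key, data):
    # HMAC(key, str) from RFC 5802 Section 2.2, instantiated with SHA-256.
    return hmaclib.new(key, data, hashlib.sha256).digest()


def xor(a, b):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))


def h_i(data, salt, i):
    # Hi(str, salt, i): U1 = HMAC(str, salt || INT(1)), U_k = HMAC(str, U_{k-1}),
    # and Hi is the XOR of U1 .. Ui.
    assert i > 0
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc


# Hi() coincides with PBKDF2-HMAC-SHA-256 producing 32 bytes:
assert h_i(b"secret", b"12345", 4096) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 4096
)
```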
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq sends an empty username; the startup user is used
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..612fa2ac905
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2495 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the value may itself contain "="
+    assert key == b"auth"
+
+    return value
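For reviewers unfamiliar with the framing being asserted here: RFC 7628 Section 3.1 has the client send a GS2 header, then ^A-separated key=value pairs, terminated by a double ^A. A minimal, hypothetical builder/parser pair (names are illustrative, not part of the patch) makes the layout concrete:

```python
def build_initial_response(token):
    # RFC 7628 Sec. 3.1: gs2-header, then ^A-separated kvpairs, ending in ^A^A.
    # "n,," means no channel binding and no authzid.
    auth = b"Bearer " + token if token else b""
    return b"n,,\x01auth=" + auth + b"\x01\x01"


def parse_auth_value(initial):
    # Inverse of the above, mirroring the assertions in get_auth_value().
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,," and kvpairs[-2:] == [b"", b""]
    key, _, value = kvpairs[1].partition(b"=")
    assert key == b"auth"
    return value


msg = build_initial_response(b"sometoken")
assert msg == b"n,,\x01auth=Bearer sometoken\x01\x01"
assert parse_auth_value(msg) == b"Bearer sometoken"
```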
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider server thread did not shut down in time")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "urn:ietf:params:oauth:grant-type:device_code"
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
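The strictness enforced in do_POST() is worth spelling out: `parse_qs()` with `strict_parsing=True` rejects malformed fields, `keep_blank_values=True` preserves empty parameters, and `+` (not `%20`) is the form-encoded spelling of a space. A quick standalone illustration:

```python
from urllib.parse import parse_qs

# keep_blank_values preserves "d=" as an empty value; '+' decodes to a space.
params = parse_qs("a=b+c&d=", keep_blank_values=True, strict_parsing=True)
assert params == {"a": ["b c"], "d": [""]}

# strict_parsing raises on fields that aren't name=value pairs.
try:
    parse_qs("not-a-pair", strict_parsing=True)
except ValueError:
    pass
else:
    assert False, "expected ValueError"
```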
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to the
+    # interaction between Nagle's algorithm and the client's delayed ACKs.
+    # (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # result to return when no test impl is set
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, count the attempt
+            # and return an authorization_pending response until the retry
+            # budget is spent.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                fail_oauth_handshake(
+                    conn,
+                    {
+                        "status": "invalid_token",
+                        "openid-configuration": discovery_uri,
+                    },
+                )
+
+        # Expect the client to connect again.
+        sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
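The parametrization above mixes two published conventions for deriving a discovery URI from an issuer that has a path component: OIDC Discovery appends the well-known suffix to the issuer, while RFC 8414 inserts it between the host and the path. A hypothetical helper (not part of the patch) makes the difference concrete:

```python
from urllib.parse import urlsplit


def discovery_uris(issuer, suffix="openid-configuration"):
    # Returns (oidc_style, ietf_style) discovery URIs for an issuer identifier.
    # OIDC Discovery 4.1: append /.well-known/<suffix> after the issuer's path.
    # RFC 8414 3.1: insert /.well-known/<suffix> before the issuer's path.
    s = urlsplit(issuer)
    oidc = f"{s.scheme}://{s.netloc}{s.path}/.well-known/{suffix}"
    ietf = f"{s.scheme}://{s.netloc}/.well-known/{suffix}{s.path}"
    return oidc, ietf


oidc, ietf = discovery_uris("https://example.com/alt")
assert oidc == "https://example.com/alt/.well-known/openid-configuration"
assert ietf == "https://example.com/.well-known/openid-configuration/alt"
```

For an issuer with no path the two spellings collapse to the same URI, which is why the styles only diverge in the "with path" test cases.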
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+                pq3.send(
+                    conn,
+                    pq3.types.AuthnRequest,
+                    type=pq3.authn.SASLContinue,
+                    body=json.dumps(resp).encode("utf-8"),
+                )
+
+                # FIXME: the client disconnects at this point; it'd be nicer if
+                # it completed the exchange.
+
+            # The client should not reconnect.
+
+    else:
+        expect_disconnected_handshake(sock)
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL mechanism. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and bump the attempt count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very
+    efficient, but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network in the event that a
+# test fails anyway, an invalid IPv4 address (256.256.256.256) is used as the
+# hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id="some-id",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # Equivalent to INT_MAX in C's limits.h, assuming a 32-bit int.
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": to_http(openid_provider.discovery_uri),
+            }
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=json.dumps(resp).encode("utf-8"),
+            )
+
+            # FIXME: the client disconnects at this point; it'd be nicer if
+            # it completed the exchange.
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
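For readers unfamiliar with the framing that Pq3 models: a regular v3 message is a one-byte type followed by a big-endian int32 length that counts itself but not the type byte, which is why the builder above adds 4. A self-contained stdlib sketch of that framing (the helper name is illustrative, not part of the patch):

```python
import struct

def frame(msg_type: bytes, payload: bytes) -> bytes:
    # libpq v3 framing: one type byte, then a big-endian int32 length
    # that includes itself (hence the +4) but not the type byte.
    return msg_type + struct.pack("!i", len(payload) + 4) + payload

# A simple-query packet for "SELECT 1" (query text is NUL-terminated):
pkt = frame(b"Q", b"SELECT 1\x00")
assert pkt == b"Q\x00\x00\x00\x0dSELECT 1\x00"
```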
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    unprintable = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            unprintable += bytes([i])
+
+    unprintable += bytes(range(128, 256))
+
+    return bytes.maketrans(unprintable, b"." * len(unprintable))
+
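The translate-to-dots technique above can be tried in isolation; a minimal sketch (the helper name is illustrative):

```python
def dot_map():
    # Map unprintable ASCII and all non-ASCII bytes to '.', as the
    # hexdump code does, leaving printable ASCII untouched.
    bad = bytes(i for i in range(256) if i >= 128 or not chr(i).isprintable())
    return bytes.maketrans(bad, b"." * len(bad))

print(b"SELECT 1\x00\xff".translate(dot_map()))  # b'SELECT 1..'
```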
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. type is the pq3.types member
+    that should be assigned to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:  # also covers connection timeouts during the probe
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1" + "\n"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
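The size arithmetic in bearer_token() relies on base64url emitting four characters per three input bytes, with no padding when the byte count is a multiple of 3. A quick stdlib check of that assumption:

```python
import secrets

# token_urlsafe(n) yields roughly 4*n/3 base64url characters; when n is
# a multiple of 3 there is no padding, so requesting 3/4 of the target
# character count produces an exact-length token.
for chars in (16, 32, 64):
    assert len(secrets.token_urlsafe(chars // 4 * 3)) == chars
```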
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for the given user (oauth_ctx.authz_user by
+    default) and validates the server's SASL mechanism advertisement.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
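The bytes built above follow the OAUTHBEARER framing from RFC 7628: a gs2 header, then 0x01-separated key/value pairs, terminated by a double 0x01. A standalone sketch of the same framing (the function name is illustrative):

```python
def oauthbearer_client_first(token: str) -> bytes:
    kvsep = b"\x01"
    gs2_header = b"n,,"  # no channel binding, no authzid
    auth = b"auth=Bearer " + token.encode("ascii")
    return gs2_header + kvsep + auth + kvsep + kvsep

assert oauthbearer_client_first("t0ken") == b"n,,\x01auth=Bearer t0ken\x01\x01"
```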
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator with the expected
+    behavior. Any settings that are changed will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+
+            # Reload once, after all of the GUCs have been set.
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values, then reload once.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+        c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Fails an assertion if it doesn't
+        find exactly one such field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
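As an aside for reviewers: the b"n,,\x01auth=..." strings constructed throughout
these tests follow the kvsep-delimited initial client response format from RFC
7628 (a GS2 header, then key=value pairs separated by 0x01, terminated by a
double 0x01). A minimal standalone sketch of that framing, with helper names
that are illustrative only and not part of the patch:

```python
# Sketch of the OAUTHBEARER initial client response (RFC 7628): a GS2
# header ("n,," = no channel binding, no authzid), then key=value pairs
# separated by 0x01, ending with a double 0x01 terminator. These helper
# names are hypothetical; the patch builds the bytes inline instead.
KVSEP = b"\x01"


def build_initial_response(token: bytes) -> bytes:
    return b"n,," + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP


def parse_auth_value(msg: bytes) -> bytes:
    gs2, rest = msg.split(KVSEP, 1)
    assert gs2 == b"n,,"
    assert rest.endswith(KVSEP + KVSEP)

    pairs = [p for p in rest[:-2].split(KVSEP) if p]
    kv = dict(p.split(b"=", 1) for p in pairs)
    return kv[b"auth"]


msg = build_initial_response(b"abcd1234")
assert msg == b"n,,\x01auth=Bearer abcd1234\x01\x01"
assert parse_auth_value(msg) == b"Bearer abcd1234"
```

The malformed-message cases in test_oauth_bad_initial_response above are, in
essence, violations of each of the framing rules that this sketch checks.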
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1
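
The PG_TEST_EXTRA gating added to testwrap above follows a simple pattern: a test declares the extra token it needs, and the wrapper emits a TAP plan of `1..0` with a skip reason when the token is absent. A standalone sketch of that check (hypothetical helper name, not part of the patch itself):

```python
import os

def should_skip(required_extra, env=None, fallback=None):
    """Return a TAP skip line if PG_TEST_EXTRA lacks the required token,
    or None if the test may run. `fallback` mirrors --pg-test-extra,
    the build-time default used when the environment variable is unset."""
    env = os.environ if env is None else env
    extras = env.get("PG_TEST_EXTRA", fallback)
    # split() on whitespace avoids substring false positives
    # (e.g. "py" must not match "python").
    if extras is None or required_extra not in extras.split():
        return f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{required_extra}"'
    return None
```

Matching on whitespace-separated tokens, rather than a substring test, is what keeps one extra's name from accidentally enabling another.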

#169 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#167)
Re: [PoC] Federated Authn/z with OAUTHBEARER

> On 5 Dec 2024, at 19:29, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
>
> Seems good. I think this part of the API is going to need an
> ABI-compatibility pass, too. For example, do we want a module to
> allocate the result struct itself (which locks in the struct length)?
> And should we have a MAGIC_NUMBER of some sort in the static callback
> list, maybe?

I think we should, I just now experimented with setting the server major
version (backed by PG_VERSION_NUM) in the callback struct and added a simple
test. I'm not sure if there is a whole lot more we need, maybe an opaque
integer for the module to identify its version?

> --with-libcurl/-Dlibcurl, and the Autoconf side uses PKG_CHECK_MODULES
> exclusively.

Why only use PKG_CHECK_MODULES for this rather than treating it more like other
dependencies where we fall back on other methods if not found? While I'm
clearly not the target audience, I build libcurl all the time and being able to
point to a directory would be nice. There's also the curl-config utility which
should be in all packaged versions.

--
Daniel Gustafsson

#170 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#169)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, Dec 15, 2024 at 2:18 PM Daniel Gustafsson <daniel@yesql.se> wrote:
>
> I think we should, I just now experimented with setting the server major
> version (backed by PG_VERSION_NUM) in the callback struct and added a simple
> test. I'm not sure if there is a whole lot more we need, maybe an opaque
> integer for the module to identify its version?

I think PG_VERSION_NUM should be handled by the standard
PG_MODULE_MAGIC. But an integer for the validator version would be
good, I think.
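
The magic-number idea can be sketched abstractly: the module hands the server a callback table whose leading fields identify it, and the server refuses anything it does not recognize before touching any callback. The constants and field names below are invented for illustration only; the real interface is C and its shape is still under discussion.

```python
# Illustrative constants; the actual values and field names are not settled.
OAUTH_VALIDATOR_MAGIC = 0x0AE1
SERVER_API_VERSION = 1

def check_callbacks(cb):
    """Reject callback tables from non-validators or incompatible builds,
    returning the module's own (opaque) version for diagnostics."""
    if cb.get("magic") != OAUTH_VALIDATOR_MAGIC:
        raise ValueError("shared library is not an OAuth validator module")
    if cb.get("api_version") != SERVER_API_VERSION:
        raise ValueError(
            "validator targets API v%s, server provides v%s"
            % (cb.get("api_version"), SERVER_API_VERSION)
        )
    # The server never interprets this; it only logs it.
    return cb.get("module_version")
```

Having the server check a declared API version (separate from the opaque module version) is what lets incompatibility fail loudly at load time rather than as undefined behavior later.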

>> --with-libcurl/-Dlibcurl, and the Autoconf side uses PKG_CHECK_MODULES
>> exclusively.
>
> Why only use PKG_CHECK_MODULES for this rather than treating it more like other
> dependencies where we fall back on other methods if not found?

That was from Peter's request up in [1]. (I don't have strong opinions
on which to support, but I am vaguely persuaded by the idea of parity
between Meson and Autoconf.)

> While I'm
> clearly not the target audience, I build libcurl all the time and being able to
> point to a directory would be nice.

Doesn't the PKG_CONFIG_PATH envvar let you do that for Autoconf? Or,
if you're using Meson, -Dpkg_config_path? I was using the latter for
my local Curl builds.
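
Roughly, what the pkg-config-based lookup amounts to is the following (a hypothetical Python sketch, not actual build-system code): an extra `.pc` search path is prepended via PKG_CONFIG_PATH, so a locally built libcurl can still win over the system one.

```python
import os
import shutil
import subprocess

def libcurl_flags(extra_pc_path=None):
    """Query pkg-config for libcurl's compile/link flags, optionally
    prepending a custom .pc search path (the PKG_CONFIG_PATH mechanism).
    Returns (cflags, libs) or None if discovery fails."""
    env = dict(os.environ)
    if extra_pc_path:
        prev = env.get("PKG_CONFIG_PATH", "")
        env["PKG_CONFIG_PATH"] = os.pathsep.join(p for p in (extra_pc_path, prev) if p)
    if shutil.which("pkg-config") is None:
        return None  # a caller could fall back to other detection here
    def query(flag):
        return subprocess.run(
            ["pkg-config", flag, "libcurl"],
            env=env, capture_output=True, text=True, check=True,
        ).stdout.split()
    try:
        return query("--cflags"), query("--libs")
    except subprocess.CalledProcessError:
        return None  # libcurl.pc not found on the search path
```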

> There's also the curl-config utility which
> should be in all packaged versions.

Hmm, I wonder if Meson supports alternative names for pkg-config.
Though I guess the --version handling would be different between the
two?

--Jacob

[1]: /messages/by-id/6bde5f56-9e7a-4148-b81c-eb6532cb3651@eisentraut.org

#171 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#167)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Dec 5, 2024 at 10:29 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:
>
> 1. Duplicate fields caused the previous field values to leak. (This
> was the documented FIXME; we now error out in this case.)
> 2. The array-of-strings parsing had a subtle logic bug: if field "a"
> was expected to be an array of strings, we would also accept the
> construction `"a": "1"` as if it were equivalent to `"a": ["1"]`. This
> messed up the internal tracking and tripped assertions.

My "fix" for these in v38 included a silly mistake: the
grant_types_supported array could no longer contain more than one item
without being considered duplicated. :/ I've updated the tests to
exercise this case and fixed it in v40. Fuzzers are still happy so
far.
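
This class of bug is easy to reproduce in miniature: a lenient parser that promotes a bare string to a one-element array keeps incorrect internal state, and an overzealous duplicate check can misfire on arrays that legitimately hold several items (the v38 regression). A strict sketch in Python (illustrative only; the real parser is incremental C code over the provider's discovery document):

```python
def parse_fields(pairs, schema):
    """pairs: (name, value) tuples in document order; schema maps each
    expected field name to `str` or `list` (meaning array of strings)."""
    result = {}
    for name, value in pairs:
        if name not in schema:
            continue  # unknown fields are ignored, not errors
        if name in result:
            # Duplicates are rejected outright instead of leaking the
            # previously parsed value.
            raise ValueError(f"field {name!r} is duplicated")
        expected = schema[name]
        if expected is list:
            # Do NOT accept a bare string as a one-element array.
            if not (isinstance(value, list)
                    and all(isinstance(v, str) for v in value)):
                raise ValueError(f"field {name!r} must be an array of strings")
        elif not isinstance(value, str):
            raise ValueError(f"field {name!r} must be a string")
        result[name] = value
    return result
```

The key point is that "duplicate field" is tracked per field name, not per array element, so a multi-item `grant_types_supported` parses cleanly while `"a": "1"` in an array position fails fast.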

I also made a bad assumption about the return value of
connect_ok/fails() in the server log tests, so they weren't always
checking what they should have been. These have been rewritten
entirely (and IMO the tests are more readable as a result). Some
additional negative tests have been added to oauth_validator as well.

v40 also contains:
- explicit testing for connect_timeout compatibility
- support for require_auth=oauth, including compatibility with
require_auth=!scram-sha-256
- the ability for a validator to set authn_id even if the token is not
authorized, for auditability in the logs
- the use of pq_block_sigpipe() for additional safety in the face of
CURLOPT_NOSIGNAL
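
To sketch what the require_auth support looks like from the client
side (the host and OAuth settings below are placeholders):

```shell
# Fail the connection unless the server actually negotiates OAUTHBEARER:
psql 'host=example.org require_auth=oauth oauth_issuer=... oauth_client_id=...'

# Accept any method except SCRAM:
psql 'host=example.org require_auth=!scram-sha-256'
```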

I have split out the require_auth changes temporarily (0002) for ease
of review, and I plan to ping the last thread where SASL support in
require_auth was discussed [1].

Thanks!
--Jacob

[1]: /messages/by-id/ZB5jftra/n2TbdLx@paquier.xyz

Attachments:

since-v39.diff.txt (text/plain; charset=US-ASCII)
1:  3dc642d68c8 ! 1:  7ee8628abac Add OAUTHBEARER SASL mechanism
    @@ Commit message
         Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
         speaking to an OAuth-enabled server, it looks a bit like this:
     
    -        $ psql 'host=example.org oauth_client_id=f02c6361-0635-...'
    +        $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
             Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
     
         The OAuth issuer must support device authorization. No other OAuth flows
    -    are currently implemented (but clients may provide their own flows; see
    -    below).
    +    are currently implemented (but clients may provide their own flows).
     
         The client implementation requires libcurl and its development headers.
    -    Pass `curl` to --with-oauth/-Doauth during configuration. The server
    +    Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
         implementation does not require additional build-time dependencies, but
    -    an external validator module must be supplied (see below).
    +    an external validator module must be supplied.
     
         Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!
     
    -    = Debug Mode =
    -
    -    A "dangerous debugging mode" may be enabled in libpq, by setting the
    -    environment variable PGOAUTHDEBUG=UNSAFE. This will do several things
    -    that you will not want in a production system:
    -
    -    - permits the use of plaintext HTTP in the OAuth provider exchange
    -    - sprays HTTP traffic, containing several critical secrets, to stderr
    -    - permits the use of zero-second retry intervals, which can DoS the
    -      client
    -
    -    = PQauthDataHook =
    -
    -    Clients may override two pieces of OAuth handling using the new
    -    PQsetAuthDataHook():
    -
    -    - PQAUTHDATA_PROMPT_OAUTH_DEVICE: replaces the default user prompt to
    -      standard error when using the builtin device authorization flow
    -
    -    - PQAUTHDATA_OAUTH_BEARER_TOKEN: replaces the entire OAuth flow with a
    -      custom asynchronous implementation
    -
    -    In general, a hook implementation should examine the incoming `type` to
    -    decide whether or not to handle a specific piece of authdata; if not, it
    -    should delegate to the previous hook in the chain (retrievable via
    -    PQgetAuthDataHook()). Otherwise, it should return an integer > 0 and
    -    follow the authdata-specific instructions. Returning an integer < 0
    -    signals an error condition and abandons the connection attempt.
    -
    -    == PQAUTHDATA_PROMPT_OAUTH_DEVICE ==
    -
    -    The hook should display the device prompt (URL + code) using whatever
    -    method it prefers.
    -
    -    == PQAUTHDATA_OAUTH_BEARER_TOKEN ==
    -
    -    The hook should either directly return a Bearer token for the current
    -    user/issuer/scope combination, if one is available without blocking, or
    -    else set up an asynchronous callback to retrieve one. See the
    -    documentation for PQoauthBearerRequest.
    -
    -    = Server-Side Validation =
    -
    -    Because OAuth implementations vary so wildly, and bearer token
    -    validation is heavily dependent on the issuing party, authn/z is done by
    -    communicating with an external validator module using callbacks.
    -    The module is responsible for:
    -
    -    1. Validate the bearer token. The correct way to do this depends on the
    -       issuer, but it generally involves either cryptographic operations to
    -       prove that the token was issued by a trusted party, or the
    -       presentation of the bearer token to some other party so that _it_ can
    -       perform validation.
    -
    -       The command MUST maintain confidentiality of the bearer token, since
    -       in most cases it can be used just like a password. (There are ways to
    -       cryptographically bind tokens to client certificates, but they are
    -       way beyond the scope of this commit message.)
    -
    -       If the token cannot be validated, the authorized member of the
    -       ValidatorModuleResult struct is used to indicate failure.
    -       Further authentication/authorization is pointless if
    -       the bearer token wasn't issued by someone you trust.
    -
    -    3. Authenticate the user, authorize the user, or both:
    -
    -       a. To authenticate the user, use the bearer token to retrieve some
    -          trusted identifier string for the end user. The exact process for
    -          this is, again, issuer-dependent. The module wull return the
    -          authenticated identity in the authn_id member.
    -
    -       b. To optionally authorize the user, in combination with the HBA
    -          option trust_validator_authz=1 (see below).
    -
    -          The hard part is in determining whether the given token truly
    -          authorizes the client to use the given role, which must
    -          unfortunately be left as an exercise to the reader.
    -
    -          This obviously requires some care, as a poorly implemented token
    -          validator may silently open the entire database to anyone with a
    -          bearer token. But it may be a more portable approach, since OAuth
    -          is designed as an authorization framework, not an authentication
    -          framework. For example, the user's bearer token could carry an
    -          "allow_superuser_access" claim, which would authorize pseudonymous
    -          database access as any role. It's then up to the OAuth system
    -          administrators to ensure that allow_superuser_access is doled out
    -          only to the proper users.
    -
    -       c. It's possible that the user can be successfully authenticated but
    -          isn't authorized to connect. In this case, the validator module may
    -              return the authenticated ID and then fail with false authorized
    -              member.  (This can make it easier to see what's going on in the
    -              Postgres logs.)
    -
    -    = OAuth HBA Method =
    -
    -    The oauth method supports the following HBA options (but note that two
    -    of them are not optional, since we have no way of choosing sensible
    -    defaults):
    -
    -      issuer: Required. The URL of the OAuth issuing party, which the client
    -              must contact to receive a bearer token.
    -
    -              Some real-world examples as of time of writing:
    -              - https://accounts.google.com
    -              - https://login.microsoft.com/[tenant-id]/v2.0
    -
    -      scope:  Required. The OAuth scope(s) required for the server to
    -              authenticate and/or authorize the user. This is heavily
    -              deployment-specific, but a simple example is "openid email".
    -
    -      map:    Optional. Specify a standard PostgreSQL user map; this works
    -              the same as with other auth methods such as peer. If a map is
    -              not specified, the user ID returned by the token validator
    -              must exactly match the role that's being requested (but see
    -              trust_validator_authz, below).
    -
    -      trust_validator_authz:
    -              Optional. When set to 1, this allows the token validator to
    -              take full control of the authorization process. Standard user
    -              mapping is skipped: if the validator command succeeds, the
    -              client is allowed to connect under its desired role and no
    -              further checks are done.
    -
         Several TODOs:
    -    - don't retry forever if the server won't accept our token
         - perform several sanity checks on the OAuth issuer's responses
         - handle cases where the client has been set up with an issuer and
           scope, but the Postgres server wants to use something different
    @@ Commit message
         - fix libcurl initialization thread-safety
         - harden the libcurl flow implementation
         - figure out pgsocket/int difference on Windows
    -    - fix intermittent failure in the cleanup callback tests (race
    -      condition?)
    -    - support require_auth
         - fill in documentation stubs
         - support protocol "variants" implemented by major providers
         - implement more helpful handling of HBA misconfigurations
         - use logdetail during auth failures
    -    - allow passing the configured issuer to the oauth_validator_command, to
    -      deal with multi-issuer setups
    -    - fill in documentation stubs
         - ...and more.
     
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
    @@ src/backend/libpq/auth-oauth.c (new)
     + * The "credentials" construction is what we receive in our auth value.
     + *
     + * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
    -+ * header format; RFC 7235 Sec. 2), the "Bearer" scheme string must be
    -+ * compared case-insensitively. (This is not mentioned in RFC 6750, but
    -+ * it's pointed out in RFC 7628 Sec. 4.)
    ++ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
    ++ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
    ++ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
     + *
     + * Invalid formats are technically a protocol violation, but we shouldn't
     + * reflect any information about the sensitive Bearer token back to the
     + * client; log at COMMERROR instead.
    -+ *
    -+ * TODO: handle the Authorization spec, RFC 7235 Sec. 2.1.
     + */
     +static const char *
     +validate_token_format(const char *header)
    @@ src/backend/libpq/auth-oauth.c (new)
     +		return false;
     +	}
     +
    ++	/*
    ++	 * Log any authentication results even if the token isn't authorized; it
    ++	 * might be useful for auditing or troubleshooting.
    ++	 */
    ++	if (ret->authn_id)
    ++		set_authn_id(port, ret->authn_id);
    ++
     +	if (!ret->authorized)
     +	{
    ++		ereport(LOG,
    ++				errmsg("OAuth bearer authentication failed for user \"%s\"",
    ++					   port->user_name),
    ++				errdetail_log("Validator failed to authorize the provided token."));
    ++
     +		status = false;
     +		goto cleanup;
     +	}
     +
    -+	if (ret->authn_id)
    -+		set_authn_id(port, ret->authn_id);
    -+
     +	if (port->hba->oauth_skip_usermap)
     +	{
     +		/*
    @@ src/interfaces/libpq/Makefile: backend_src = $(top_srcdir)/src/backend
      endif
     
      ## src/interfaces/libpq/exports.txt ##
    -@@ src/interfaces/libpq/exports.txt: PQcancelFinish            202
    - PQsocketPoll              203
    +@@ src/interfaces/libpq/exports.txt: PQsocketPoll              203
      PQsetChunkedRowsMode      204
      PQgetCurrentTimeUSec      205
    -+PQsetAuthDataHook         206
    -+PQgetAuthDataHook         207
    -+PQdefaultAuthDataHook     208
    + PQservice                 206
    ++PQsetAuthDataHook         207
    ++PQgetAuthDataHook         208
    ++PQdefaultAuthDataHook     209
     
      ## src/interfaces/libpq/fe-auth-oauth-curl.c (new) ##
     @@
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +			++field;
     +		}
    ++
    ++		/*
    ++		 * We don't allow duplicate field names; error out if the target has
    ++		 * already been set.
    ++		 */
    ++		if (ctx->active)
    ++		{
    ++			field = ctx->active;
    ++
    ++			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
    ++				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
    ++			{
    ++				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
    ++									  field->name);
    ++				return JSON_SEM_ACTION_FAILED;
    ++			}
    ++		}
     +	}
     +
     +	return JSON_SUCCESS;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			return JSON_SEM_ACTION_FAILED;
     +		}
     +
    -+		/*
    -+		 * We don't allow duplicate field names; error out if the target has
    -+		 * already been set.
    -+		 */
    -+		if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
    -+			|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
    -+		{
    -+			oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
    -+								  field->name);
    -+			return JSON_SEM_ACTION_FAILED;
    -+		}
    -+
     +		if (field->type != JSON_TOKEN_ARRAY_START)
     +		{
     +			Assert(ctx->nested == 1);
    ++			Assert(!*field->target.scalar);
     +
     +			*field->target.scalar = strdup(token);
     +			if (!*field->target.scalar)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	/*
     +	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
    -+	 * to handle the possibility of SIGPIPE ourselves.
    -+	 *
    -+	 * TODO: handle SIGPIPE via pq_block_sigpipe(), or via a
    -+	 * CURLOPT_SOCKOPTFUNCTION maybe...
    ++	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
    ++	 * see pg_fe_run_oauth_flow().
     +	 */
     +	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
     +	if (!curl_info->ares_num)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +
     +/*
    -+ * The top-level, nonblocking entry point for the libcurl implementation. This
    -+ * will be called several times to pump the async engine.
    ++ * The core nonblocking libcurl implementation. This will be called several
    ++ * times to pump the async engine.
     + *
     + * The architecture is based on PQconnectPoll(). The first half drives the
     + * connection state forward as necessary, returning if we're not ready to
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
     + * provider.
     + */
    -+PostgresPollingStatusType
    -+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    ++static PostgresPollingStatusType
    ++pg_fe_run_oauth_flow_impl(PGconn *conn, pgsocket *altsock)
     +{
     +	fe_oauth_state *state = conn->sasl_state;
     +	struct async_ctx *actx;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	appendPQExpBufferStr(&conn->errorMessage, "\n");
     +
     +	return PGRES_POLLING_FAILED;
    ++}
    ++
    ++/*
    ++ * The top-level entry point. This is a convenient place to put necessary
    ++ * wrapper logic before handing off to the true implementation, above.
    ++ */
    ++PostgresPollingStatusType
    ++pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    ++{
    ++	PostgresPollingStatusType result;
    ++#ifndef WIN32
    ++	sigset_t	osigset;
    ++	bool		sigpipe_pending;
    ++	bool		masked;
    ++
    ++	/*---
    ++	 * Ignore SIGPIPE on this thread during all Curl processing.
    ++	 *
    ++	 * Because we support multiple threads, we have to set up libcurl with
    ++	 * CURLOPT_NOSIGNAL, which disables its default global handling of
    ++	 * SIGPIPE. From the Curl docs:
    ++	 *
    ++	 *     libcurl makes an effort to never cause such SIGPIPE signals to
    ++	 *     trigger, but some operating systems have no way to avoid them and
    ++	 *     even on those that have there are some corner cases when they may
    ++	 *     still happen, contrary to our desire.
    ++	 *
    ++	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
    ++	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
    ++	 * Modern platforms and libraries seem to get it right, so this is a
    ++	 * difficult corner case to exercise in practice, and unfortunately it's
    ++	 * not really clear whether it's necessary in all cases.
    ++	 */
    ++	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
    ++#endif
    ++
    ++	result = pg_fe_run_oauth_flow_impl(conn, altsock);
    ++
    ++#ifndef WIN32
    ++	if (masked)
    ++	{
    ++		/*
    ++		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
    ++		 * way of knowing at this level).
    ++		 */
    ++		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
    ++	}
    ++#endif
    ++
    ++	return result;
     +}
     
      ## src/interfaces/libpq/fe-auth-oauth.c (new) ##
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 */
     +	PostgresPollingStatusType (*async) (PGconn *conn,
     +										struct _PGoauthBearerRequest *request,
    -+										SOCKTYPE *altsock);
    ++										SOCKTYPE * altsock);
     +
     +	/*
     +	 * Callback to clean up custom allocations. A hook implementation may use
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +#include <stdio.h>
     +#include <stdlib.h>
     +
    ++#ifdef WIN32
    ++#include <winsock2.h>
    ++#else
    ++#include <sys/socket.h>
    ++#endif
    ++
     +#include "getopt_long.h"
     +#include "libpq-fe.h"
     +
     +static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
    ++static PostgresPollingStatusType async_cb(PGconn *conn,
    ++										  PGoauthBearerRequest *req,
    ++										  pgsocket *altsock);
     +
     +static void
     +usage(char *argv[])
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
     +	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
     +	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
    ++	fprintf(stderr, " --hang-forever			don't ever return a token (combine with connect_timeout)\n");
     +	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
     +}
     +
    ++/* --options */
     +static bool no_hook = false;
    ++static bool hang_forever = false;
     +static const char *expected_uri = NULL;
     +static const char *expected_scope = NULL;
     +static char *token = NULL;
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +		{"expected-uri", required_argument, NULL, 1001},
     +		{"no-hook", no_argument, NULL, 1002},
     +		{"token", required_argument, NULL, 1003},
    ++		{"hang-forever", no_argument, NULL, 1004},
     +		{0}
     +	};
     +
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +				token = optarg;
     +				break;
     +
    ++			case 1004:			/* --hang-forever */
    ++				hang_forever = true;
    ++				break;
    ++
     +			default:
     +				usage(argv);
     +				return 1;
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
     +		return 0;
     +
    ++	if (hang_forever)
    ++	{
    ++		/* Start asynchronous processing. */
    ++		req->async = async_cb;
    ++		return 1;
    ++	}
    ++
     +	if (expected_uri)
     +	{
     +		if (!req->openid_configuration)
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +
     +	req->token = token;
     +	return 1;
    ++}
    ++
    ++static PostgresPollingStatusType
    ++async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
    ++{
    ++	if (hang_forever)
    ++	{
    ++		/*
    ++		 * This code tests that nothing is interfering with libpq's handling
    ++		 * of connect_timeout.
    ++		 */
    ++		static pgsocket sock = PGINVALID_SOCKET;
    ++
    ++		if (sock == PGINVALID_SOCKET)
    ++		{
    ++			/* First call. Create an unbound socket to wait on. */
    ++#ifdef WIN32
    ++			WSADATA		wsaData;
    ++			int			err;
    ++
    ++			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
    ++			if (err)
    ++			{
    ++				perror("WSAStartup failed");
    ++				return PGRES_POLLING_FAILED;
    ++			}
    ++#endif
    ++			sock = socket(AF_INET, SOCK_DGRAM, 0);
    ++			if (sock == PGINVALID_SOCKET)
    ++			{
    ++				perror("failed to create datagram socket");
    ++				return PGRES_POLLING_FAILED;
    ++			}
    ++		}
    ++
    ++		/* Make libpq wait on the (unreadable) socket. */
    ++		*altsock = sock;
    ++		return PGRES_POLLING_READING;
    ++	}
    ++
    ++	req->token = token;
    ++	return PGRES_POLLING_OK;
     +}
     
      ## src/test/modules/oauth_validator/t/001_server.pl (new) ##
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +});
     +$node->reload;
     +
    -+my ($log_start, $log_end);
    -+$log_start = $node->wait_for_log(qr/reloading configuration files/);
    ++my $log_start = $node->wait_for_log(qr/reloading configuration files/);
     +
     +
     +# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$ENV{PGOAUTHDEBUG} = "UNSAFE";
     +
     +my $user = "test";
    -+if ($node->connect_ok(
    -+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    -+		"connect",
    -+		expected_stderr =>
    -+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
    -+{
    -+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+	$node->log_check(
    -+		"user $user: validator receives correct parameters",
    -+		$log_start,
    -+		log_like => [
    -+			qr/oauth_validator: token="9243959234", role="$user"/,
    -+			qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    -+		]);
    -+	$node->log_check(
    -+		"user $user: validator sets authenticated identity",
    -+		$log_start,
    -+		log_like =>
    -+		  [ qr/connection authenticated: identity="test" method=oauth/, ]);
    -+	$log_start = $log_end;
    -+}
    ++$node->connect_ok(
    ++	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    ++	"connect as test",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
    ++	log_like => [
    ++		qr/oauth_validator: token="9243959234", role="$user"/,
    ++		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
    ++		qr/connection authenticated: identity="test" method=oauth/,
    ++		qr/connection authorized/,
    ++	]);
     +
     +# The /alternate issuer uses slightly different parameters, along with an
     +# OAuth-style discovery document.
     +$user = "testalt";
    -+if ($node->connect_ok(
    -+		"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
    -+		"connect",
    -+		expected_stderr =>
    -+		  qr@Visit https://example\.org/ and enter the code: postgresuser@))
    -+{
    -+	$log_end = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+	$node->log_check(
    -+		"user $user: validator receives correct parameters",
    -+		$log_start,
    -+		log_like => [
    -+			qr/oauth_validator: token="9243959234-alt", role="$user"/,
    -+			qr|oauth_validator: issuer="\Q$issuer/alternate\E", scope="openid postgres alt"|,
    -+		]);
    -+	$node->log_check(
    -+		"user $user: validator sets authenticated identity",
    -+		$log_start,
    -+		log_like =>
    -+		  [ qr/connection authenticated: identity="testalt" method=oauth/, ]);
    -+	$log_start = $log_end;
    -+}
    ++$node->connect_ok(
    ++	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
    ++	"connect as testalt",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
    ++	log_like => [
    ++		qr/oauth_validator: token="9243959234-alt", role="$user"/,
    ++		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
    ++		qr/connection authenticated: identity="testalt" method=oauth/,
    ++		qr/connection authorized/,
    ++	]);
     +
     +# The issuer linked by the server must match the client's oauth_issuer setting.
     +$node->connect_fails(
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$common_connstr =
     +  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
     +
    ++# Misbehaving validators must fail shut.
     +$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
     +$node->reload;
     +$log_start =
     +  $node->wait_for_log(qr/reloading configuration files/, $log_start);
     +
    -+if ($node->connect_fails(
    -+		"$common_connstr user=test",
    -+		"validator must set authn_id",
    -+		expected_stderr => qr/OAuth bearer authentication failed/))
    -+{
    -+	$log_end =
    -+	  $node->wait_for_log(qr/FATAL:\s+OAuth bearer authentication failed/,
    -+		$log_start);
    -+
    -+	$node->log_check(
    -+		"validator must set authn_id: breadcrumbs are logged",
    -+		$log_start,
    -+		log_like => [
    -+			qr/connection authenticated: identity=""/,
    -+			qr/DETAIL:\s+Validator provided no identity/,
    -+			qr/FATAL:\s+OAuth bearer authentication failed/,
    -+		]);
    -+
    -+	$log_start = $log_end;
    -+}
    ++$node->connect_fails(
    ++	"$common_connstr user=test",
    ++	"validator must set authn_id",
    ++	expected_stderr => qr/OAuth bearer authentication failed/,
    ++	log_like => [
    ++		qr/connection authenticated: identity=""/,
    ++		qr/DETAIL:\s+Validator provided no identity/,
    ++		qr/FATAL:\s+OAuth bearer authentication failed/,
    ++	]);
    ++
    ++# Even if a validator authenticates the user, if the token isn't considered
    ++# valid, the connection fails.
    ++$bgconn->query_safe(
    ++	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
    ++$bgconn->query_safe(
    ++	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
    ++$node->reload;
    ++$log_start =
    ++  $node->wait_for_log(qr/reloading configuration files/, $log_start);
    ++
    ++$node->connect_fails(
    ++	"$common_connstr user=test",
    ++	"validator must authorize token explicitly",
    ++	expected_stderr => qr/OAuth bearer authentication failed/,
    ++	log_like => [
    ++		qr/connection authenticated: identity="test\@example\.org"/,
    ++		qr/DETAIL:\s+Validator failed to authorize the provided token/,
    ++		qr/FATAL:\s+OAuth bearer authentication failed/,
    ++	]);
     +
     +#
     +# Test user mapping.
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +
     +# To start, have the validator use the role names as authn IDs.
     +$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
    ++$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
     +
     +$node->reload;
     +$log_start =
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +
     +# The test user should work as before.
     +$user = "test";
    -+if ($node->connect_ok(
    -+		"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    -+		"validator is used for $user",
    -+		expected_stderr =>
    -+		  qr@Visit https://example\.com/ and enter the code: postgresuser@))
    -+{
    -+	$log_start = $node->wait_for_log(qr/connection authorized/, $log_start);
    -+}
    ++$node->connect_ok(
    ++	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
    ++	"validator is used for $user",
    ++	expected_stderr =>
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
    ++	log_like => [qr/connection authorized/]);
     +
     +# testalt should be routed through the fail_validator.
     +$user = "testalt";
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +	);
     +}
     +
    ++# connect_timeout should work if the flow doesn't respond.
    ++$common_connstr = "$common_connstr connect_timeout=1";
    ++test(
    ++	"connect_timeout interrupts hung client flow",
    ++	flags => ["--hang-forever"],
    ++	expected_stderr => qr/failed: timeout expired/);
    ++
     +done_testing();
     
      ## src/test/modules/oauth_validator/t/OAuth/Server.pm (new) ##
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +            "response_types_supported": ["token"],
     +            "subject_types_supported": ["public"],
     +            "id_token_signing_alg_values_supported": ["RS256"],
    -+            "grant_types_supported": ["urn:ietf:params:oauth:grant-type:device_code"],
    ++            "grant_types_supported": [
    ++                "authorization_code",
    ++                "urn:ietf:params:oauth:grant-type:device_code",
    ++            ],
     +        }
     +
     +    @property
    @@ src/test/modules/oauth_validator/validator.c (new)
     +	.validate_cb = validate_token
     +};
     +
    ++/* GUCs */
     +static char *authn_id = NULL;
    ++static bool authorize_tokens = true;
     +
     +/*---
     + * Extension entry point. Sets up GUCs for use by tests:
    @@ src/test/modules/oauth_validator/validator.c (new)
     + * - oauth_validator.authn_id	Sets the user identifier to return during token
     + *								validation. Defaults to the username in the
     + *								startup packet.
    ++ *
    ++ * - oauth_validator.authorize_tokens
    ++ *								Sets whether to successfully validate incoming
    ++ *								tokens. Defaults to true.
     + */
     +void
     +_PG_init(void)
    @@ src/test/modules/oauth_validator/validator.c (new)
     +							   PGC_SIGHUP,
     +							   0,
     +							   NULL, NULL, NULL);
    ++	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
    ++							 "Should tokens be marked valid?",
    ++							 NULL,
    ++							 &authorize_tokens,
    ++							 true,
    ++							 PGC_SIGHUP,
    ++							 0,
    ++							 NULL, NULL, NULL);
     +
     +	MarkGUCPrefixReserved("oauth_validator");
     +}
    @@ src/test/modules/oauth_validator/validator.c (new)
     +}
     +
     +/*
    -+ * Validator implementation. Logs the incoming data and authorizes the token;
    -+ * the behavior can be modified via the module's GUC settings.
    ++ * Validator implementation. Logs the incoming data and authorizes the token by
    ++ * default; the behavior can be modified via the module's GUC settings.
     + */
     +static ValidatorModuleResult *
     +validate_token(ValidatorModuleState *state, const char *token, const char *role)
    @@ src/test/modules/oauth_validator/validator.c (new)
     +		 MyProcPort->hba->oauth_issuer,
     +		 MyProcPort->hba->oauth_scope);
     +
    -+	res->authorized = true;
    ++	res->authorized = authorize_tokens;
     +	if (authn_id)
     +		res->authn_id = pstrdup(authn_id);
     +	else
-:  ----------- > 2:  de155343c81 squash! Add OAUTHBEARER SASL mechanism
2:  566d90d30a7 ! 3:  661de01c4ed DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +                "subject_types_supported": ["public"],
     +                "id_token_signing_alg_values_supported": ["RS256"],
     +                "grant_types_supported": [
    -+                    "urn:ietf:params:oauth:grant-type:device_code"
    ++                    "authorization_code",
    ++                    "urn:ietf:params:oauth:grant-type:device_code",
     +                ],
     +            }
     +
    @@ src/test/python/client/test_oauth.py (new)
     +                # that break the HTTP protocol. Just return and have the server
     +                # close the socket.
     +                return
    ++            except ssl.SSLError as err:
    ++                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
    ++                # TLS error handling, resulting in a bogus "[SYS] unknown error"
    ++                # on some platforms. Hopefully this is fixed in 2025's set of
    ++                # maintenance releases and this case can be removed.
    ++                #
    ++                #     https://github.com/python/cpython/issues/127257
    ++                #
    ++                if "[SYS] unknown error" in str(err):
    ++                    return
    ++                raise
     +
     +            super().shutdown_request(request)
     +
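For anyone reading along with the SASL exchange in fe-auth-oauth.c and
auth-oauth.c, it may help to see the shape of the OAUTHBEARER initial client
response defined by RFC 7628. This is an illustrative sketch only (the token
value is a placeholder, not output from the patch): a GS2 header, then
key/value pairs delimited by 0x01 bytes, terminated by a double delimiter.

```python
# Sketch of the OAUTHBEARER initial client response (RFC 7628, section 3.1).
# Not code from the patch; the bearer token below is a dummy value.
KVSEP = "\x01"

def build_client_first(token, authzid=None):
    """Assemble the client-first message: gs2-header, kvpairs, double kvsep."""
    gs2 = "n,%s," % ("a=" + authzid if authzid else "")
    return gs2 + KVSEP + "auth=Bearer " + token + KVSEP + KVSEP

msg = build_client_first("vF9dft4qmTc2Nvb3RlckBhbHRhdmlzdGEuY29tCg==")
# The server side splits on the 0x01 delimiter and pulls out the "auth"
# pair before handing the bearer token to the validator module.
```

A failed exchange is signaled by the server with a JSON error body and is
acknowledged by the client sending a single 0x01 byte, which is why the
server-side state machine in the patch has a distinct "error" step.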
v40-0001-Add-OAUTHBEARER-SASL-mechanism.patchapplication/x-patch; name=v40-0001-Add-OAUTHBEARER-SASL-mechanism.patchDownload
From 7ee8628abac6e218e48c13e24116168ee445bc99 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v40 1/3] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   17 +-
 config/programs.m4                            |    2 -
 configure                                     |  213 ++
 configure.ac                                  |   32 +
 doc/src/sgml/client-auth.sgml                 |  250 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  363 +++
 doc/src/sgml/oauth-validators.sgml            |  388 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   23 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   26 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |   17 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    7 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2529 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1005 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   46 +
 src/interfaces/libpq/fe-auth-sasl.h           |   10 +-
 src/interfaces/libpq/fe-auth-scram.c          |    6 +-
 src/interfaces/libpq/fe-auth.c                |  104 +-
 src/interfaces/libpq/fe-auth.h                |    6 +-
 src/interfaces/libpq/fe-connect.c             |   90 +-
 src/interfaces/libpq/fe-misc.c                |    7 +-
 src/interfaces/libpq/libpq-fe.h               |   88 +
 src/interfaces/libpq/libpq-int.h              |   15 +
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  228 ++
 .../modules/oauth_validator/t/001_server.pl   |  487 ++++
 .../modules/oauth_validator/t/002_client.pl   |  125 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   12 +
 59 files changed, 8020 insertions(+), 60 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89d..bb5b07db275 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -175,6 +175,7 @@ task:
         --buildtype=debug \
         -Dcassert=true -Dinjection_points=true \
         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
+        -Dlibcurl=enabled \
         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
         build
     EOF
@@ -219,6 +220,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -234,6 +236,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-zstd
 
 LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
+  -Dlibcurl=enabled
   -Dllvm=enabled
   -Duuid=e2fs
 
@@ -312,8 +315,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +694,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 490ec9fe9f5..d4ff8c82afc 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -142,8 +142,6 @@ if test "$pgac_cv_ldap_safe" != yes; then
 *** also uses LDAP will crash on exit.])
 fi])
 
-
-
 # PGAC_CHECK_READLINE
 # -------------------
 # Check for the readline library and dependent libraries, either
diff --git a/configure b/configure
index 518c33b73a9..0e812880c20 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support for OAuth client flows
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,144 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support for OAuth client flows" >&5
+$as_echo_n "checking whether to build with libcurl support for OAuth client flows... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12207,6 +12356,59 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
+fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13955,6 +14157,17 @@ fi
 
 done
 
+fi
+
+if test "$with_libcurl" = yes; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index 247ae97fa4c..4850a543292 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,27 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support for OAuth client flows])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support for OAuth client flows],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support for OAuth client flows. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1315,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1588,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_libcurl" = yes; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..7ce02481eea 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,240 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it's obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        The issuer identifier of the authorization server, as defined by its
+        discovery document, or a well-known URI pointing to that discovery
+        document. This parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a discovery document URI
+        will be constructed using the issuer identifier. By default, the URI
+        uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, the URI will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index fbdd6ce5740..0c83eb2f49c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index ebdb5b3bc2d..3fca2910dad 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1141,6 +1141,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2582,6 +2595,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 105b22b3171..93266c4a0d0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2345,6 +2345,97 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of an issuer to contact if the server requests an OAuth
+        token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URI
+        providing a set of OAuth configuration parameters. The server must
+        provide a URI that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        This standard handshake requires two separate network connections to the
+        server per authentication attempt. To skip asking the server for a
+        discovery document URI, you may set <literal>oauth_issuer</literal> to a
+        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
+        case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9992,6 +10083,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements support for the OAuth 2.0
+   Device Authorization client flow, and provides hooks that allow
+   applications to customize or replace parts of that support. This section
+   describes the client-side OAuth APIs.
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URI */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG</envar> to <literal>UNSAFE</literal>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..3e17805e53f
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,388 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    A validator implementation is responsible for the following tasks:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not offer introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         <application>libpq</application> does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Validator implementations should also follow these general guidelines:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in hung sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but note that implementers should consider negative
+       testing to be mandatory. It's trivial to design a module that lets
+       authorized users in; the whole point of the system is to keep
+       unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is dynamically loaded from one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
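As an editorial aside, the registration pattern above can be sketched as a minimal, self-contained C file. The typedefs are copied from the documentation's own listing; a real module would get them from the server headers rather than redeclaring them, and the stub validator here is purely illustrative:

```c
#include <stddef.h>

/* Mirrors of the documented types; a real module includes the server
 * headers instead of redeclaring these. */
typedef struct ValidatorModuleState ValidatorModuleState;
typedef struct ValidatorModuleResult ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
													   const char *token,
													   const char *role);

typedef struct OAuthValidatorCallbacks
{
	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

/* Placeholder: a real module validates the token here. Returning NULL
 * signals an internal error to the server. */
static ValidatorModuleResult *
stub_validate(ValidatorModuleState *state, const char *token, const char *role)
{
	(void) state;
	(void) token;
	(void) role;
	return NULL;
}

/* Server lifetime for the returned pointer is achieved by making the
 * callbacks struct a static const variable at global scope. */
static const OAuthValidatorCallbacks validator_callbacks = {
	.startup_cb = NULL,				/* optional */
	.shutdown_cb = NULL,			/* optional */
	.validate_cb = stub_validate,	/* required */
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &validator_callbacks;
}
```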
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
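A minimal sketch of paired startup and shutdown callbacks using `private_data`, under the assumption that only the documented `private_data` field matters here (the local `ValidatorModuleState` declaration is a stand-in for the server's definition, malloc/free stand in for the server allocator, and `MyModuleData` is a hypothetical state struct):

```c
#include <stdlib.h>

/* Stand-in for the server's definition; only the documented
 * private_data field is mirrored here. */
typedef struct ValidatorModuleState
{
	void	   *private_data;
} ValidatorModuleState;

/* Hypothetical per-module state. */
typedef struct MyModuleData
{
	int			cache_hits;		/* illustrative field */
} MyModuleData;

/* startup_cb: allocate module state and stash it in private_data. */
static void
example_startup(ValidatorModuleState *state)
{
	state->private_data = calloc(1, sizeof(MyModuleData));
}

/* shutdown_cb: free whatever startup allocated, avoiding a leak. */
static void
example_shutdown(ValidatorModuleState *state)
{
	free(state->private_data);
	state->private_data = NULL;
}
```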
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <structname>ValidatorModuleResult</structname> struct,
+    which is defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f7..ae4732df656 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index e5ce437a5c7..246210ad8a7 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,24 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+  endif
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3030,6 +3048,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3698,6 +3720,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index 38935196394..a3d49e2261c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support for OAuth client flows')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index eac3d001211..5771983af93 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..a4d0a8a1bb5
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+typedef enum
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+} oauth_state;
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
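As an illustration of the client-resp layout described in the ABNF comment above (gs2-header, kvsep-delimited kvpairs, an empty kvpair as terminator), here is a standalone sketch of the same scan that `parse_kvpairs_for_auth()` performs, minus the protocol-violation reporting. The helper name is hypothetical and the error handling is collapsed to NULL returns:

```c
#include <stddef.h>
#include <string.h>

#define KVSEP '\x01'

/* Scans kvpairs (each "key=value" terminated by the kvsep byte, with an
 * empty kvpair ending the list) starting at pos, and returns a pointer
 * to the value of the first "auth" pair, or NULL if none is found or
 * the message is malformed. The buffer is modified in place, as in the
 * server code. */
char *
find_auth_value(char *pos)
{
	while (*pos)
	{
		char	   *end = strchr(pos, KVSEP);
		char	   *sep;

		if (end == NULL)
			return NULL;		/* unterminated kvpair */
		*end = '\0';

		if (pos == end)
			return NULL;		/* empty kvpair: end of list, no auth seen */

		sep = strchr(pos, '=');
		if (sep == NULL)
			return NULL;		/* key without a value */
		*sep = '\0';

		if (strcmp(pos, "auth") == 0)
			return sep + 1;

		pos = end + 1;			/* move to the next pair */
	}
	return NULL;
}
```

For example, given the client initial response `"n,," "\x01" "auth=Bearer abc.def" "\x01" "\x01"`, skipping the four bytes of gs2-header plus kvsep and calling `find_auth_value()` yields the string `Bearer abc.def`.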
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
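For reference, given a hypothetical issuer of `https://example.com` and a scope of `openid`, the failure response assembled above would read:

```
{ "status": "invalid_token", "openid-configuration": "https://example.com/.well-known/openid-configuration", "scope": "openid" }
```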
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
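For context, here is a hedged sketch of how the configuration pieces checked above might fit together. The library and issuer names are illustrative, not taken from the patch:

```
# postgresql.conf: every validator library that may be used must be listed here
oauth_validator_libraries = 'my_validator, other_validator'

# pg_hba.conf: when multiple libraries are configured, each oauth line must
# select one with the validator option; with exactly one library configured,
# the option may be omitted and that library is used implicitly.
host  all  all  samehost  oauth  issuer="https://oauth.example.org" scope="openid" validator="my_validator"
```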
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 47e8c916060..0cf3e31c9fe 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
@@ -305,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -340,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -627,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 3104b871cf1..f35ba634662 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 7c65314512c..c85527fb018 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2024, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 8cf1afbad20..6f985e75824 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4813,6 +4814,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index a2ac7575ca7..f066d491614 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf65..22f6ab9f1d8 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
@@ -23,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index 8ea837ae82a..fb333a15782 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..d65b6b06396 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -663,6 +666,10 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support for OAuth client flows.
+   (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index c1bf33dbdc7..5feec8738c5 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..21dc366c7ad
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2529 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
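A typical Device Authorization response that these fields are parsed from, per RFC 8628 section 3.2, might look like the following (values are illustrative):

```json
{
  "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
  "user_code": "WDJB-MJHT",
  "verification_uri": "https://example.com/device",
  "expires_in": 1800,
  "interval": 5
}
```

Note that `interval` arrives as a JSON number; it is cached as a string (`interval_str`) and parsed into `interval` separately.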
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
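The three-part error layout described in the comment above can be sketched as follows. This is illustrative only: the real code assembles the parts into the connection's error buffer through libpq internals, and translates errctx.

```c
#include <stdio.h>
#include <string.h>

/*
 * Combine errctx, errbuf, and curl_err in the order documented above,
 * omitting errctx and/or curl_err when absent. The output corresponds to
 * the "errctx: errbuf (curl_err)" tail of the final connection error.
 */
static void
compose_error(char *out, size_t outlen,
			  const char *errctx, const char *errbuf, const char *curl_err)
{
	out[0] = '\0';
	if (errctx)
		snprintf(out + strlen(out), outlen - strlen(out), "%s: ", errctx);
	snprintf(out + strlen(out), outlen - strlen(out), "%s", errbuf);
	if (curl_err && curl_err[0])
		snprintf(out + strlen(out), outlen - strlen(out), " (%s)", curl_err);
}
```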
+
+/*
+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
+ * during pqDropConnection() so that we don't leak resources even if
+ * PQconnectPoll() never calls us back.
+ *
+ * TODO: we should probably call this at the end of a successful authentication,
+ * too, to proactively free up resources.
+ */
+static void
+free_curl_async_ctx(PGconn *conn, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
+
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
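To make the json_field interface concrete, here is a self-contained sketch of a field table for the Device Authorization response, plus the name-lookup loop used when a top-level key is encountered. It is simplified relative to the patch: the stand-in struct drops the union (so string arrays are not represented), and the type enum is local; field names follow RFC 8628.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-ins so this sketch compiles on its own. */
typedef enum
{
	TOK_STRING,
	TOK_NUMBER,
	TOK_ARRAY_START,
} TokenType;

struct json_field
{
	const char *name;			/* name (key) of the member */
	TokenType	type;			/* expected value type */
	char	  **target;			/* where the parsed string is stored */
	bool		required;		/* REQUIRED field, or just OPTIONAL? */
};

static char *device_code,
		   *user_code,
		   *verification_uri,
		   *interval_str;

/* Illustrative field table, in the style of the arrays described above. */
static const struct json_field device_authz_fields[] = {
	{"device_code", TOK_STRING, &device_code, true},
	{"user_code", TOK_STRING, &user_code, true},
	{"verification_uri", TOK_STRING, &verification_uri, true},
	{"interval", TOK_NUMBER, &interval_str, false},
	{NULL}
};

/* Field lookup, mirroring the loop in oauth_json_object_field_start(). */
static const struct json_field *
find_field(const struct json_field *fields, const char *name)
{
	for (; fields->name; fields++)
	{
		if (strcmp(name, fields->name) == 0)
			return fields;
	}
	return NULL;				/* unrecognized keys are simply ignored */
}
```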
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need a length-limited comparison rather than a comparison of the
+	 * whole string, since media type parameters may follow the type itself.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * set it before enabling verbose mode.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which is 16kB by default (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error to abort the transfer if we ran out of memory while
+	 * accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
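As a standalone illustration (not part of the patch), the `%20`-to-`'+'` substitution that append_urlencoded() performs after curl_easy_escape() can be sketched without libcurl or PQExpBuffer; `plusify` is a hypothetical helper name:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical sketch of the search-and-replace in append_urlencoded():
 * rewrite the %20 escapes emitted by curl_easy_escape() into the '+' form
 * used by application/x-www-form-urlencoded. Returns a malloc'd string, or
 * NULL on allocation failure.
 */
static char *
plusify(const char *escaped)
{
	char	   *out = malloc(strlen(escaped) + 1);	/* result never grows */
	char	   *p = out;

	if (!out)
		return NULL;

	while (*escaped)
	{
		if (strncmp(escaped, "%20", 3) == 0)
		{
			*p++ = '+';
			escaped += 3;
		}
		else
			*p++ = *escaped++;
	}
	*p = '\0';

	return out;
}
```

For example, plusify("a%20b%20c") yields "a+b+c", and strings without %20 pass through unchanged.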
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact, and to make
+ * sure the provider supports the device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides a device authorization
+ * endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
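For reference, a successful device authorization response has this shape (values paraphrased from the example in RFC 8628, Sec. 3.2); parse_device_authz() pulls device_code, user_code, verification_uri, and interval out of it:

```json
{
  "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
  "user_code": "WDJB-MJHT",
  "verification_uri": "https://example.com/device",
  "expires_in": 1800,
  "interval": 5
}
```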
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
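When no client secret is configured (so add_client_identification() puts client_id in the body), the resulting poll request has the same shape as the example in RFC 8628, Sec. 3.4; the values below are illustrative, borrowed from the RFC:

```
POST /token HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded

device_code=GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS
&grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code
&client_id=1406020730
```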
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that use 403 instead,
+	 * in violation of the specification. For now we stick to the spec, but
+	 * we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased, if the server sent a
+ * slow_down error) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
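The slow_down handling above can be sketched in isolation like so; this hypothetical `bump_interval` uses an explicit INT_MAX guard rather than the wraparound check in handle_token_response(), but implements the same RFC 8628, Sec. 3.5 rule of permanently adding five seconds:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of slow_down handling: the client must permanently
 * add five seconds to its polling interval. Returns false if the new
 * interval can no longer be represented.
 */
static bool
bump_interval(int *interval)
{
	if (*interval > INT_MAX - 5)
		return false;			/* would overflow */

	*interval += 5;
	return true;
}
```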
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+		state->free_async_ctx = free_curl_async_ctx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		*altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+				}
+
+				/* fall through */
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				*altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn, altsock);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..cb290c5c113
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1005 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	free(state->token);
+	if (state->async_ctx)
+		state->free_async_ctx(state->conn, state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the token pointer will be ignored and the initial
+ * response will instead contain a request for the server's required OAuth
+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* We must have a token. */
+		if (!token)
+		{
+			/*
+			 * Either programmer error, or something went badly wrong during
+			 * the asynchronous fetch.
+			 *
+			 * TODO: users shouldn't see this; what action should they take if
+			 * they do?
+			 */
+			libpq_append_conn_error(conn,
+									"no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
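The message layout built here can be sketched standalone; this hypothetical `initial_response` helper shows the GS2 header "n,," followed by the auth key/value, delimited by 0x01 bytes per RFC 7628, Sec. 3.1:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define KVSEP "\x01"

/*
 * Hypothetical sketch of the OAUTHBEARER client initial response layout
 * produced by client_initial_response() for the non-discovery case.
 * Returns the length written, per snprintf().
 */
static int
initial_response(char *buf, size_t buflen, const char *token)
{
	/* "Bearer " includes the scheme's trailing-space separator */
	return snprintf(buf, buflen, "n,," KVSEP "auth=Bearer %s" KVSEP KVSEP,
					token);
}
```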
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules state this must be
+	 * at the beginning of the path component, but OIDC defined it at the end
+	 * instead, so we have to search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection. conn->oauth_want_retry will be set if the error status is
+ * suitable for a second attempt.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	/* TODO: what if these override what the user already specified? */
+	/* TODO: what if there's no discovery URI? */
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+		ctx.discovery_uri = NULL;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+		ctx.scope = NULL;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = strdup(request->token);
+		if (!state->token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+/*
+ * Cleanup callback for the user flow. Delegates most of its job to the
+ * user-provided cleanup implementation.
+ */
+static void
+free_request(PGconn *conn, void *vreq)
+{
+	PGoauthBearerRequest *request = vreq;
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+
+	free(request);
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			state->token = strdup(request.token);
+			if (!state->token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		state->async_ctx = request_copy;
+		state->free_async_ctx = free_request;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/*
+		 * Hand off to our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-libcurl");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->step = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly. This doesn't
+				 * require any asynchronous work.
+				 */
+				discover = true;
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, discover, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->step = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..f74aba80cea
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564f..b47011d077d 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d97..da168eb2f5d 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e943..6289b8b60a7 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -430,7 +432,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +450,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -578,26 +587,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +673,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +703,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1025,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1194,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1164,7 +1211,8 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			}
 			oldmsglen = conn->errorMessage.len;
 			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
+								 (areq == AUTH_REQ_SASL_FIN),
+								 async) != STATUS_OK)
 			{
 				/* Use this message if pg_SASL_continue didn't supply one */
 				if (conn->errorMessage.len == oldmsglen)
@@ -1493,3 +1541,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c5086882..1003fff042c 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,8 +18,12 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index ddcc7b60ab0..28a26a1d362 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -27,6 +27,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -366,6 +367,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -629,6 +647,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -2646,6 +2665,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3681,6 +3701,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3836,6 +3857,19 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -3869,7 +3903,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3906,6 +3950,41 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
+														 * this? */
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4587,6 +4666,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -4704,6 +4784,12 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -7236,6 +7322,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d1..e2ba483ea86 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 5947e7c766f..6764e1ea882 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -28,6 +28,10 @@ extern "C"
  */
 #include "postgres_ext.h"
 
+#ifdef WIN32
+#include <winsock2.h>			/* for SOCKET */
+#endif
+
 /*
  * These symbols may be used in compile-time #ifdef tests for the availability
  * of v14-and-newer libpq features.
@@ -59,6 +63,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -103,6 +109,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
@@ -184,6 +192,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -718,10 +733,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef WIN32
+#define SOCKTYPE SOCKET
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index dcebca98988..6e6ab2c1f51 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -433,6 +433,16 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -507,6 +517,10 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callback for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn, pgsocket *altsock);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index ed2a4048d18..dc6f3ecab89 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index aba7411a1be..d84743990a6 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index c829b619530..bd13e4afbd6 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations; most tests run end-to-end and exercise both sides at once.
+The tests in t/001_server use a mock OAuth authorization server, implemented
+jointly by t/OAuth/Server.pm and t/oauth_server.py, to drive the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
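As context for the retry/interval/slow_down cases exercised by t/001_server.pl, here is a minimal sketch of the RFC 8628 device-flow polling loop that the mock authorization server drives. All names are illustrative (not part of the patch), `fetch_token` stands in for a real POST to the issuer's token endpoint, and a real client would sleep `interval` seconds between polls, which this sketch omits:

```python
def poll_for_token(fetch_token, interval=5, max_attempts=10):
    """Poll the token endpoint, honoring authorization_pending/slow_down.

    fetch_token() stands in for an HTTP POST to the token endpoint and
    returns the decoded JSON response as a dict.
    """
    for _ in range(max_attempts):
        resp = fetch_token()
        if "access_token" in resp:
            return resp["access_token"], interval
        err = resp.get("error")
        if err == "authorization_pending":
            continue                  # user hasn't finished; poll again
        if err == "slow_down":
            interval += 5             # RFC 8628 mandates a 5-second increase
            continue
        raise RuntimeError(f"token endpoint error: {err}")
    raise TimeoutError("user never completed authorization")

# Simulated endpoint: one "pending" response, one slow_down, then success.
responses = iter([
    {"error": "authorization_pending"},
    {"error": "slow_down"},
    {"access_token": "9243959234"},
])
token, final_interval = poll_for_token(lambda: next(responses))
```

The `retry_code => "slow_down"` test above this point in the thread checks exactly the interval-increase branch, including overflow handling.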
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks; this
+ *	  validator always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..ada7789ad8d
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,228 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifdef WIN32
+#include <winsock2.h>
+#else
+#include <sys/socket.h>
+#endif
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	fprintf(stderr, "recognized flags:\n");
+	fprintf(stderr, " -h, --help				show this message\n");
+	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
+	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
+	fprintf(stderr, " --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				fprintf(stderr, "WSAStartup failed: %d\n", err);
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..10d2d3da929
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,487 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
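As context for the interval tests above: RFC 8628 requires a device-flow client to add five seconds to its polling interval whenever the token endpoint answers with a `slow_down` error, and the "interval overflow" case checks that the client fails cleanly when that addition can no longer be represented. A minimal sketch of the bookkeeping (a hypothetical helper for illustration, not part of the patch):

```python
def next_poll_interval(current, error=None, max_interval=2**31 - 1):
    """Return the next device-flow polling interval, in seconds.

    Per RFC 8628, a "slow_down" error means the client must add five
    seconds to its current interval; any other outcome leaves it
    unchanged. Raises OverflowError when the new interval exceeds
    max_interval, mirroring the "slow_down interval overflow" failure
    exercised in the tests above. (Illustrative only.)
    """
    if error == "slow_down":
        current += 5
        if current > max_interval:
            raise OverflowError("slow_down interval overflow")
    return current
```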
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..d746acf323c
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,125 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if (($ENV{with_libcurl} // '') ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built using --with-libcurl/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
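The parent/child handshake used by run() above deserves a note: the daemon prints its port and then closes stdout, so the parent can simply read to EOF instead of guessing a byte count. A sketch of the same pattern (a hypothetical helper for illustration, not part of the patch):

```python
import subprocess
import sys

def spawn_and_read_port(argv):
    """Start a daemon that prints its listening port and then closes its
    stdout. Reading to EOF is safe precisely because the child closes the
    stream once the port has been written, so no byte count is needed.
    This mirrors the handshake OAuth::Server->run() performs.
    (Illustrative only.)"""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)
    port = proc.stdout.read().strip()  # EOF arrives once stdout is closed
    if not port.isdigit():
        raise RuntimeError("server did not advertise a valid port")
    return proc, int(port)
```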
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..8ec09102027
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
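For reference, the Basic credentials that _check_authn() verifies are built as RFC 6749 section 2.3.1 describes: the client identifier and secret are each form-urlencoded, joined with a colon, and Base64-encoded. A client-side sketch of the inverse operation (a hypothetical helper for illustration, not part of the patch):

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Build the Authorization header value for an OAuth token request.

    Per RFC 6749 section 2.3.1, both the client identifier and the
    secret are form-urlencoded before being joined with a colon and
    Base64-encoded; this is the inverse of the check performed by
    OAuthHandler._check_authn() above. (Illustrative only.)
    """
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(client_secret)
    creds = f"{user}:{password}".encode("ascii")
    return "Basic " + base64.b64encode(creds).decode("ascii")
```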
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..bf94f091def
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 508e5e3917a..8357272d678 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2513,6 +2513,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2556,7 +2561,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index e889af6b1e4..362b20a94f7 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -235,6 +235,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -249,6 +257,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index fbdb932e6b6..7dbc052c65f 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -369,6 +369,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1719,6 +1722,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1827,6 +1831,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1834,7 +1839,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1946,6 +1953,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3077,6 +3085,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3470,6 +3480,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
@@ -3674,6 +3685,7 @@ nsphash_hash
 ntile_context
 nullingrel_info
 numeric
+oauth_state
 object_access_hook_type
 object_access_hook_type_str
 off_t
-- 
2.34.1

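For readers skimming the patch, the test validator's decision logic (the `validate_token` callback above) can be sketched in Python. This is purely illustrative and not part of the patch; the `authorize_tokens` and `authn_id` parameters mirror the module's GUCs of the same names:

```python
def validate_token(token, role, authn_id=None, authorize_tokens=True):
    """Mimic the C test module: authorize by default, and fall back to
    the connection's role name when no explicit authenticated identity
    has been configured via oauth_validator.authn_id."""
    return {
        "authorized": authorize_tokens,
        "authn_id": authn_id if authn_id is not None else role,
    }
```

With the defaults, any token is accepted and the authenticated identity is simply the requested role, which is what lets the Perl tests drive both the success and failure paths by flipping the two GUCs.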
Attachment: v40-0002-squash-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)

From de155343c816fa9ee8301522060576ee796857a4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 16 Dec 2024 13:57:14 -0800
Subject: [PATCH v40 2/3] squash! Add OAUTHBEARER SASL mechanism

Add require_auth=oauth support.
---
 doc/src/sgml/libpq.sgml                       |   9 +
 src/interfaces/libpq/fe-auth-oauth.c          |   7 +
 src/interfaces/libpq/fe-auth.c                |  22 +++
 src/interfaces/libpq/fe-connect.c             | 183 ++++++++++++++++--
 src/interfaces/libpq/libpq-int.h              |   2 +
 src/test/authentication/t/001_password.pl     |  18 +-
 .../modules/oauth_validator/t/001_server.pl   |  62 +++++-
 7 files changed, 278 insertions(+), 25 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 93266c4a0d0..fe0cbb0c800 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cb290c5c113..71940fb9f54 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -934,6 +934,13 @@ oauth_exchange(void *opaq, bool final,
 			*outputlen = strlen(*output);
 			state->step = FE_OAUTH_BEARER_SENT;
 
+			/*
+			 * For the purposes of require_auth, our side of authentication is
+			 * done at this point; the server will either accept the
+			 * connection or send an error. Unlike SCRAM, there is no
+			 * additional server data to check upon success.
+			 */
+			conn->client_finished_auth = true;
 			return SASL_CONTINUE;
 
 		case FE_OAUTH_BEARER_SENT:
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 6289b8b60a7..421278dacf8 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -552,6 +552,28 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 		goto error;
 	}
 
+	/* Make sure require_auth is satisfied. */
+	if (conn->require_auth)
+	{
+		bool		allowed = false;
+
+		for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		{
+			if (conn->sasl == conn->allowed_sasl_mechs[i])
+			{
+				allowed = true;
+				break;
+			}
+		}
+
+		if (!allowed)
+		{
+			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
+									conn->require_auth, selected_mechanism);
+			goto error;
+		}
+	}
+
 	if (conn->channel_binding[0] == 'r' &&	/* require */
 		strcmp(selected_mechanism, SCRAM_SHA_256_PLUS_NAME) != 0)
 	{
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 28a26a1d362..ff1322cb07d 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1129,6 +1129,57 @@ libpq_prng_init(PGconn *conn)
 	pg_prng_seed(&conn->prng_state, rseed);
 }
 
+/*
+ * Fills the connection's allowed_sasl_mechs list with all supported SASL
+ * mechanisms.
+ */
+static inline void
+fill_allowed_sasl_mechs(PGconn *conn)
+{
+	/*---
+	 * We only support two mechanisms at the moment, so rather than deal with a
+	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
+	 * rely on the compile-time assertion here to keep us honest.
+	 *
+	 * To add a new mechanism to require_auth,
+	 * - update the length of conn->allowed_sasl_mechs,
+	 * - add the new pg_fe_sasl_mech pointer to this function, and
+	 * - handle the new mechanism name in the require_auth portion of
+	 *   pqConnectOptions2(), below.
+	 */
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
+					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
+
+	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
+}
+
+/*
+ * Clears the connection's allowed_sasl_mechs list.
+ */
+static inline void
+clear_allowed_sasl_mechs(PGconn *conn)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		conn->allowed_sasl_mechs[i] = NULL;
+}
+
+/*
+ * Helper routine that searches the static allowed_sasl_mechs list for a
+ * specific mechanism.
+ */
+static inline int
+index_of_allowed_sasl_mech(PGconn *conn, const pg_fe_sasl_mech *mech)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+	{
+		if (conn->allowed_sasl_mechs[i] == mech)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  *		pqConnectOptions2
  *
@@ -1370,17 +1421,19 @@ pqConnectOptions2(PGconn *conn)
 		bool		negated = false;
 
 		/*
-		 * By default, start from an empty set of allowed options and add to
-		 * it.
+		 * By default, start from an empty set of allowed methods and
+		 * mechanisms, and add to it.
 		 */
 		conn->auth_required = true;
 		conn->allowed_auth_methods = 0;
+		clear_allowed_sasl_mechs(conn);
 
 		for (first = true, more = true; more; first = false)
 		{
 			char	   *method,
 					   *part;
-			uint32		bits;
+			uint32		bits = 0;
+			const pg_fe_sasl_mech *mech = NULL;
 
 			part = parse_comma_separated_list(&s, &more);
 			if (part == NULL)
@@ -1396,11 +1449,12 @@ pqConnectOptions2(PGconn *conn)
 				if (first)
 				{
 					/*
-					 * Switch to a permissive set of allowed options, and
-					 * subtract from it.
+					 * Switch to a permissive set of allowed methods and
+					 * mechanisms, and subtract from it.
 					 */
 					conn->auth_required = false;
 					conn->allowed_auth_methods = -1;
+					fill_allowed_sasl_mechs(conn);
 				}
 				else if (!negated)
 				{
@@ -1425,6 +1479,10 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
+			/*
+			 * First group: methods that can be handled solely with the
+			 * authentication request codes.
+			 */
 			if (strcmp(method, "password") == 0)
 			{
 				bits = (1 << AUTH_REQ_PASSWORD);
@@ -1443,13 +1501,26 @@ pqConnectOptions2(PGconn *conn)
 				bits = (1 << AUTH_REQ_SSPI);
 				bits |= (1 << AUTH_REQ_GSS_CONT);
 			}
+
+			/*
+			 * Next group: SASL mechanisms. All of these use the same request
+			 * codes, so the list of allowed mechanisms is tracked separately.
+			 *
+			 * fill_allowed_sasl_mechs() must be updated when adding a new
+			 * mechanism here!
+			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
-				/* This currently assumes that SCRAM is the only SASL method. */
-				bits = (1 << AUTH_REQ_SASL);
-				bits |= (1 << AUTH_REQ_SASL_CONT);
-				bits |= (1 << AUTH_REQ_SASL_FIN);
+				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
+
+			/*
+			 * Final group: meta-options.
+			 */
 			else if (strcmp(method, "none") == 0)
 			{
 				/*
@@ -1485,20 +1556,68 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
-			/* Update the bitmask. */
-			if (negated)
+			if (mech)
 			{
-				if ((conn->allowed_auth_methods & bits) == 0)
-					goto duplicate;
+				/*
+				 * Update the mechanism set only. The method bitmask will be
+				 * updated for SASL further down.
+				 */
+				Assert(!bits);
 
-				conn->allowed_auth_methods &= ~bits;
+				if (negated)
+				{
+					/* Remove the existing mechanism from the list. */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i < 0)
+						goto duplicate;
+
+					conn->allowed_sasl_mechs[i] = NULL;
+				}
+				else
+				{
+					/*
+					 * Find a space to put the new mechanism (after making
+					 * sure it's not already there).
+					 */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i >= 0)
+						goto duplicate;
+
+					i = index_of_allowed_sasl_mech(conn, NULL);
+					if (i < 0)
+					{
+						/* Should not happen; the pointer list is corrupted. */
+						Assert(false);
+
+						conn->status = CONNECTION_BAD;
+						libpq_append_conn_error(conn,
+												"internal error: no space in allowed_sasl_mechs");
+						free(part);
+						return false;
+					}
+
+					conn->allowed_sasl_mechs[i] = mech;
+				}
 			}
 			else
 			{
-				if ((conn->allowed_auth_methods & bits) == bits)
-					goto duplicate;
+				/* Update the method bitmask. */
+				Assert(bits);
 
-				conn->allowed_auth_methods |= bits;
+				if (negated)
+				{
+					if ((conn->allowed_auth_methods & bits) == 0)
+						goto duplicate;
+
+					conn->allowed_auth_methods &= ~bits;
+				}
+				else
+				{
+					if ((conn->allowed_auth_methods & bits) == bits)
+						goto duplicate;
+
+					conn->allowed_auth_methods |= bits;
+				}
 			}
 
 			free(part);
@@ -1517,6 +1636,36 @@ pqConnectOptions2(PGconn *conn)
 			free(part);
 			return false;
 		}
+
+		/*
+		 * Finally, allow SASL authentication requests if (and only if) we've
+		 * allowed any mechanisms.
+		 */
+		{
+			bool		allowed = false;
+			const uint32 sasl_bits =
+				(1 << AUTH_REQ_SASL)
+				| (1 << AUTH_REQ_SASL_CONT)
+				| (1 << AUTH_REQ_SASL_FIN);
+
+			for (i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+			{
+				if (conn->allowed_sasl_mechs[i])
+				{
+					allowed = true;
+					break;
+				}
+			}
+
+			/*
+			 * For the standard case, add the SASL bits to the (default-empty)
+			 * set if needed. For the negated case, remove them.
+			 */
+			if (!negated && allowed)
+				conn->allowed_auth_methods |= sasl_bits;
+			else if (negated && !allowed)
+				conn->allowed_auth_methods &= ~sasl_bits;
+		}
 	}
 
 	/*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 6e6ab2c1f51..fd571119033 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -511,6 +511,8 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
+													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 	char		current_auth_response;	/* used by pqTraceOutputMessage to
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 87e180af3d3..0838c080350 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -277,6 +277,16 @@ $node->connect_fails(
 	"require_auth methods cannot be duplicated, !none case",
 	expected_stderr =>
 	  qr/require_auth method "!none" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=scram-sha-256,scram-sha-256",
+	"require_auth methods cannot be duplicated, scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "scram-sha-256" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=!scram-sha-256,!scram-sha-256",
+	"require_auth methods cannot be duplicated, !scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "!scram-sha-256" is specified more than once/);
 
 # Unknown value defined in require_auth.
 $node->connect_fails(
@@ -394,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -455,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 10d2d3da929..77a106b2cc2 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -124,6 +124,60 @@ $node->connect_fails(
 	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
 );
 
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
 # Make sure the client_id and secret are correctly encoded. $vschars contains
 # every allowed character for a client_id/_secret (the "VSCHAR" class).
 # $vschars_esc is additionally backslash-escaped for inclusion in a
@@ -134,15 +188,15 @@ my $vschars_esc =
   " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
 
 $node->connect_ok(
-	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
 	"escapable characters: client_id",
 	expected_stderr =>
-	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
 $node->connect_ok(
-	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
 	"escapable characters: client_id and secret",
 	expected_stderr =>
-	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
 
 #
 # Further tests rely on support for specific behaviors in oauth_server.py. To
-- 
2.34.1

Attachment: v40-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/x-patch)
From 661de01c4ed0aeb62f1b3c0290a647c5fb75ed7a Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v40 3/3] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  195 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2507 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 ++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6287 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index bb5b07db275..dbc83df82fc 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -319,6 +319,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -403,8 +404,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 246210ad8a7..71a3d7d56f6 100644
--- a/meson.build
+++ b/meson.build
@@ -3361,6 +3361,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3527,6 +3530,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index 67376e4b7fd..c7fce098eb1 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
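For readers unfamiliar with protocol-level testing: the suite builds and parses raw PostgreSQL v3 wire messages instead of going through SQL. The sketch below, using only the standard library, shows roughly what that means for the startup packet (this is not the pq3 API from the patch, just a standalone illustration of the packet layout):

```python
import struct

def build_startup(params):
    """Build a v3.0 StartupMessage: Int32 length, Int32 protocol,
    NUL-terminated key/value pairs, and a final NUL terminator."""
    body = struct.pack("!I", 3 << 16 | 0)  # protocol 3.0 = major<<16 | minor
    for k, v in params.items():
        body += k.encode() + b"\x00" + v.encode() + b"\x00"
    body += b"\x00"  # packet terminator
    return struct.pack("!I", len(body) + 4) + body  # length includes itself

def parse_startup(pkt):
    """Parse a StartupMessage back into a dict of byte strings."""
    length, proto = struct.unpack_from("!II", pkt)
    assert length == len(pkt) and proto == (3 << 16)
    items = pkt[8:-2].split(b"\x00")  # drop header and trailing NULs
    return dict(zip(items[::2], items[1::2]))

pkt = build_startup({"user": "alice"})
assert parse_startup(pkt) == {b"user": b"alice"}
```

The real suite does this with Construct-based declarative parsers (pq3), which keeps the per-message boilerplate out of the tests themselves.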
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..9caa3a56d44
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Returns a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
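A note on the ClientHandshake class above: an exception raised inside a Python thread is normally swallowed, so the class captures it and re-raises it from check_completed() on the joining thread. Stripped of the psycopg2 specifics, the pattern looks like this (standalone sketch, not part of the patch):

```python
import threading

class Worker(threading.Thread):
    """Captures any exception from run() so the joiner can re-raise it."""

    def __init__(self, fn):
        super().__init__()
        self._fn = fn
        self.exception = None

    def run(self):
        try:
            self._fn()
        except Exception as e:
            self.exception = e  # stash it for the joining thread

    def check_completed(self, timeout=2):
        self.join(timeout)
        if self.is_alive():
            raise TimeoutError("worker did not finish within the timeout")
        if self.exception:
            e, self.exception = self.exception, None  # clear for future calls
            raise e

w = Worker(lambda: 1 / 0)
w.start()
try:
    w.check_completed()
    reraised = False
except ZeroDivisionError:
    reraised = True  # the thread's exception surfaced in the caller
```

This is why the accept fixture insists that tests either finish the handshake or call check_completed() themselves: otherwise a client-side failure would be silently lost at teardown.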
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
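Incidentally, the Hi() helper implemented by hand above is exactly PBKDF2 with HMAC-SHA-256 as the PRF (RFC 5802 Section 2.2 notes this equivalence), so hashlib can serve as an independent cross-check of the SaltedPassword computation. A standalone sketch:

```python
import hashlib
import hmac

def hmac_256(key, data):
    """HMAC-SHA-256, as in the test's helper."""
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def h_i(data, salt, i):
    """Hi(str, salt, i) from RFC 5802: U1 = HMAC(str, salt || INT(1)),
    Uk = HMAC(str, Uk-1), Hi = U1 XOR U2 XOR ... XOR Ui."""
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc

# One PBKDF2 output block with HMAC-SHA-256 is computed identically.
salted = h_i(b"secret", b"12345", 4096)
assert salted == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 4096)
```

(The test keeps its own tiny implementation on purpose, so it doesn't validate libpq's SCRAM code against the same primitive it uses itself.)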
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..c61e8f0c760
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2507 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the token itself may contain '='
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC 7628, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider server did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, check the hook calls: the client should have offered the
+        # custom flow once, then prompted the user once with the expected
+        # authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                fail_oauth_handshake(
+                    conn,
+                    {
+                        "status": "invalid_token",
+                        "openid-configuration": discovery_uri,
+                    },
+                )
+
+        # Expect the client to connect again.
+        sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+                pq3.send(
+                    conn,
+                    pq3.types.AuthnRequest,
+                    type=pq3.authn.SASLContinue,
+                    body=json.dumps(resp).encode("utf-8"),
+                )
+
+                # FIXME: the client disconnects at this point; it'd be nicer if
+                # it completed the exchange.
+
+            # The client should not reconnect.
+
+    else:
+        expect_disconnected_handshake(sock)
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error once the
+    # server asks for OAUTHBEARER, since the client can't proceed without them.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one pattern. Not the most
+    efficient approach, but easier to read and maintain.
+    """
+    return "|".join(f"({p})" for p in patterns)
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema() tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq tries to actually attempt
+# a connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id="some-id",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client must reply to an error challenge with a
+            # single dummy ^A (0x01) byte.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # Equivalent to INT_MAX from limits.h, assuming a 32-bit int.
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
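The slow_down handling exercised above can be sketched in a few lines. This is a minimal illustration of the RFC 8628 rule, not libpq's actual implementation, and `next_interval` is a hypothetical helper: on a `slow_down` error the polling interval must grow by five seconds, so a client has to check for overflow before adding.

```python
# Minimal sketch (not libpq's implementation) of the RFC 8628 rule the
# test above exercises: on a "slow_down" error the device-flow polling
# interval grows by five seconds, so the client must guard against a
# server-supplied interval that already sits near INT_MAX.
INT_MAX = 2**31 - 1


def next_interval(current, error=None):
    """Returns the next polling interval, in seconds."""
    if error == "slow_down":
        if current > INT_MAX - 5:
            # Corresponds to the "slow_down interval overflow" failure above.
            raise OverflowError("slow_down interval overflow")
        current += 5
    return current
```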
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG", raising=False)
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": to_http(openid_provider.discovery_uri),
+            }
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=json.dumps(resp).encode("utf-8"),
+            )
+
+            # FIXME: the client disconnects at this point; it'd be nicer if
+            # it completed the exchange.
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
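For reference, the failure path these tests drive can be sketched as follows. Per RFC 7628, the server answers a bad token with a JSON "error challenge", and the client acknowledges it with a single %x01 (kvsep) byte before the exchange is failed for real. `make_error_challenge` is a hypothetical helper for illustration only, not part of this patch.

```python
import json


# The server's JSON "error challenge" for a failed OAUTHBEARER exchange
# (RFC 7628 section 3.2.3). make_error_challenge is a hypothetical
# helper for illustration; it is not part of this patch.
def make_error_challenge(status, discovery_uri=None):
    body = {"status": status}
    if discovery_uri is not None:
        # Postgres advertises its discovery document this way.
        body["openid-configuration"] = discovery_uri
    return json.dumps(body).encode("utf-8")


# Per RFC 7628, the client acknowledges an error challenge with a single
# %x01 (kvsep) byte before the server fails the SASL exchange.
DUMMY_CLIENT_RESPONSE = b"\x01"
```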
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        pairs = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            pairs.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            pairs.append(v)
+
+        pairs.append(b"")
+        return pairs
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
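As a byte-level cross-check of the Construct definition above: the same layout can be assembled by hand with the standard library, a NUL-terminated mechanism name, a signed big-endian int32 length (-1 when no initial response is given), then the data bytes. `build_initial_response` is illustrative only, not part of pq3.

```python
import struct


# Hand-rolled equivalent of the SASLInitialResponse layout defined
# above: a NUL-terminated mechanism name, a signed big-endian int32
# length (-1 when no initial response is given), then the data bytes.
# build_initial_response is illustrative only, not part of pq3.
def build_initial_response(mechanism, data=None):
    length = -1 if data is None else len(data)
    out = mechanism + b"\x00" + struct.pack("!i", length)
    if data is not None:
        out += data
    return out
```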
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
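The right-hand column of the hexdump is produced with `bytes.translate()`. A standalone sketch of the same mapping, mirroring `_hexdump_translation_map()` above, shows the effect: every unprintable or non-ASCII byte is rendered as a dot.

```python
# Standalone sketch of the translation table built by
# _hexdump_translation_map() above: every unprintable or non-ASCII byte
# is rendered as "." in the right-hand column of the hexdump.
def hexdump_map():
    unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(unprintable, b"." * len(unprintable))
```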
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (special protocol version 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
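To make the Startup layout concrete: the packet that `send_startup()` builds via Construct can also be assembled by hand with the standard library, an int32 total length, an int32 protocol version (`major << 16 | minor`), and NUL-terminated key/value strings closed by one extra NUL. `build_startup` here is an illustrative sketch, not pq3's API.

```python
import struct


# Hand-assembled version of the v3 startup packet that send_startup()
# builds via Construct: int32 total length, int32 protocol version
# (major << 16 | minor), then NUL-terminated key/value strings closed
# by one extra NUL. build_startup is an illustrative sketch, not pq3 API.
def build_startup(params, major=3, minor=0):
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00" + v.encode("utf-8") + b"\x00"
    payload += b"\x00"
    header = struct.pack("!ii", len(payload) + 8, (major << 16) | minor)
    return header + payload
```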
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and a set of roles that use the oauth auth method. The
+    context object contains the dbname and user attributes as strings to be
+    used during connection, as well as the issuer and scope that have been set
+    in the HBA configuration.
+
+    This fixture assumes that the server provided by postgres_instance is
+    running on the local machine, and that the connecting role has rights to
+    create databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1" + "\n"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The size of the
+    generated token, in ASCII characters, may be specified; if unset, a small
+    16-character token will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for the given user (oauth_ctx.authz_user by
+    default) and checks that the server responds with a SASL authentication
+    request advertising OAUTHBEARER, and only OAUTHBEARER.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    Pulls packets off the pq3 connection until a packet with the desired type
+    is found and returned. If an ErrorResponse arrives first, a RuntimeError
+    is raised instead.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's expected behavior.
+    Any settings changed through it will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+
+            # Reload once, after all of the GUCs have been set.
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+
+        c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator, then establish a new connection.
+    setup_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup suffix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1

#172Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#171)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 20 Dec 2024, at 02:00, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Thanks for the new version, I was doing a v39 review but I'll roll that over
into a v40 review now.

As I was reading I was trying to identify parts that can be broken out and
committed ahead of time. This is not only to trim down the size, but mostly to
shape the final commit into a coherent single commit that brings a single piece
of functionality utilizing existing APIs. Basically I think we should keep
generic functionality out of the final commit and keep that focused on OAuth
and the required APIs and infra.

The async auth support seemed like a candidate to go in before the rest. While
there won't be any consumers of it yet, it's also not limited to OAuth. What do
you think about slicing that off and getting it in ahead of time? I took a
small stab at separating out the generic bits (it includes the
PG_MAX_AUTH_TOKEN_LENGTH move as well, which is unrelated but could also be
committed ahead of time) along with some small tweaks on it.

--
Daniel Gustafsson

Attachments:

async_auth_portion.diff.txttext/plain; name=async_auth_portion.diff.txt; x-unix-mode=0644Download
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 227b41daf6..01bc3c7250 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index 258bfd0564..b47011d077 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,17 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth appropriately before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 0bb820e0d9..da168eb2f5 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 20d3427e94..8aa972fdfb 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -430,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -578,26 +578,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -642,7 +664,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -672,11 +694,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -984,12 +1016,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1147,7 +1185,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1156,23 +1194,33 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 
 		case AUTH_REQ_SASL_CONT:
 		case AUTH_REQ_SASL_FIN:
-			if (conn->sasl_state == NULL)
-			{
-				appendPQExpBufferStr(&conn->errorMessage,
-									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
-				return STATUS_ERROR;
-			}
-			oldmsglen = conn->errorMessage.len;
-			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
 			{
-				/* Use this message if pg_SASL_continue didn't supply one */
-				if (conn->errorMessage.len == oldmsglen)
+				bool		final = false;
+
+				if (conn->sasl_state == NULL)
+				{
 					appendPQExpBufferStr(&conn->errorMessage,
-										 "fe_sendauth: error in SASL authentication\n");
-				return STATUS_ERROR;
+										 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
+					return STATUS_ERROR;
+				}
+				oldmsglen = conn->errorMessage.len;
+
+				if (areq == AUTH_REQ_SASL_FIN)
+					final = true;
+
+				if (pg_SASL_continue(conn, payloadlen, final, async) != STATUS_OK)
+				{
+					/*
+					 * Append a generic error message unless pg_SASL_continue
+					 * did set a more specific one already.
+					 */
+					if (conn->errorMessage.len == oldmsglen)
+						appendPQExpBufferStr(&conn->errorMessage,
+											 "fe_sendauth: error in SASL authentication\n");
+					return STATUS_ERROR;
+				}
+				break;
 			}
-			break;
 
 		default:
 			libpq_append_conn_error(conn, "authentication method %u not supported", areq);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index a18c508688..286b25c941 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -19,7 +19,8 @@
 
 
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index aaf87e8e88..fb786a2feb 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -479,6 +479,14 @@ pqDropConnection(PGconn *conn, bool flushInput)
 		closesocket(conn->sock);
 	conn->sock = PGINVALID_SOCKET;
 
+	/*
+	 * The altsock used for asynchronous authentication should be closed in
+	 * case it's still open.
+	 */
+	if (conn->altsock != PGINVALID_SOCKET)
+		closesocket(conn->altsock);
+	conn->altsock = PGINVALID_SOCKET;
+
 	/* Optionally discard any unread data */
 	if (flushInput)
 		conn->inStart = conn->inCursor = conn->inEnd = 0;
@@ -2645,6 +2653,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3680,6 +3689,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -3868,7 +3878,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -3893,6 +3913,14 @@ keep_going:						/* We will come back to here until there is
 					/* We are done with authentication exchange */
 					conn->status = CONNECTION_AUTH_OK;
 
+					/*
+					 * Close the altsock, which may have been used if this
+					 * was an asynchronous authentication.
+					 */
+					if (conn->altsock != PGINVALID_SOCKET)
+						closesocket(conn->altsock);
+					conn->altsock = PGINVALID_SOCKET;
+
 					/*
 					 * Set asyncStatus so that PQgetResult will think that
 					 * what comes back next is the result of a query.  See
@@ -3905,6 +3933,43 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+				pgsocket	altsock = PGINVALID_SOCKET;
+
+				if (!conn->async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				status = conn->async_auth(conn, &altsock);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+					if (conn->altsock != PGINVALID_SOCKET)
+						closesocket(conn->altsock);
+					conn->altsock = PGINVALID_SOCKET;
+
+					goto keep_going;
+				}
+
+				conn->altsock = altsock;
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4586,6 +4651,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -7227,6 +7293,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 928a47162d..e2ba483ea8 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1056,10 +1056,13 @@ static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
+	if (sock == PGINVALID_SOCKET)
 	{
 		libpq_append_conn_error(conn, "invalid socket");
 		return -1;
@@ -1076,7 +1079,7 @@ pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 15012c770c..8d4c4ffd4f 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -103,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 08cc391cbd..427b538c0b 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -506,6 +506,11 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callback for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn, pgsocket *altsock);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
#173Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#171)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 20.12.24 02:00, Jacob Champion wrote:

v40 also contains:
- explicit testing for connect_timeout compatibility
- support for require_auth=oauth, including compatibility with
require_auth=!scram-sha-256
- the ability for a validator to set authn_id even if the token is not
authorized, for auditability in the logs
- the use of pq_block_sigpipe() for additional safety in the face of
CURLOPT_NOSIGNAL

I have split out the require_auth changes temporarily (0002) for ease
of review, and I plan to ping the last thread where SASL support in
require_auth was discussed [1].

Some review of v40:

General:

There is a mix of using "URL" and "URI" throughout the patch. I tried
to look up in the source material (RFCs) what the correct use would
be, but even they are mixing it in nonobvious ways. Maybe this is
just hopelessly confused, or maybe there is a system that I don't
recognize.

* .cirrus.tasks.yml

Since libcurl is an "auto" meson option, it doesn't need to be enabled
explicitly. At least that's how most of the other feature options are
handled. So probably better to stick to that pattern.

* config/programs.m4

Useless whitespace change.

* configure.ac

+AC_MSG_CHECKING([whether to build with libcurl support for OAuth client flows])
etc.

Let's just write something like 'whether to build with libcurl
support' here. So we don't have to keep updating it if the scope of
the option changes.

* meson_options.txt

+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support for OAuth client flows')

Similarly, let's just write something like 'libcurl support' here.

* src/backend/libpq/auth-oauth.c

+typedef enum
+{
+   OAUTH_STATE_INIT = 0,
+   OAUTH_STATE_ERROR,
+   OAUTH_STATE_FINISHED,
+} oauth_state;
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+   oauth_state state;

This is the only use of that type definition. I think you can skip
the typedef and use the enum tag directly.

* src/interfaces/libpq/libpq-fe.h

+#ifdef WIN32
+#include <winsock2.h>          /* for SOCKET */
+#endif

Including a system header like that seems a bit unpleasant. AFAICT,
you only need it for this:

+   PostgresPollingStatusType (*async) (PGconn *conn,
+                                       struct _PGoauthBearerRequest *request,
+                                       SOCKTYPE * altsock);

But that function could already get the altsock handle via
conn->altsock. So maybe that is a way to avoid the whole socket type
dance in this header.

* src/test/authentication/t/001_password.pl

I suppose this file could be a separate commit? It just separates the
SASL/SCRAM terminology for existing functionality.

* src/test/modules/oauth_validator/fail_validator.c

+{
+   elog(FATAL, "fail_validator: sentinel error");
+   pg_unreachable();
+}

This pg_unreachable() is probably not necessary after elog(FATAL).

* .../modules/oauth_validator/oauth_hook_client.c

+#include <stdio.h>
+#include <stdlib.h>

These are generally not necessary, as they come in via c.h.

+#ifdef WIN32
+#include <winsock2.h>
+#else
+#include <sys/socket.h>
+#endif

I don't think this special Windows handling is necessary, since there
is src/include/port/win32/sys/socket.h.

+static void
+usage(char *argv[])
+{
+   fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);

Help output should go to stdout.

With the above changes, I think this patch set is structurally okay now.
Now it just needs to do the right things. ;-)

#174Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#172)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Dec 20, 2024 at 2:21 PM Daniel Gustafsson <daniel@yesql.se> wrote:

On 20 Dec 2024, at 02:00, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Thanks for the new version, I was doing a v39 review but I'll roll that over
into a v40 review now.

(Sorry for the rug pull!)

As I was reading I was trying to identify parts that can be broken out and
committed ahead of time. This is not only to trim down the size, but mostly to
shape the final commit into a coherent single commit that brings a single piece
of functionality utilizing existing APIs. Basically I think we should keep
generic functionality out of the final commit and keep that focused on OAuth
and the required APIs and infra.

Sounds good.

The async auth support seemed like a candidate to go in before the rest. While
there won't be any consumers of it yet, it's also not limited to OAuth. What do
you think about slicing that off and getting it in ahead of time? I took a
small stab at separating out the generic bits (it includes the
PG_MAX_AUTH_TOKEN_LENGTH move as well, which is unrelated but could also be
committed ahead of time) along with some small tweaks on it.

+1 to separating the PG_MAX_... macro move. I will take a closer look
at the async patch in isolation; there's some work I'm doing to fix a
bug Kashif (cc'd) found recently, and it has me a bit unsure about my
chosen order of operations in the async part of fe-connect.c. That
deserves its own email, but I need to investigate more.

Thanks!
--Jacob

#175Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#173)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Jan 2, 2025 at 2:11 AM Peter Eisentraut <peter@eisentraut.org> wrote:

There is a mix of using "URL" and "URI" throughout the patch. I tried
to look up in the source material (RFCs) what the correct use would
be, but even they are mixing it in nonobvious ways. Maybe this is
just hopelessly confused, or maybe there is a system that I don't
recognize.

Ugh, yeah. I think my "system" was whether the RFC I read most
recently had used "URL" or "URI" more in the text.

In an ideal world, I'd just switch to "URL" to avoid confusion. There
are some URNs in use as part of OAuth (e.g.
`urn:ietf:params:oauth:grant-type:device_code`) but I don't think I
refer to those as URIs anyway. And more esoteric forms of URI (data:)
are not allowed.

However... there are some phrases, like "Well-Known URI", where that's
just the Name of the Thing. Similarly, when the wire protocol itself
uses "URI" (say, in JSON field names), I'd rather be consistent to
make searching easier.

Is it enough to prefer "URL" in the user-facing documentation (at
least, when it doesn't conflict with other established naming
conventions), and accept both in the code?

* src/interfaces/libpq/libpq-fe.h

+#ifdef WIN32
+#include <winsock2.h>          /* for SOCKET */
+#endif

Including a system header like that seems a bit unpleasant. AFAICT,
you only need it for this:

+   PostgresPollingStatusType (*async) (PGconn *conn,
+                                       struct _PGoauthBearerRequest
*request,
+                                       SOCKTYPE * altsock);

But that function could already get the altsock handle via
conn->altsock. So maybe that is a way to avoid the whole socket type
dance in this header.

It'd also couple clients against libpq-int.h, so they'd have to
remember to recompile every release. I'm worried that'd cause a bunch
of ABI problems...

I could cheat and use uintptr_t instead of SOCKET on Windows, but that
seems like it might bite us in Win32-adjacent environments? It seems
to pass Cirrus okay. Other ideas?

* src/test/authentication/t/001_password.pl

I suppose this file could be a separate commit? It just separates the
SASL/SCRAM terminology for existing functionality.

The scram-sha-256 duplication tests could, I suppose. But the only
reason that's interesting to test now is because of the change to the
internals. The "server requested SCRAM-SHA-256 authentication" error
message change comes in with the new require_auth handling, so that
should all land in the same patch.

Along those lines, though, Michael Paquier suggested that maybe I
could pull the require_auth prefactoring up to the front of the
patchset. That might look a bit odd until OAuth support lands, since
it won't be adding any new useful value, but I will give it a shot.

* src/test/modules/oauth_validator/fail_validator.c

+{
+   elog(FATAL, "fail_validator: sentinel error");
+   pg_unreachable();
+}

This pg_unreachable() is probably not necessary after elog(FATAL).

Cirrus completes successfully with that, but MSVC starts complaining:

warning C4715: 'fail_token': not all control paths return a value

Is that expected?

--

All other suggestions will be addressed in the next patchset. Thanks!

--Jacob

#176Kashif Zeeshan
kashi.zeeshan@gmail.com
In reply to: Jacob Champion (#174)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jan 8, 2025 at 3:21 AM Jacob Champion <
jacob.champion@enterprisedb.com> wrote:

On Fri, Dec 20, 2024 at 2:21 PM Daniel Gustafsson <daniel@yesql.se> wrote:

On 20 Dec 2024, at 02:00, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Thanks for the new version, I was doing a v39 review but I'll roll that over
into a v40 review now.

(Sorry for the rug pull!)

As I was reading I was trying to identify parts that can be broken out and
committed ahead of time. This is not only to trim down size, but mostly to
shape the final commit into a coherent single commit that brings a single
functionality utilizing existing APIs. Basically I think we should keep
generic functionality out of the final commit and keep that focused on OAuth
and the required APIs and infra.

Sounds good.

The async auth support seemed like a candidate to go in before the rest.
While there won't be any consumers of it, it's also not limited to OAuth.
What do you think about slicing that off and getting it in ahead of time? I
took a small stab at separating out the generic bits (it includes the
PG_MAX_AUTH_TOKEN_LENGTH move as well, which is unrelated but could also be
committed ahead of time) along with some small tweaks on it.

+1 to separating the PG_MAX_... macro move. I will take a closer look
at the async patch in isolation; there's some work I'm doing to fix a
bug Kashif (cc'd) found recently, and it has me a bit unsure about my
chosen order of operations in the async part of fe-connect.c. That
deserves its own email, but I need to investigate more.

Thanks Jacob
Most of the testing with psql is done, and I'm working on the remaining test
cases.


Thanks!
--Jacob

#177Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#175)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jan 7, 2025 at 2:24 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Along those lines, though, Michael Paquier suggested that maybe I
could pull the require_auth prefactoring up to the front of the
patchset. That might look a bit odd until OAuth support lands, since
it won't be adding any new useful value, but I will give it a shot.

While I take a look at the async patch from upthread, here is my
attempt at pulling the require_auth change out.

Note that there's a dead branch that cannot be exercised until OAuth
lands. We're not going to process the SASL mechanism name at all if no
mechanisms are allowed to begin with, and right now SASL is synonymous
with SCRAM. I can change that by always allowing AuthenticationSASL
messages -- even if none of the allowed authentication types use SASL
-- but that approach didn't seem to generate excitement on- or
off-list the last time I proposed it [1].

Thanks,
--Jacob

[1]: /messages/by-id/CAAWbhmg+GzNMK5Li182BKSbzoFVaKk_dDJ628NnuV80GqYgFFg@mail.gmail.com

Attachments:

require_auth_portion.diff.txt (text/plain)
commit f8bbb0f2a2e0f4840bdeb5c9a7b9a35797280aaf
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Date:   Mon Dec 16 13:57:14 2024 -0800

    require_auth: prepare for multiple SASL mechanisms
    
    Prior to this patch, the require_auth implementation assumed that the
    AuthenticationSASL protocol message was synonymous with SCRAM-SHA-256.
    In preparation for the OAUTHBEARER SASL mechanism, split the
    implementation into two tiers: the first checks the acceptable
    AUTH_REQ_* codes, and the second checks acceptable mechanisms if
    AUTH_REQ_SASL et al are permitted.
    
    conn->allowed_sasl_mechs is the list of pointers to acceptable
    mechanisms. (Since we'll support only a small number of mechanisms, this
    is an array of static length to minimize bookkeeping.) pg_SASL_init()
    will bail if the selected mechanism isn't contained in this array.
    
    Since there's only one mechanism supported right now, one branch of the
    second tier cannot be exercised yet (it's marked with Assert(false)).
    This assertion will need to be removed when the next mechanism is added.

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 14a9a862f51..722bb47ee14 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -543,6 +543,35 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
+	/* Make sure require_auth is satisfied. */
+	if (conn->require_auth)
+	{
+		bool		allowed = false;
+
+		for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		{
+			if (conn->sasl == conn->allowed_sasl_mechs[i])
+			{
+				allowed = true;
+				break;
+			}
+		}
+
+		if (!allowed)
+		{
+			/*
+			 * TODO: this is dead code until a second SASL mechanism is added;
+			 * the connection can't have proceeded past check_expected_areq()
+			 * if no SASL methods are allowed.
+			 */
+			Assert(false);
+
+			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
+									conn->require_auth, selected_mechanism);
+			goto error;
+		}
+	}
+
 	if (conn->channel_binding[0] == 'r' &&	/* require */
 		strcmp(selected_mechanism, SCRAM_SHA_256_PLUS_NAME) != 0)
 	{
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8f211821eb2..6f262706b0a 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1110,6 +1110,56 @@ libpq_prng_init(PGconn *conn)
 	pg_prng_seed(&conn->prng_state, rseed);
 }
 
+/*
+ * Fills the connection's allowed_sasl_mechs list with all supported SASL
+ * mechanisms.
+ */
+static inline void
+fill_allowed_sasl_mechs(PGconn *conn)
+{
+	/*---
+	 * We only support one mechanism at the moment, so rather than deal with a
+	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
+	 * rely on the compile-time assertion here to keep us honest.
+	 *
+	 * To add a new mechanism to require_auth,
+	 * - update the length of conn->allowed_sasl_mechs,
+	 * - add the new pg_fe_sasl_mech pointer to this function, and
+	 * - handle the new mechanism name in the require_auth portion of
+	 *   pqConnectOptions2(), below.
+	 */
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
+
+	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+}
+
+/*
+ * Clears the connection's allowed_sasl_mechs list.
+ */
+static inline void
+clear_allowed_sasl_mechs(PGconn *conn)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		conn->allowed_sasl_mechs[i] = NULL;
+}
+
+/*
+ * Helper routine that searches the static allowed_sasl_mechs list for a
+ * specific mechanism.
+ */
+static inline int
+index_of_allowed_sasl_mech(PGconn *conn, const pg_fe_sasl_mech *mech)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+	{
+		if (conn->allowed_sasl_mechs[i] == mech)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  *		pqConnectOptions2
  *
@@ -1351,17 +1401,19 @@ pqConnectOptions2(PGconn *conn)
 		bool		negated = false;
 
 		/*
-		 * By default, start from an empty set of allowed options and add to
-		 * it.
+		 * By default, start from an empty set of allowed methods and
+		 * mechanisms, and add to it.
 		 */
 		conn->auth_required = true;
 		conn->allowed_auth_methods = 0;
+		clear_allowed_sasl_mechs(conn);
 
 		for (first = true, more = true; more; first = false)
 		{
 			char	   *method,
 					   *part;
-			uint32		bits;
+			uint32		bits = 0;
+			const pg_fe_sasl_mech *mech = NULL;
 
 			part = parse_comma_separated_list(&s, &more);
 			if (part == NULL)
@@ -1377,11 +1429,12 @@ pqConnectOptions2(PGconn *conn)
 				if (first)
 				{
 					/*
-					 * Switch to a permissive set of allowed options, and
-					 * subtract from it.
+					 * Switch to a permissive set of allowed methods and
+					 * mechanisms, and subtract from it.
 					 */
 					conn->auth_required = false;
 					conn->allowed_auth_methods = -1;
+					fill_allowed_sasl_mechs(conn);
 				}
 				else if (!negated)
 				{
@@ -1406,6 +1459,10 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
+			/*
+			 * First group: methods that can be handled solely with the
+			 * authentication request codes.
+			 */
 			if (strcmp(method, "password") == 0)
 			{
 				bits = (1 << AUTH_REQ_PASSWORD);
@@ -1424,13 +1481,22 @@ pqConnectOptions2(PGconn *conn)
 				bits = (1 << AUTH_REQ_SSPI);
 				bits |= (1 << AUTH_REQ_GSS_CONT);
 			}
+
+			/*
+			 * Next group: SASL mechanisms. All of these use the same request
+			 * codes, so the list of allowed mechanisms is tracked separately.
+			 *
+			 * fill_allowed_sasl_mechs() must be updated when adding a new
+			 * mechanism here!
+			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
-				/* This currently assumes that SCRAM is the only SASL method. */
-				bits = (1 << AUTH_REQ_SASL);
-				bits |= (1 << AUTH_REQ_SASL_CONT);
-				bits |= (1 << AUTH_REQ_SASL_FIN);
+				mech = &pg_scram_mech;
 			}
+
+			/*
+			 * Final group: meta-options.
+			 */
 			else if (strcmp(method, "none") == 0)
 			{
 				/*
@@ -1466,20 +1532,68 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
-			/* Update the bitmask. */
-			if (negated)
+			if (mech)
 			{
-				if ((conn->allowed_auth_methods & bits) == 0)
-					goto duplicate;
+				/*
+				 * Update the mechanism set only. The method bitmask will be
+				 * updated for SASL further down.
+				 */
+				Assert(!bits);
+
+				if (negated)
+				{
+					/* Remove the existing mechanism from the list. */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i < 0)
+						goto duplicate;
 
-				conn->allowed_auth_methods &= ~bits;
+					conn->allowed_sasl_mechs[i] = NULL;
+				}
+				else
+				{
+					/*
+					 * Find a space to put the new mechanism (after making
+					 * sure it's not already there).
+					 */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i >= 0)
+						goto duplicate;
+
+					i = index_of_allowed_sasl_mech(conn, NULL);
+					if (i < 0)
+					{
+						/* Should not happen; the pointer list is corrupted. */
+						Assert(false);
+
+						conn->status = CONNECTION_BAD;
+						libpq_append_conn_error(conn,
+												"internal error: no space in allowed_sasl_mechs");
+						free(part);
+						return false;
+					}
+
+					conn->allowed_sasl_mechs[i] = mech;
+				}
 			}
 			else
 			{
-				if ((conn->allowed_auth_methods & bits) == bits)
-					goto duplicate;
+				/* Update the method bitmask. */
+				Assert(bits);
+
+				if (negated)
+				{
+					if ((conn->allowed_auth_methods & bits) == 0)
+						goto duplicate;
+
+					conn->allowed_auth_methods &= ~bits;
+				}
+				else
+				{
+					if ((conn->allowed_auth_methods & bits) == bits)
+						goto duplicate;
 
-				conn->allowed_auth_methods |= bits;
+					conn->allowed_auth_methods |= bits;
+				}
 			}
 
 			free(part);
@@ -1498,6 +1612,36 @@ pqConnectOptions2(PGconn *conn)
 			free(part);
 			return false;
 		}
+
+		/*
+		 * Finally, allow SASL authentication requests if (and only if) we've
+		 * allowed any mechanisms.
+		 */
+		{
+			bool		allowed = false;
+			const uint32 sasl_bits =
+				(1 << AUTH_REQ_SASL)
+				| (1 << AUTH_REQ_SASL_CONT)
+				| (1 << AUTH_REQ_SASL_FIN);
+
+			for (i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+			{
+				if (conn->allowed_sasl_mechs[i])
+				{
+					allowed = true;
+					break;
+				}
+			}
+
+			/*
+			 * For the standard case, add the SASL bits to the (default-empty)
+			 * set if needed. For the negated case, remove them.
+			 */
+			if (!negated && allowed)
+				conn->allowed_auth_methods |= sasl_bits;
+			else if (negated && !allowed)
+				conn->allowed_auth_methods &= ~sasl_bits;
+		}
 	}
 
 	/*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4a5a7c8b5e3..d372276c486 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -501,6 +501,8 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
+	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 	char		current_auth_response;	/* used by pqTraceOutputMessage to
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 773238b76fd..1357f806b6f 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -277,6 +277,16 @@ $node->connect_fails(
 	"require_auth methods cannot be duplicated, !none case",
 	expected_stderr =>
 	  qr/require_auth method "!none" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=scram-sha-256,scram-sha-256",
+	"require_auth methods cannot be duplicated, scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "scram-sha-256" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=!scram-sha-256,!scram-sha-256",
+	"require_auth methods cannot be duplicated, !scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "!scram-sha-256" is specified more than once/);
 
 # Unknown value defined in require_auth.
 $node->connect_fails(
#178Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#175)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 07.01.25 23:24, Jacob Champion wrote:

On Thu, Jan 2, 2025 at 2:11 AM Peter Eisentraut <peter@eisentraut.org> wrote:

There is a mix of using "URL" and "URI" throughout the patch. I tried
to look up in the source material (RFCs) what the correct use would
be, but even they are mixing it in nonobvious ways. Maybe this is
just hopelessly confused, or maybe there is a system that I don't
recognize.

Ugh, yeah. I think my "system" was whether the RFC I read most
recently had used "URL" or "URI" more in the text.

In an ideal world, I'd just switch to "URL" to avoid confusion. There
are some URNs in use as part of OAuth (e.g.
`urn:ietf:params:oauth:grant-type:device_code`) but I don't think I
refer to those as URIs anyway. And more esoteric forms of URI (data:)
are not allowed.

However... there are some phrases, like "Well-Known URI", where that's
just the Name of the Thing. Similarly, when the wire protocol itself
uses "URI" (say, in JSON field names), I'd rather be consistent to
make searching easier.

Is it enough to prefer "URL" in the user-facing documentation (at
least, when it doesn't conflict with other established naming
conventions), and accept both in the code?

The above explanation makes sense to me. I don't know what you mean by
"accept in the code". I would agree with "tolerate some inconsistency"
in the code but not with, like, create alias names for all the interface
names.

* src/interfaces/libpq/libpq-fe.h

+#ifdef WIN32
+#include <winsock2.h>          /* for SOCKET */
+#endif

Including a system header like that seems a bit unpleasant. AFAICT,
you only need it for this:

+   PostgresPollingStatusType (*async) (PGconn *conn,
+                                       struct _PGoauthBearerRequest
*request,
+                                       SOCKTYPE * altsock);

But that function could already get the altsock handle via
conn->altsock. So maybe that is a way to avoid the whole socket type
dance in this header.

It'd also couple clients against libpq-int.h, so they'd have to
remember to recompile every release. I'm worried that'd cause a bunch
of ABI problems...

Couldn't that function use PQsocket() to get at the actual socket from
the PGconn handle?

* src/test/modules/oauth_validator/fail_validator.c

+{
+   elog(FATAL, "fail_validator: sentinel error");
+   pg_unreachable();
+}

This pg_unreachable() is probably not necessary after elog(FATAL).

Cirrus completes successfully with that, but MSVC starts complaining:

warning C4715: 'fail_token': not all control paths return a value

Is that expected?

Ah yes, because MSVC doesn't support the noreturn attribute. (See
</messages/by-id/pxr5b3z7jmkpenssra5zroxi7qzzp6eswuggokw64axmdixpnk@zbwxuq7gbbcw>.)
So ok to leave as you had it for now.

#179Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#178)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jan 8, 2025 at 11:37 AM Peter Eisentraut <peter@eisentraut.org> wrote:

I don't know what you mean by
"accept in the code". I would agree with "tolerate some inconsistency"
in the code but not with, like, create alias names for all the interface
names.

"Tolerate inconsistency" was what I had in mind. So I'll plan to do a
pass on the user documentation, but not a search-and-replace in the
code at this point.

It'd also couple clients against libpq-int.h, so they'd have to
remember to recompile every release. I'm worried that'd cause a bunch
of ABI problems...

Couldn't that function use PQsocket() to get at the actual socket from
the PGconn handle?

It's an output parameter (i.e. the async callback is responsible for
setting conn->altsock). Unless I've missed the point entirely, I don't
think PQsocket() helps here.

warning C4715: 'fail_token': not all control paths return a value

Is that expected?

Ah yes, because MSVC doesn't support the noreturn attribute. (See
</messages/by-id/pxr5b3z7jmkpenssra5zroxi7qzzp6eswuggokw64axmdixpnk@zbwxuq7gbbcw>.)
So ok to leave as you had it for now.

Will do.

Thanks!
--Jacob

#180Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#179)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 08.01.25 21:29, Jacob Champion wrote:

It'd also couple clients against libpq-int.h, so they'd have to
remember to recompile every release. I'm worried that'd cause a bunch
of ABI problems...

Couldn't that function use PQsocket() to get at the actual socket from
the PGconn handle?

It's an output parameter (i.e. the async callback is responsible for
setting conn->altsock). Unless I've missed the point entirely, I don't
think PQsocket() helps here.

Maybe it would work to just use plain "int" as the type here. Any
socket number must fit into int anyway in order for PQsocket() to be
able to return it. The way I understand Windows socket handles, this
should work.

#181Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#180)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Jan 9, 2025 at 8:17 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Maybe it would work to just use plain "int" as the type here. Any
socket number must fit into int anyway in order for PQsocket() to be
able to return it. The way I understand Windows socket handles, this
should work.

Looks like it should work for current Windows, yeah. This is the
approach taken by OpenSSL [1].

It'd be sad to copy-paste the API bug into a new place, though. If
we're going to disconnect this API from SOCKET, can we use uintptr_t
instead on Windows? If someone eventually adds an alternative to
PQsocket(), as Tom suggested in [2], it'd be nice not to have to
duplicate this callback too.

--Jacob

[1]: https://docs.openssl.org/3.4/man3/SSL_set_fd/#notes
[2]: /messages/by-id/153442.1624889951@sss.pgh.pa.us

#182Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#171)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 20 Dec 2024, at 02:00, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

v40 also contains:

A few small comments on v40 rather than saving up for a longer email:

+ ereport(LOG, errmsg("Internal error in OAuth validator module"));
Tiny nitpick, the errmsg() should start with lowercase 'i'.

+ libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-libcurl");
Since we at some point will move away from autoconf I think we should avoid
such implementation details in error messages. How about "was not built with
libcurl support"?

+ * Find the start of the .well-known prefix. IETF rules state this must be
+ * at the beginning of the path component, but OIDC defined it at the end
+ * instead, so we have to search for it anywhere.
I was looking for a reference for OIDC defining the WK prefix placement but I
could only find it deferring to RFC5785 like how RFC8414 does.  Can you inject
a document reference for this?

+ if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
Shouldn't the scheme component really be compared case-insensitive, or has it
been normalized at this point? Not sure how much it matters in practice but if
not perhaps we should add a TODO marker there?

Support for oauth seems to be missing from pg_hba_file_rules() which should be
added in hbafuncs.c:get_hba_options().

--
Daniel Gustafsson

#183Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#182)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Jan 9, 2025 at 12:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:

+ * Find the start of the .well-known prefix. IETF rules state this must be
+ * at the beginning of the path component, but OIDC defined it at the end
+ * instead, so we have to search for it anywhere.
I was looking for a reference for OIDC defining the WK prefix placement but I
could only find it deferring to RFC5785 like how RFC8414 does.  Can you inject
a document reference for this?

I'll add a note in the comment. It's in Section 4 of OIDC Discovery
1.0: https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig

(This references RFC 5785, but only to obliquely point out that it's
not actually compliant with RFC 5785, if the issuer ID has a path
component. Section 4.1 gives an example.)

+ if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
Shouldn't the scheme component really be compared case-insensitive, or has it
been normalized at this point? Not sure how much it matters in practice but if
not perhaps we should add a TODO marker there?

I don't think we should. While I've read some fights about the meaning
of "identical" in OIDCD Sec 4.3, IETF seems to be pushing hard for
exact equality of issuer IDs. RFC 9207 says [1]:

This [issuer] comparison MUST use simple string comparison as
defined in Section 6.2.1 of [RFC3986].

(Simple string comparison being byte/character-wise rather than
performing a normalization step.) While RFC 9207 doesn't govern the
Device Authorization flow yet (maybe not ever?), the current OAuth 2.1
draft refers to its rules as a MUST [2], and I think we should just be
strict for the safety of future flow implementations.

I'm sure someone's going to complain at some point, but IMNSHO, the
fix for them is just to use the same formatting and capitalization as
the discovery document, and move on.

--

I'll address the other comments in the upcoming v41.

Thanks!
--Jacob

[1]: https://www.rfc-editor.org/rfc/rfc9207#section-2.4
[2]: https://www.ietf.org/archive/id/draft-ietf-oauth-v2-1-12.html#section-7.13.1

#184Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#183)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 9 Jan 2025, at 23:35, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Thu, Jan 9, 2025 at 12:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:

+ * Find the start of the .well-known prefix. IETF rules state this must be
+ * at the beginning of the path component, but OIDC defined it at the end
+ * instead, so we have to search for it anywhere.
I was looking for a reference for OIDC defining the WK prefix placement but I
could only find it deferring to RFC5785 like how RFC8414 does.  Can you inject
a document reference for this?

I'll add a note in the comment. It's in Section 4 of OIDC Discovery
1.0: https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig

(This references RFC 5785, but only to obliquely point out that it's
not actually compliant with RFC 5785, if the issuer ID has a path
component. Section 4.1 gives an example.)

Thanks!

+ if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
Shouldn't the scheme component really be compared case-insensitive, or has it
been normalized at this point? Not sure how much it matters in practice but if
not perhaps we should add a TODO marker there?

I don't think we should. While I've read some fights about the meaning
of "identical" in OIDCD Sec 4.3, IETF seems to be pushing hard for
exact equality of issuer IDs. RFC 9207 says [1]

This [issuer] comparison MUST use simple string comparison as
defined in Section 6.2.1 of [RFC3986].

(Simple string comparison being byte/character-wise rather than
performing a normalization step.) While RFC 9207 doesn't govern the
Device Authorization flow yet (maybe not ever?), the current OAuth 2.1
draft refers to its rules as a MUST [2], and I think we should just be
strict for the safety of future flow implementations.

I'm sure someone's going to complain at some point, but IMNSHO, the
fix for them is just to use the same formatting and capitalization as
the discovery document, and move on.

Fair enough, I buy that. Maybe the above could be de-opinionated slightly and added as a comment to help others reading the code down the line?

--
Daniel Gustafsson

#185Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#181)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 09.01.25 20:18, Jacob Champion wrote:

It'd be sad to copy-paste the API bug into a new place, though. If
we're going to disconnect this API from SOCKET, can we use uintptr_t
instead on Windows? If someone eventually adds an alternative to
PQsocket(), as Tom suggested in [2], it'd be nice not to have to
duplicate this callback too.

Assuming that uintptr_t is the right underlying type for SOCKET, that
seems ok.

But also note that

#ifdef WIN32

might not work because WIN32 is defined by the PostgreSQL build system,
not by the compiler (see src/include/port/win32.h).

#186Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#185)
8 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jan 10, 2025 at 5:27 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Assuming that uintptr_t is the right underlying type for SOCKET, that
seems ok.

Best I can tell, SOCKET is a UINT_PTR (distinct from PUINT) so I think
that's correct.

#ifdef WIN32

might not work because WIN32 is defined by the PostgreSQL build system,
not by the compiler (see src/include/port/win32.h).

Ah, thanks for catching that. Changed to _WIN32.

On Thu, Jan 9, 2025 at 2:44 PM Daniel Gustafsson <daniel@yesql.se> wrote:

On 9 Jan 2025, at 23:35, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
I'm sure someone's going to complain at some point, but IMNSHO, the
fix for them is just to use the same formatting and capitalization as
the discovery document, and move on.

Fair enough, I buy that. Maybe the above could be de-opinionated slightly and added as a comment to help others reading the code down the line?

Done!

On Thu, Jan 9, 2025 at 12:40 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Support for oauth seems to be missing from pg_hba_file_rules() which should be
added in hbafuncs.c:get_hba_options().

Done, and added a basic test for it too.

--

v41 handles the feedback since v40, continues fleshing out
documentation, and splits out three prefactoring patches:
- 0001 moves PG_MAX_AUTH_TOKEN_LENGTH, as discussed upthread
- 0002 handles the non-OAuth-specific changes to require_auth (0005
now highlights the OAuth-specific pieces)
- 0003 adds SASL_ASYNC and its handling code

When I applied Daniel's async_auth_portion patch from earlier, custom
flows began double-freeing their socket descriptors. That made logical
sense but it wasn't immediately clear to me how to fix it, until I
realized that "async authentication state" and "SASL mechanism state"
are two different things with different lifetimes, and my previous
attempts conflated them. 0003 introduces a cleanup_async_auth()
callback, to explicitly free the altsock and its related supporting
allocations as soon as it's no longer needed. Not only does this solve
the double-free, it removes an extra layer of indirection from 0004
and neatly fixes a TODO where the Curl handles were sticking around
for the lifetime of the Postgres connection. Assertions have been
added to keep the new internal API consistent.

pqSocketCheck() was returning ready if it found buffered SSL data,
even if an altsock had been set. I separated the two paths more
completely in 0003.

The FreeBSD 13.3 image started failing to correctly resolve libcurl
package dependencies, leading to missing libssh2 symbols at runtime.
And 13.3 went EOL at the end of 2024 -- which is possibly related to
the breakage? -- so I seemingly cannot perform a `pkg update` to try
to fix things. I've added a hack around this in 0006 that can
hopefully be removed again when our Cirrus images transition to 14.2 [1].

Next email will discuss the architectural bug that Kashif found.

Thanks,
--Jacob

[1]: https://github.com/anarazel/pg-vm-images/pull/109

Attachments:

since-v40.diff.txttext/plain; charset=US-ASCII; name=since-v40.diff.txtDownload
-:  ----------- > 1:  386e7c4df31 Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h
2:  de155343c81 ! 2:  b829f7a8ac7 squash! Add OAUTHBEARER SASL mechanism
    @@ Metadata
     Author: Jacob Champion <jacob.champion@enterprisedb.com>
     
      ## Commit message ##
    -    squash! Add OAUTHBEARER SASL mechanism
    +    require_auth: prepare for multiple SASL mechanisms
     
    -    Add require_auth=oauth support.
    +    Prior to this patch, the require_auth implementation assumed that the
    +    AuthenticationSASL protocol message was synonymous with SCRAM-SHA-256.
    +    In preparation for the OAUTHBEARER SASL mechanism, split the
    +    implementation into two tiers: the first checks the acceptable
    +    AUTH_REQ_* codes, and the second checks acceptable mechanisms if
    +    AUTH_REQ_SASL et al are permitted.
     
    - ## doc/src/sgml/libpq.sgml ##
    -@@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
    -           </listitem>
    -          </varlistentry>
    - 
    -+         <varlistentry>
    -+          <term><literal>oauth</literal></term>
    -+          <listitem>
    -+           <para>
    -+            The server must request an OAuth bearer token from the client.
    -+           </para>
    -+          </listitem>
    -+         </varlistentry>
    -+
    -          <varlistentry>
    -           <term><literal>none</literal></term>
    -           <listitem>
    +    conn->allowed_sasl_mechs is the list of pointers to acceptable
    +    mechanisms. (Since we'll support only a small number of mechanisms, this
    +    is an array of static length to minimize bookkeeping.) pg_SASL_init()
    +    will bail if the selected mechanism isn't contained in this array.
     
    - ## src/interfaces/libpq/fe-auth-oauth.c ##
    -@@ src/interfaces/libpq/fe-auth-oauth.c: oauth_exchange(void *opaq, bool final,
    - 			*outputlen = strlen(*output);
    - 			state->step = FE_OAUTH_BEARER_SENT;
    - 
    -+			/*
    -+			 * For the purposes of require_auth, our side of authentication is
    -+			 * done at this point; the server will either accept the
    -+			 * connection or send an error. Unlike SCRAM, there is no
    -+			 * additional server data to check upon success.
    -+			 */
    -+			conn->client_finished_auth = true;
    - 			return SASL_CONTINUE;
    - 
    - 		case FE_OAUTH_BEARER_SENT:
     +    Since there's only one mechanism supported right now, one branch of the
    +    second tier cannot be exercised yet (it's marked with Assert(false)).
    +    This assertion will need to be removed when the next mechanism is added.
     
      ## src/interfaces/libpq/fe-auth.c ##
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
    +@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      		goto error;
      	}
      
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen, bool
     +
     +		if (!allowed)
     +		{
    ++			/*
    ++			 * TODO: this is dead code until a second SASL mechanism is added;
    ++			 * the connection can't have proceeded past check_expected_areq()
    ++			 * if no SASL methods are allowed.
    ++			 */
    ++			Assert(false);
    ++
     +			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
     +									conn->require_auth, selected_mechanism);
     +			goto error;
    @@ src/interfaces/libpq/fe-connect.c: libpq_prng_init(PGconn *conn)
     +fill_allowed_sasl_mechs(PGconn *conn)
     +{
     +	/*---
    -+	 * We only support two mechanisms at the moment, so rather than deal with a
    ++	 * We only support one mechanism at the moment, so rather than deal with a
     +	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
     +	 * rely on the compile-time assertion here to keep us honest.
     +	 *
    @@ src/interfaces/libpq/fe-connect.c: libpq_prng_init(PGconn *conn)
     +	 * - handle the new mechanism name in the require_auth portion of
     +	 *   pqConnectOptions2(), below.
     +	 */
    -+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
    ++	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
     +					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
     +
     +	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
    -+	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     -				bits |= (1 << AUTH_REQ_SASL_FIN);
     +				mech = &pg_scram_mech;
      			}
    -+			else if (strcmp(method, "oauth") == 0)
    -+			{
    -+				mech = &pg_oauth_mech;
    -+			}
     +
     +			/*
     +			 * Final group: meta-options.
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     +				 * updated for SASL further down.
     +				 */
     +				Assert(!bits);
    - 
    --				conn->allowed_auth_methods &= ~bits;
    ++
     +				if (negated)
     +				{
     +					/* Remove the existing mechanism from the list. */
     +					i = index_of_allowed_sasl_mech(conn, mech);
     +					if (i < 0)
     +						goto duplicate;
    -+
    + 
    +-				conn->allowed_auth_methods &= ~bits;
     +					conn->allowed_sasl_mechs[i] = NULL;
     +				}
     +				else
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     -					goto duplicate;
     +				/* Update the method bitmask. */
     +				Assert(bits);
    - 
    --				conn->allowed_auth_methods |= bits;
    ++
     +				if (negated)
     +				{
     +					if ((conn->allowed_auth_methods & bits) == 0)
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     +				{
     +					if ((conn->allowed_auth_methods & bits) == bits)
     +						goto duplicate;
    -+
    + 
    +-				conn->allowed_auth_methods |= bits;
     +					conn->allowed_auth_methods |= bits;
     +				}
      			}
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      								 * the server? */
      	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
      										 * codes */
    -+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
    ++	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
     +													 * mechanisms */
      	bool		client_finished_auth;	/* have we finished our half of the
      										 * authentication exchange? */
    @@ src/test/authentication/t/001_password.pl: $node->connect_fails(
      
      # Unknown value defined in require_auth.
      $node->connect_fails(
    -@@ src/test/authentication/t/001_password.pl: $node->connect_fails(
    - $node->connect_fails(
    - 	"user=scram_role require_auth=!scram-sha-256",
    - 	"SCRAM authentication forbidden, fails with SCRAM auth",
    --	expected_stderr => qr/server requested SASL authentication/);
    -+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
    - $node->connect_fails(
    - 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
    - 	"multiple authentication types forbidden, fails with SCRAM auth",
    --	expected_stderr => qr/server requested SASL authentication/);
    -+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
    - 
    - # Test that bad passwords are rejected.
    - $ENV{"PGPASSWORD"} = 'badpass';
    -@@ src/test/authentication/t/001_password.pl: $node->connect_fails(
    - 	"user=scram_role require_auth=!scram-sha-256",
    - 	"password authentication forbidden, fails with SCRAM auth",
    - 	expected_stderr =>
    --	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
    -+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
    - );
    - $node->connect_fails(
    - 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
    - 	"multiple authentication types forbidden, fails with SCRAM auth",
    - 	expected_stderr =>
    --	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
    -+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
    - );
    - 
    - # Test SYSTEM_USER <> NULL with parallel workers.
    -
    - ## src/test/modules/oauth_validator/t/001_server.pl ##
    -@@ src/test/modules/oauth_validator/t/001_server.pl: $node->connect_fails(
    - 	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
    - );
    - 
    -+# Test require_auth settings against OAUTHBEARER.
    -+my @cases = (
    -+	{ require_auth => "oauth" },
    -+	{ require_auth => "oauth,scram-sha-256" },
    -+	{ require_auth => "password,oauth" },
    -+	{ require_auth => "none,oauth" },
    -+	{ require_auth => "!scram-sha-256" },
    -+	{ require_auth => "!none" },
    -+
    -+	{
    -+		require_auth => "!oauth",
    -+		failure => qr/server requested OAUTHBEARER authentication/
    -+	},
    -+	{
    -+		require_auth => "scram-sha-256",
    -+		failure => qr/server requested OAUTHBEARER authentication/
    -+	},
    -+	{
    -+		require_auth => "!password,!oauth",
    -+		failure => qr/server requested OAUTHBEARER authentication/
    -+	},
    -+	{
    -+		require_auth => "none",
    -+		failure => qr/server requested SASL authentication/
    -+	},
    -+	{
    -+		require_auth => "!oauth,!scram-sha-256",
    -+		failure => qr/server requested SASL authentication/
    -+	});
    -+
    -+$user = "test";
    -+foreach my $c (@cases)
    -+{
    -+	my $connstr =
    -+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
    -+
    -+	if (defined $c->{'failure'})
    -+	{
    -+		$node->connect_fails(
    -+			$connstr,
    -+			"require_auth=$c->{'require_auth'} fails",
    -+			expected_stderr => $c->{'failure'});
    -+	}
    -+	else
    -+	{
    -+		$node->connect_ok(
    -+			$connstr,
    -+			"require_auth=$c->{'require_auth'} succeeds",
    -+			expected_stderr =>
    -+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
    -+		);
    -+	}
    -+}
    -+
    - # Make sure the client_id and secret are correctly encoded. $vschars contains
    - # every allowed character for a client_id/_secret (the "VSCHAR" class).
    - # $vschars_esc is additionally backslash-escaped for inclusion in a
    -@@ src/test/modules/oauth_validator/t/001_server.pl: my $vschars_esc =
    -   " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
    - 
    - $node->connect_ok(
    --	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
    -+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
    - 	"escapable characters: client_id",
    - 	expected_stderr =>
    --	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    -+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
    - $node->connect_ok(
    --	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
    -+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
    - 	"escapable characters: client_id and secret",
    - 	expected_stderr =>
    --	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    -+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
    - 
    - #
    - # Further tests rely on support for specific behaviors in oauth_server.py. To
-:  ----------- > 3:  f88f98df97d libpq: handle asynchronous actions during SASL
1:  7ee8628abac ! 4:  d96712cda1d Add OAUTHBEARER SASL mechanism
    @@ .cirrus.tasks.yml: task:
      
        # NB: Intentionally build without -Dllvm. The freebsd image size is already
        # large enough to make VM startup slow, and even without llvm freebsd
    -@@ .cirrus.tasks.yml: task:
    -         --buildtype=debug \
    -         -Dcassert=true -Dinjection_points=true \
    -         -Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
    -+        -Dlibcurl=enabled \
    -         -Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
    -         build
    -     EOF
     @@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        --with-gssapi
        --with-icu
    @@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
        --with-libxml
        --with-libxslt
        --with-llvm
    -@@ .cirrus.tasks.yml: LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
    -   --with-zstd
    - 
    - LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
    -+  -Dlibcurl=enabled
    -   -Dllvm=enabled
    -   -Duuid=e2fs
    - 
     @@ .cirrus.tasks.yml: task:
          EOF
      
    @@ .cirrus.tasks.yml: task:
        ###
        # Test that code can be built with gcc/clang without warnings
     
    - ## config/programs.m4 ##
    -@@ config/programs.m4: if test "$pgac_cv_ldap_safe" != yes; then
    - *** also uses LDAP will crash on exit.])
    - fi])
    - 
    --
    --
    - # PGAC_CHECK_READLINE
    - # -------------------
    - # Check for the readline library and dependent libraries, either
    -
      ## configure ##
     @@ configure: XML2_LIBS
      XML2_CFLAGS
    @@ configure: Optional Packages:
                                prefer BSD Libedit over GNU Readline
        --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
        --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
    -+  --with-libcurl          build with libcurl support for OAuth client flows
    ++  --with-libcurl          build with libcurl support
        --with-libxml           build with XML support
        --with-libxslt          use XSLT support when building contrib/xml2
        --with-system-tzdata=DIR
    @@ configure: fi
     +#
     +# libcurl
     +#
    -+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support for OAuth client flows" >&5
    -+$as_echo_n "checking whether to build with libcurl support for OAuth client flows... " >&6; }
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
    ++$as_echo_n "checking whether to build with libcurl support... " >&6; }
     +
     +
     +
    @@ configure.ac: fi
     +#
     +# libcurl
     +#
    -+AC_MSG_CHECKING([whether to build with libcurl support for OAuth client flows])
    -+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support for OAuth client flows],
    -+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support for OAuth client flows. (--with-libcurl)])])
    ++AC_MSG_CHECKING([whether to build with libcurl support])
    ++PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
    ++              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
     +AC_MSG_RESULT([$with_libcurl])
     +AC_SUBST(with_libcurl)
     +
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +     </varlistentry>
     +
     +     <varlistentry>
    -+      <term>Issuer</term>
    ++      <term id="auth-oauth-issuer">Issuer</term>
     +      <listitem>
     +       <para>
     +        An identifier for an authorization server, printed as an
    @@ doc/src/sgml/client-auth.sgml: host ... radius radiusservers="server1,server2" r
     +      <term><literal>issuer</literal></term>
     +      <listitem>
     +       <para>
    -+        The issuer identifier of the authorization server, as defined by its
    -+        discovery document, or a well-known URI pointing to that discovery
    -+        document. This parameter is required.
    ++        An HTTPS URL which is either the exact
    ++        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
    ++        authorization server, as defined by its discovery document, or a
    ++        well-known URI that points directly to that discovery document. This
    ++        parameter is required.
     +       </para>
     +       <para>
    -+        When an OAuth client connects to the server, a discovery document URI
    -+        will be constructed using the issuer identifier. By default, the URI
    -+        uses the conventions of OpenID Connect Discovery: the path
    ++        When an OAuth client connects to the server, a URL for the discovery
    ++        document will be constructed using the issuer identifier. By default,
    ++        this URL uses the conventions of OpenID Connect Discovery: the path
     +        <literal>/.well-known/openid-configuration</literal> will be appended
    -+        to the issuer identifier. Alternatively, if the
    ++        to the end of the issuer identifier. Alternatively, if the
     +        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
    -+        path segment, the URI will be provided to the client as-is.
    ++        path segment, that URL will be provided to the client as-is.
     +       </para>
     +       <warning>
     +        <para>
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +      <term><literal>oauth_issuer</literal></term>
     +      <listitem>
     +       <para>
    -+        The HTTPS URL of an issuer to contact if the server requests an OAuth
    -+        token for the connection. This parameter is required for all OAuth
    ++        The HTTPS URL of a trusted issuer to contact if the server requests an
    ++        OAuth token for the connection. This parameter is required for all OAuth
     +        connections; it should exactly match the <literal>issuer</literal>
     +        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
     +       </para>
     +       <para>
     +        As part of the standard authentication handshake, <application>libpq</application>
    -+        will ask the server for a <emphasis>discovery document:</emphasis> a URI
    ++        will ask the server for a <emphasis>discovery document:</emphasis> a URL
     +        providing a set of OAuth configuration parameters. The server must
    -+        provide a URI that is directly constructed from the components of the
    ++        provide a URL that is directly constructed from the components of the
     +        <literal>oauth_issuer</literal>, and this value must exactly match the
     +        issuer identifier that is declared in the discovery document itself, or
     +        the connection will fail. This is required to prevent a class of "mix-up
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +       <para>
     +        This standard handshake requires two separate network connections to the
     +        server per authentication attempt. To skip asking the server for a
    -+        discovery document URI, you may set <literal>oauth_issuer</literal> to a
    ++        discovery document URL, you may set <literal>oauth_issuer</literal> to a
     +        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
     +        case, it is recommended that
     +        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
     +        </itemizedlist>
     +       </para>
    ++       <warning>
    ++        <para>
    ++         Issuers are highly privileged during the OAuth connection handshake. As
    ++         a rule of thumb, if you would not trust the operator of a URL to handle
    ++         access to your servers, or to impersonate you directly, that URL should
    ++         not be trusted as an <literal>oauth_issuer</literal>.
    ++        </para>
    ++       </warning>
     +      </listitem>
     +     </varlistentry>
     +
    @@ doc/src/sgml/libpq.sgml: void PQinitSSL(int do_ssl);
     +typedef struct _PGoauthBearerRequest
     +{
     +    /* Hook inputs (constant across all calls) */
    -+    const char *const openid_configuration; /* OIDC discovery URI */
    ++    const char *const openid_configuration; /* OIDC discovery URL */
     +    const char *const scope;                /* required scope(s), or NULL */
     +
     +    /* Hook outputs */
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +  <sect2 id="oauth-validator-design-guidelines">
     +   <title>General Coding Guidelines</title>
     +   <para>
    -+    TODO
    ++    Developers should keep the following in mind when implementing token
    ++    validation:
     +   </para>
     +   <variablelist>
     +    <varlistentry>
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
     +       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
     +       The same should be done during any long-running loops. Failure to follow
    -+       this guidance may result in hung sessions.
    ++       this guidance may result in unresponsive backend sessions.
     +      </para>
     +     </listitem>
     +    </varlistentry>
    @@ doc/src/sgml/oauth-validators.sgml (new)
     +     <listitem>
     +      <para>
     +       The breadth of testing an OAuth system is well beyond the scope of this
    -+       documentation, but note that implementers should consider negative
    -+       testing to be mandatory. It's trivial to design a module that lets
    -+       authorized users in; the whole point of the system is to keep
    -+       unauthorized users out.
    ++       documentation, but at minimum, negative testing should be considered
    ++       mandatory. It's trivial to design a module that lets authorized users in;
    ++       the whole point of the system is to keep unauthorized users out.
    ++      </para>
    ++     </listitem>
    ++    </varlistentry>
    ++    <varlistentry>
    ++     <term>Documentation</term>
    ++     <listitem>
    ++      <para>
    ++       Validator implementations should document the contents and format of the
    ++       authenticated ID that is reported to the server for each end user, since
    ++       DBAs may need to use this information to construct pg_ident maps. (For
    ++       instance, is it an email address? an organizational ID number? a UUID?)
    ++       They should also document whether or not it is safe to use the module in
    ++       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
    ++       configuration is required in order to do so.
     +      </para>
     +     </listitem>
     +    </varlistentry>
    @@ meson_options.txt: option('icu', type: 'feature', value: 'auto',
        description: 'LDAP support')
      
     +option('libcurl', type : 'feature', value: 'auto',
    -+  description: 'libcurl support for OAuth client flows')
    ++  description: 'libcurl support')
     +
      option('libedit_preferred', type: 'boolean', value: false,
        description: 'Prefer BSD Libedit over GNU Readline')
    @@ src/backend/libpq/auth-oauth.c (new)
     +};
     +
     +/* Valid states for the oauth_exchange() machine. */
    -+typedef enum
    ++enum oauth_state
     +{
     +	OAUTH_STATE_INIT = 0,
     +	OAUTH_STATE_ERROR,
     +	OAUTH_STATE_FINISHED,
    -+} oauth_state;
    ++};
     +
     +/* Mechanism callback state. */
     +struct oauth_ctx
     +{
    -+	oauth_state state;
    ++	enum oauth_state state;
     +	Port	   *port;
     +	const char *issuer;
     +	const char *scope;
    @@ src/backend/libpq/auth-oauth.c (new)
     +										  token, port->user_name);
     +	if (ret == NULL)
     +	{
    -+		ereport(LOG, errmsg("Internal error in OAuth validator module"));
    ++		ereport(LOG, errmsg("internal error in OAuth validator module"));
     +		return false;
     +	}
     +
    @@ src/backend/libpq/auth.c
      
      
      /*----------------------------------------------------------------
    -@@ src/backend/libpq/auth.c: static int	CheckRADIUSAuth(Port *port);
    - static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
    - 
    - 
    --/*
    -- * Maximum accepted size of GSS and SSPI authentication tokens.
    -- * We also use this as a limit on ordinary password packet lengths.
    -- *
    -- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
    -- * domain controllers include an authorization field known as the Privilege
    -- * Attribute Certificate (PAC), which contains the user's Windows permissions
    -- * (group memberships etc.). The PAC is copied into all tickets obtained on
    -- * the basis of this TGT (even those issued by Unix realms which the Windows
    -- * realm trusts), and can be several kB in size. The maximum token size
    -- * accepted by Windows systems is determined by the MaxAuthToken Windows
    -- * registry setting. Microsoft recommends that it is not set higher than
    -- * 65535 bytes, so that seems like a reasonable limit for us as well.
    -- */
    --#define PG_MAX_AUTH_TOKEN_LENGTH	65535
    --
    - /*----------------------------------------------------------------
    -  * Global authentication functions
    -  *----------------------------------------------------------------
     @@ src/backend/libpq/auth.c: auth_failed(Port *port, int status, const char *logdetail)
      		case uaRADIUS:
      			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
    @@ src/backend/libpq/hba.c: parse_hba_auth_opt(char *name, char *val, HbaLine *hbal
     
      ## src/backend/libpq/meson.build ##
     @@
    - # Copyright (c) 2022-2024, PostgreSQL Global Development Group
    + # Copyright (c) 2022-2025, PostgreSQL Global Development Group
      
      backend_sources += files(
     +  'auth-oauth.c',
    @@ src/backend/libpq/pg_hba.conf.sample
      #
      # OPTIONS are a set of options for the authentication in the format
     
    + ## src/backend/utils/adt/hbafuncs.c ##
    +@@ src/backend/utils/adt/hbafuncs.c: get_hba_options(HbaLine *hba)
    + 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
    + 	}
    + 
    ++	if (hba->auth_method == uaOAuth)
    ++	{
    ++		if (hba->oauth_issuer)
    ++			options[noptions++] =
    ++				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
    ++
    ++		if (hba->oauth_scope)
    ++			options[noptions++] =
    ++				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
    ++
    ++		if (hba->oauth_validator)
    ++			options[noptions++] =
    ++				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
    ++
    ++		if (hba->oauth_skip_usermap)
    ++			options[noptions++] =
    ++				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
    ++	}
    ++
    + 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
    + 	Assert(noptions <= MAX_HBA_OPTIONS);
    + 
    +
      ## src/backend/utils/misc/guc_tables.c ##
     @@
      #include "jit/jit.h"
    @@ src/include/common/oauth-common.h (new)
     +#endif							/* OAUTH_COMMON_H */
     
      ## src/include/libpq/auth.h ##
    -@@
    - 
    - #include "libpq/libpq-be.h"
    - 
    -+/*
    -+ * Maximum accepted size of GSS and SSPI authentication tokens.
    -+ * We also use this as a limit on ordinary password packet lengths.
    -+ *
    -+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
    -+ * domain controllers include an authorization field known as the Privilege
    -+ * Attribute Certificate (PAC), which contains the user's Windows permissions
    -+ * (group memberships etc.). The PAC is copied into all tickets obtained on
    -+ * the basis of this TGT (even those issued by Unix realms which the Windows
    -+ * realm trusts), and can be several kB in size. The maximum token size
    -+ * accepted by Windows systems is determined by the MaxAuthToken Windows
    -+ * registry setting. Microsoft recommends that it is not set higher than
    -+ * 65535 bytes, so that seems like a reasonable limit for us as well.
    -+ */
    -+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
    -+
    - extern PGDLLIMPORT char *pg_krb_server_keyfile;
    - extern PGDLLIMPORT bool pg_krb_caseins_users;
    - extern PGDLLIMPORT bool pg_gss_accept_delegation;
     @@ src/include/libpq/auth.h: extern PGDLLIMPORT bool pg_gss_accept_delegation;
      extern void ClientAuthentication(Port *port);
      extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
    @@ src/include/pg_config.h.in
      /* Define to 1 to build with LDAP support. (--with-ldap) */
      #undef USE_LDAP
      
    -+/* Define to 1 to build with libcurl support for OAuth client flows.
    -+   (--with-libcurl) */
    ++/* Define to 1 to build with libcurl support. (--with-libcurl) */
     +#undef USE_LIBCURL
     +
      /* Define to 1 to build with XML support. (--with-libxml) */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +};
     +
     +/*
    -+ * Frees the async_ctx, which is stored directly on the PGconn. This is called
    -+ * during pqDropConnection() so that we don't leak resources even if
    -+ * PQconnectPoll() never calls us back.
    -+ *
    -+ * TODO: we should probably call this at the end of a successful authentication,
    -+ * too, to proactively free up resources.
    ++ * Tears down the Curl handles and frees the async_ctx.
     + */
     +static void
    -+free_curl_async_ctx(PGconn *conn, void *ctx)
    ++free_async_ctx(PGconn *conn, struct async_ctx *actx)
     +{
    -+	struct async_ctx *actx = ctx;
    -+
    -+	Assert(actx);				/* oauth_free() shouldn't call us otherwise */
    -+
     +	/*
     +	 * TODO: in general, none of the error cases below should ever happen if
     +	 * we have no bugs above. But if we do hit them, surfacing those errors
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * Release resources used for the asynchronous exchange and disconnect the
    ++ * altsock.
    ++ *
    ++ * This is called either at the end of a successful authentication, or during
    ++ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
    ++ * calls us back.
    ++ */
    ++void
    ++pg_fe_cleanup_oauth_flow(PGconn *conn)
    ++{
    ++	fe_oauth_state *state = conn->sasl_state;
    ++
    ++	if (state->async_ctx)
    ++	{
    ++		free_async_ctx(conn, state->async_ctx);
    ++		state->async_ctx = NULL;
    ++	}
    ++
    ++	conn->altsock = PGINVALID_SOCKET;
    ++}
    ++
    ++/*
     + * Macros for manipulating actx->errbuf. actx_error() translates and formats a
     + * string for you; actx_error_str() appends a string directly without
     + * translation.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +#endif
     +
    -+	actx_error(actx, "here's a nickel kid, get yourself a better computer");
    ++	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
     +	return false;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * provider.
     + */
     +static PostgresPollingStatusType
    -+pg_fe_run_oauth_flow_impl(PGconn *conn, pgsocket *altsock)
    ++pg_fe_run_oauth_flow_impl(PGconn *conn)
     +{
     +	fe_oauth_state *state = conn->sasl_state;
     +	struct async_ctx *actx;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		actx->debugging = oauth_unsafe_debugging_enabled();
     +
     +		state->async_ctx = actx;
    -+		state->free_async_ctx = free_curl_async_ctx;
     +
     +		initPQExpBuffer(&actx->work_data);
     +		initPQExpBuffer(&actx->errbuf);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	do
     +	{
     +		/* By default, the multiplexer is the altsock. Reassign as desired. */
    -+		*altsock = actx->mux;
    ++		conn->altsock = actx->mux;
     +
     +		switch (actx->step)
     +		{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				 * the client wait directly on the timerfd rather than the
     +				 * multiplexer. (This isn't possible for kqueue.)
     +				 */
    -+				*altsock = actx->timerfd;
    ++				conn->altsock = actx->timerfd;
     +#endif
     +
     +				actx->step = OAUTH_STEP_WAIT_INTERVAL;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * wrapper logic before handing off to the true implementation, above.
     + */
     +PostgresPollingStatusType
    -+pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock)
    ++pg_fe_run_oauth_flow(PGconn *conn)
     +{
     +	PostgresPollingStatusType result;
     +#ifndef WIN32
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
     +#endif
     +
    -+	result = pg_fe_run_oauth_flow_impl(conn, altsock);
    ++	result = pg_fe_run_oauth_flow_impl(conn);
     +
     +#ifndef WIN32
     +	if (masked)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +/*
     + * Frees the state allocated by oauth_init().
    ++ *
    ++ * This handles only mechanism state tied to the connection lifetime; state
    ++ * stored in state->async_ctx is freed up either immediately after the
    ++ * authentication handshake succeeds, or before the mechanism is cleaned up on
    ++ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
     + */
     +static void
     +oauth_free(void *opaq)
     +{
     +	fe_oauth_state *state = opaq;
     +
    ++	/* Any async authentication state should have been cleaned up already. */
    ++	Assert(!state->async_ctx);
    ++
     +	free(state->token);
    -+	if (state->async_ctx)
    -+		state->free_async_ctx(state->conn, state->async_ctx);
    -+
     +	free(state);
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	}
     +
     +	/*
    -+	 * Find the start of the .well-known prefix. IETF rules state this must be
    -+	 * at the beginning of the path component, but OIDC defined it at the end
    -+	 * instead, so we have to search for it anywhere.
    ++	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
    ++	 * this must be at the beginning of the path component, but OIDC defined
    ++	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
    ++	 * search for it anywhere.
     +	 */
     +	wk_start = strstr(authority_start, WK_PREFIX);
     +	if (!wk_start)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	{
     +		char	   *discovery_issuer;
     +
    -+		/* The URI must correspond to our existing issuer, to avoid mix-ups. */
    ++		/*
    ++		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
    ++		 *
    ++		 * Issuer comparison is done byte-wise, rather than performing any URL
    ++		 * normalization; this follows the suggestions for issuer comparison
    ++		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
    ++		 * vastly simplifies things. Since this is the key protection against
    ++		 * a rogue server sending the client to an untrustworthy location,
    ++		 * simpler is better.
    ++		 */
     +		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
     +		if (!discovery_issuer)
     +			goto cleanup;		/* error message already set */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     + * statuses for use by PQconnectPoll().
     + */
     +static PostgresPollingStatusType
    -+run_user_oauth_flow(PGconn *conn, pgsocket *altsock)
    ++run_user_oauth_flow(PGconn *conn)
     +{
     +	fe_oauth_state *state = conn->sasl_state;
     +	PGoauthBearerRequest *request = state->async_ctx;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		return PGRES_POLLING_FAILED;
     +	}
     +
    -+	status = request->async(conn, request, altsock);
    ++	status = request->async(conn, request, &conn->altsock);
     +	if (status == PGRES_POLLING_FAILED)
     +	{
     +		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +}
     +
     +/*
    -+ * Cleanup callback for the user flow. Delegates most of its job to the
    -+ * user-provided cleanup implementation.
    ++ * Cleanup callback for the async user flow. Delegates most of its job to the
    ++ * user-provided cleanup implementation, then disconnects the altsock.
     + */
     +static void
    -+free_request(PGconn *conn, void *vreq)
    ++cleanup_user_oauth_flow(PGconn *conn)
     +{
    -+	PGoauthBearerRequest *request = vreq;
    ++	fe_oauth_state *state = conn->sasl_state;
    ++	PGoauthBearerRequest *request = state->async_ctx;
    ++
    ++	Assert(request);
     +
     +	if (request->cleanup)
     +		request->cleanup(conn, request);
    ++	conn->altsock = PGINVALID_SOCKET;
     +
     +	free(request);
    ++	state->async_ctx = NULL;
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		memcpy(request_copy, &request, sizeof(request));
     +
     +		conn->async_auth = run_user_oauth_flow;
    ++		conn->cleanup_async_auth = cleanup_user_oauth_flow;
     +		state->async_ctx = request_copy;
    -+		state->free_async_ctx = free_request;
     +	}
     +	else if (res < 0)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		 * caching at the moment. (Custom flows might be more sophisticated.)
     +		 */
     +		conn->async_auth = pg_fe_run_oauth_flow;
    ++		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
     +		conn->oauth_want_retry = PG_BOOL_NO;
     +
     +#else
    -+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built using --with-libcurl");
    ++		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
     +		goto fail;
     +
     +#endif
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +	char	   *token;
     +
     +	void	   *async_ctx;
    -+	void		(*free_async_ctx) (PGconn *conn, void *ctx);
     +} fe_oauth_state;
     +
    -+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn, pgsocket *altsock);
    ++extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
    ++extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
     +extern bool oauth_unsafe_debugging_enabled(void);
     +
     +/* Mechanisms in fe-auth-oauth.c */
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +
     +#endif							/* FE_AUTH_OAUTH_H */
     
    - ## src/interfaces/libpq/fe-auth-sasl.h ##
    -@@ src/interfaces/libpq/fe-auth-sasl.h: typedef enum
    - 	SASL_COMPLETE = 0,
    - 	SASL_FAILED,
    - 	SASL_CONTINUE,
    -+	SASL_ASYNC,
    - } SASLStatus;
    - 
    - /*
    -@@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
    - 	 *
    - 	 *	state:	   The opaque mechanism state returned by init()
    - 	 *
    -+	 *	final:	   true if the server has sent a final exchange outcome
    -+	 *
    - 	 *	input:	   The challenge data sent by the server, or NULL when
    - 	 *			   generating a client-first initial response (that is, when
    - 	 *			   the server expects the client to send a message to start
    -@@ src/interfaces/libpq/fe-auth-sasl.h: typedef struct pg_fe_sasl_mech
    - 	 *
    - 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
    - 	 *					Additional server challenge is expected
    -+	 *	SASL_ASYNC:		Some asynchronous processing external to the
    -+	 *					connection needs to be done before a response can be
    -+	 *					generated. The mechanism is responsible for setting up
    -+	 *					conn->async_auth appropriately before returning.
    - 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
    - 	 *	SASL_FAILED:	The exchange has failed and the connection should be
    - 	 *					dropped.
    - 	 *--------
    - 	 */
    --	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
    -+	SASLStatus	(*exchange) (void *state, bool final,
    -+							 char *input, int inputlen,
    - 							 char **output, int *outputlen);
    - 
    - 	/*--------
    -
    - ## src/interfaces/libpq/fe-auth-scram.c ##
    -@@
    - /* The exported SCRAM callback mechanism. */
    - static void *scram_init(PGconn *conn, const char *password,
    - 						const char *sasl_mechanism);
    --static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
    -+static SASLStatus scram_exchange(void *opaq, bool final,
    -+								 char *input, int inputlen,
    - 								 char **output, int *outputlen);
    - static bool scram_channel_bound(void *opaq);
    - static void scram_free(void *opaq);
    -@@ src/interfaces/libpq/fe-auth-scram.c: scram_free(void *opaq)
    -  * Exchange a SCRAM message with backend.
    -  */
    - static SASLStatus
    --scram_exchange(void *opaq, char *input, int inputlen,
    -+scram_exchange(void *opaq, bool final,
    -+			   char *input, int inputlen,
    - 			   char **output, int *outputlen)
    - {
    - 	fe_scram_state *state = (fe_scram_state *) opaq;
    -
      ## src/interfaces/libpq/fe-auth.c ##
     @@
      #endif
    @@ src/interfaces/libpq/fe-auth.c
      #include "libpq-fe.h"
      
      #ifdef ENABLE_GSS
    -@@ src/interfaces/libpq/fe-auth.c: pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
    -  * Initialize SASL authentication exchange.
    -  */
    - static int
    --pg_SASL_init(PGconn *conn, int payloadlen)
    -+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
    - {
    - 	char	   *initialresponse = NULL;
    - 	int			initialresponselen;
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    - 		goto error;
    - 	}
    - 
    --	if (conn->sasl_state)
    -+	if (conn->sasl_state && !conn->async_auth)
    - 	{
    - 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
    - 		goto error;
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    +@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
      			conn->sasl = &pg_scram_mech;
      			conn->password_needed = true;
      		}
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      	}
      
      	if (!selected_mechanism)
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
    - 
    - 	Assert(conn->sasl);
    - 
    --	/*
    --	 * Initialize the SASL state information with all the information gathered
    --	 * during the initial exchange.
    --	 *
    --	 * Note: Only tls-unique is supported for the moment.
    --	 */
    --	conn->sasl_state = conn->sasl->init(conn,
    --										password,
    --										selected_mechanism);
    - 	if (!conn->sasl_state)
    --		goto oom_error;
    -+	{
    -+		/*
    -+		 * Initialize the SASL state information with all the information
    -+		 * gathered during the initial exchange.
    -+		 *
    -+		 * Note: Only tls-unique is supported for the moment.
    -+		 */
    -+		conn->sasl_state = conn->sasl->init(conn,
    -+											password,
    -+											selected_mechanism);
    -+		if (!conn->sasl_state)
    -+			goto oom_error;
    -+	}
    -+	else
    -+	{
    -+		/*
    -+		 * This is only possible if we're returning from an async loop.
    -+		 * Disconnect it now.
    -+		 */
    -+		Assert(conn->async_auth);
    -+		conn->async_auth = NULL;
    -+	}
    - 
    - 	/* Get the mechanism-specific Initial Client Response, if any */
    --	status = conn->sasl->exchange(conn->sasl_state,
    -+	status = conn->sasl->exchange(conn->sasl_state, false,
    - 								  NULL, -1,
    - 								  &initialresponse, &initialresponselen);
    - 
    - 	if (status == SASL_FAILED)
    - 		goto error;
    - 
    -+	if (status == SASL_ASYNC)
    -+	{
    -+		/*
    -+		 * The mechanism should have set up the necessary callbacks; all we
    -+		 * need to do is signal the caller.
    -+		 */
    -+		*async = true;
    -+		return STATUS_OK;
    -+	}
    -+
    - 	/*
    - 	 * Build a SASLInitialResponse message, and send it.
    - 	 */
    -@@ src/interfaces/libpq/fe-auth.c: oom_error:
    -  * the protocol.
    -  */
    - static int
    --pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
    -+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
    - {
    - 	char	   *output;
    - 	int			outputlen;
    -@@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
    - 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
    - 	challenge[payloadlen] = '\0';
    - 
    --	status = conn->sasl->exchange(conn->sasl_state,
    -+	status = conn->sasl->exchange(conn->sasl_state, final,
    - 								  challenge, payloadlen,
    - 								  &output, &outputlen);
    - 	free(challenge);			/* don't need the input anymore */
    - 
    -+	if (status == SASL_ASYNC)
    -+	{
    -+		/*
    -+		 * The mechanism should have set up the necessary callbacks; all we
    -+		 * need to do is signal the caller.
    -+		 */
    -+		*async = true;
    -+		return STATUS_OK;
    -+	}
    -+
    - 	if (final && status == SASL_CONTINUE)
    - 	{
    - 		if (outputlen != 0)
    -@@ src/interfaces/libpq/fe-auth.c: check_expected_areq(AuthRequest areq, PGconn *conn)
    -  * it. We are responsible for reading any remaining extra data, specific
    -  * to the authentication method. 'payloadlen' is the remaining length in
    -  * the message.
    -+ *
    -+ * If *async is set to true on return, the client doesn't yet have enough
    -+ * information to respond, and the caller must temporarily switch to
    -+ * conn->async_auth() to continue driving the exchange.
    -  */
    - int
    --pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
    -+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
    - {
    - 	int			oldmsglen;
    - 
    -+	*async = false;
    -+
    - 	if (!check_expected_areq(areq, conn))
    - 		return STATUS_ERROR;
    - 
    -@@ src/interfaces/libpq/fe-auth.c: pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
    - 			 * The request contains the name (as assigned by IANA) of the
    - 			 * authentication mechanism.
    - 			 */
    --			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
    -+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
    - 			{
    - 				/* pg_SASL_init already set the error message */
    - 				return STATUS_ERROR;
    -@@ src/interfaces/libpq/fe-auth.c: pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
    - 			}
    - 			oldmsglen = conn->errorMessage.len;
    - 			if (pg_SASL_continue(conn, payloadlen,
    --								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
    -+								 (areq == AUTH_REQ_SASL_FIN),
    -+								 async) != STATUS_OK)
    - 			{
    - 				/* Use this message if pg_SASL_continue didn't supply one */
    - 				if (conn->errorMessage.len == oldmsglen)
     @@ src/interfaces/libpq/fe-auth.c: PQchangePassword(PGconn *conn, const char *user, const char *passwd)
      		}
      	}
    @@ src/interfaces/libpq/fe-auth.h
     +
     +
      /* Prototypes for functions in fe-auth.c */
    --extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
    -+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
    -+						   bool *async);
    - extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
    - extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
    - 
    + extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
    + 						   bool *async);
     
      ## src/interfaces/libpq/fe-connect.c ##
     @@
    @@ src/interfaces/libpq/fe-connect.c: pqDropServerData(PGconn *conn)
      
      	/*
      	 * Cancel connections need to retain their be_pid and be_key across
    -@@ src/interfaces/libpq/fe-connect.c: PQconnectPoll(PGconn *conn)
    - 		case CONNECTION_NEEDED:
    - 		case CONNECTION_GSS_STARTUP:
    - 		case CONNECTION_CHECK_TARGET:
    -+		case CONNECTION_AUTHENTICATING:
    - 			break;
    - 
    - 		default:
    -@@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
    - 				int			avail;
    - 				AuthRequest areq;
    - 				int			res;
    -+				bool		async;
    - 
    - 				/*
    - 				 * Scan the message from current point (note that if we find
     @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
      					/* Check to see if we should mention pgpassfile */
      					pgpassfileWarning(conn);
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
      					CONNECTION_FAILED();
      				}
      				else if (beresp == PqMsg_NegotiateProtocolVersion)
    -@@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
    - 				 * Note that conn->pghost must be non-NULL if we are going to
    - 				 * avoid the Kerberos code doing a hostname look-up.
    - 				 */
    --				res = pg_fe_sendauth(areq, msgLength, conn);
    -+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
    -+
    -+				if (async && (res == STATUS_OK))
    -+				{
    -+					/*
    -+					 * We'll come back later once we're ready to respond.
    -+					 * Don't consume the request yet.
    -+					 */
    -+					conn->status = CONNECTION_AUTHENTICATING;
    -+					goto keep_going;
    -+				}
    - 
    - 				/*
    - 				 * OK, we have processed the message; mark data consumed.  We
    -@@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
    - 				goto keep_going;
    - 			}
    - 
    -+		case CONNECTION_AUTHENTICATING:
    -+			{
    -+				PostgresPollingStatusType status;
    -+				pgsocket	altsock = PGINVALID_SOCKET;
    -+
    -+				if (!conn->async_auth)
    -+				{
    -+					/* programmer error; should not happen */
    -+					libpq_append_conn_error(conn, "async authentication has no handler");
    -+					goto error_return;
    -+				}
    -+
    -+				status = conn->async_auth(conn, &altsock);
    -+
    -+				if (status == PGRES_POLLING_FAILED)
    -+					goto error_return;
    -+
    -+				if (status == PGRES_POLLING_OK)
    -+				{
    -+					/*
    -+					 * Reenter the authentication exchange with the server. We
    -+					 * didn't consume the message that started external
    -+					 * authentication, so it'll be reprocessed as if we just
    -+					 * received it.
    -+					 */
    -+					conn->status = CONNECTION_AWAITING_RESPONSE;
    -+					conn->altsock = PGINVALID_SOCKET;	/* TODO: what frees
    -+														 * this? */
    -+					goto keep_going;
    -+				}
    -+
    -+				conn->altsock = altsock;
    -+				return status;
    -+			}
    -+
    - 		case CONNECTION_AUTH_OK:
    - 			{
    - 				/*
    -@@ src/interfaces/libpq/fe-connect.c: pqMakeEmptyPGconn(void)
    - 	conn->verbosity = PQERRORS_DEFAULT;
    - 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
    - 	conn->sock = PGINVALID_SOCKET;
    -+	conn->altsock = PGINVALID_SOCKET;
    - 	conn->Pfdebug = NULL;
    - 
    - 	/*
     @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
      	free(conn->rowBuf);
      	free(conn->target_session_attrs);
    @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
      	termPQExpBuffer(&conn->errorMessage);
      	termPQExpBuffer(&conn->workBuffer);
      
    -@@ src/interfaces/libpq/fe-connect.c: PQsocket(const PGconn *conn)
    - {
    - 	if (!conn)
    - 		return -1;
    -+	if (conn->altsock != PGINVALID_SOCKET)
    -+		return conn->altsock;
    - 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
    - }
    - 
    -
    - ## src/interfaces/libpq/fe-misc.c ##
    -@@ src/interfaces/libpq/fe-misc.c: static int
    - pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
    - {
    - 	int			result;
    -+	pgsocket	sock;
    - 
    - 	if (!conn)
    - 		return -1;
    --	if (conn->sock == PGINVALID_SOCKET)
    -+
    -+	sock = (conn->altsock != PGINVALID_SOCKET) ? conn->altsock : conn->sock;
    -+	if (sock == PGINVALID_SOCKET)
    - 	{
    - 		libpq_append_conn_error(conn, "invalid socket");
    - 		return -1;
    -@@ src/interfaces/libpq/fe-misc.c: pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
    - 
    - 	/* We will retry as long as we get EINTR */
    - 	do
    --		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
    -+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
    - 	while (result < 0 && SOCK_ERRNO == EINTR);
    - 
    - 	if (result < 0)
     
      ## src/interfaces/libpq/libpq-fe.h ##
    -@@ src/interfaces/libpq/libpq-fe.h: extern "C"
    -  */
    - #include "postgres_ext.h"
    - 
    -+#ifdef WIN32
    -+#include <winsock2.h>			/* for SOCKET */
    -+#endif
    -+
    - /*
    -  * These symbols may be used in compile-time #ifdef tests for the availability
    -  * of v14-and-newer libpq features.
     @@ src/interfaces/libpq/libpq-fe.h: extern "C"
      /* Features added in PostgreSQL v18: */
      /* Indicates presence of PQfullProtocolVersion */
    @@ src/interfaces/libpq/libpq-fe.h: extern "C"
      
      /*
       * Option flags for PQcopyResult
    -@@ src/interfaces/libpq/libpq-fe.h: typedef enum
    - 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
    - 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
    - 								 * started.  */
    -+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
    -+								 * external system. */
    - } ConnStatusType;
    - 
    - typedef enum
     @@ src/interfaces/libpq/libpq-fe.h: typedef enum
      	PQ_PIPELINE_ABORTED
      } PGpipelineStatus;
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +} PGpromptOAuthDevice;
     +
     +/* for PGoauthBearerRequest.async() */
    -+#ifdef WIN32
    -+#define SOCKTYPE SOCKET
    ++#ifdef _WIN32
    ++#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
     +#else
     +#define SOCKTYPE int
     +#endif
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +	 * Callback implementing a custom asynchronous OAuth flow.
     +	 *
     +	 * The callback may return
    -+	 * - PGRES_POLLING_READING/WRITING, to indicate that a file descriptor
    ++	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
     +	 *   has been stored in *altsock and libpq should wait until it is
     +	 *   readable or writable before calling back;
     +	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
      	/* Optional file to write trace info to */
      	FILE	   *Pfdebug;
      	int			traceFlags;
    -@@ src/interfaces/libpq/libpq-int.h: struct pg_conn
    - 										 * know which auth response we're
    - 										 * sending */
    - 
    -+	/* Callback for external async authentication */
    -+	PostgresPollingStatusType (*async_auth) (PGconn *conn, pgsocket *altsock);
    -+	pgsocket	altsock;		/* alternative socket for client to poll */
    -+
    -+
    - 	/* Transient state needed while establishing connection */
    - 	PGTargetServerType target_server_type;	/* desired session properties */
    - 	PGLoadBalanceType load_balance_type;	/* desired load balancing
     
      ## src/interfaces/libpq/meson.build ##
     @@
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +
     +#include "postgres_fe.h"
     +
    -+#include <stdio.h>
    -+#include <stdlib.h>
    -+
    -+#ifdef WIN32
    -+#include <winsock2.h>
    -+#else
     +#include <sys/socket.h>
    -+#endif
     +
     +#include "getopt_long.h"
     +#include "libpq-fe.h"
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +static void
     +usage(char *argv[])
     +{
    -+	fprintf(stderr, "usage: %s [flags] CONNINFO\n\n", argv[0]);
    ++	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
     +
    -+	fprintf(stderr, "recognized flags:\n");
    -+	fprintf(stderr, " -h, --help				show this message\n");
    -+	fprintf(stderr, " --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
    -+	fprintf(stderr, " --expected-uri URI		fail if received configuration link does not match URI\n");
    -+	fprintf(stderr, " --no-hook					don't install OAuth hooks (connection will fail)\n");
    -+	fprintf(stderr, " --hang-forever			don't ever return a token (combine with connect_timeout)\n");
    -+	fprintf(stderr, " --token TOKEN				use the provided TOKEN value\n");
    ++	printf("recognized flags:\n");
    ++	printf(" -h, --help				show this message\n");
    ++	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
    ++	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
    ++	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
    ++	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
    ++	printf(" --token TOKEN			use the provided TOKEN value\n");
     +}
     +
     +/* --options */
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +
     +my $log_start = $node->wait_for_log(qr/reloading configuration files/);
     +
    ++# Check pg_hba_file_rules() support.
    ++my $contents = $bgconn->query_safe(
    ++	qq(SELECT rule_number, auth_method, options
    ++		 FROM pg_hba_file_rules
    ++		 ORDER BY rule_number;));
    ++is( $contents,
    ++	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
    ++2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
    ++3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
    ++	"pg_hba_file_rules recreates OAuth HBA settings");
     +
     +# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
     +# first, check to make sure the client refuses such connections by default.
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +		"fails without custom hook installed",
     +		flags => ["--no-hook"],
     +		expected_stderr =>
    -+		  qr/no custom OAuth flows are available, and libpq was not built using --with-libcurl/
    ++		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
     +	);
     +}
     +
    @@ src/tools/pgindent/typedefs.list: explain_get_index_name_hook_type
      fe_scram_state
      fe_scram_state_enum
      fetch_range_request
    -@@ src/tools/pgindent/typedefs.list: nsphash_hash
    - ntile_context
    - nullingrel_info
    - numeric
    -+oauth_state
    - object_access_hook_type
    - object_access_hook_type_str
    - off_t
-:  ----------- > 5:  18507c6978b squash! Add OAUTHBEARER SASL mechanism
-:  ----------- > 6:  8e82059700b XXX fix libcurl link error
3:  661de01c4ed = 7:  5339b3f2617 DO NOT MERGE: Add pytest suite for OAuth
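For intuition, the CONNECTION_AUTHENTICATING handling shown in the fe-connect.c hunk above (call conn->async_auth, wait on the altsock while it reports READING/WRITING, reenter the SASL exchange on OK) can be modeled as a tiny polling loop. This is only a toy sketch; all names below are illustrative and none of them are real libpq internals:

```c
#include <assert.h>

/* Toy stand-ins for PostgresPollingStatusType and conn->async_auth. */
typedef enum
{
	POLLING_FAILED,
	POLLING_READING,			/* caller should wait on the altsock */
	POLLING_WRITING,
	POLLING_OK					/* flow done; reenter the SASL exchange */
} ToyPollStatus;

typedef struct
{
	int			steps_left;		/* pretend external work remaining */
} ToyFlow;

/* Fake async_auth callback: "ready" once steps_left reaches zero. */
static ToyPollStatus
toy_async_auth(ToyFlow *flow)
{
	if (flow->steps_left > 0)
	{
		flow->steps_left--;
		return POLLING_READING;
	}
	return POLLING_OK;
}

/*
 * Drive the flow to completion, returning how many times the "application"
 * had to wait (where PQconnectPoll() would return to its caller), or -1 on
 * failure.
 */
static int
toy_drive(ToyFlow *flow)
{
	int			waits = 0;
	ToyPollStatus st;

	while ((st = toy_async_auth(flow)) != POLLING_OK)
	{
		if (st == POLLING_FAILED)
			return -1;
		waits++;
	}
	return waits;
}
```

In the real patch, each "wait" corresponds to PQconnectPoll() returning PGRES_POLLING_READING/WRITING with PQsocket() reporting the altsock instead of the server socket.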
v41-0001-Move-PG_MAX_AUTH_TOKEN_LENGTH-to-libpq-auth.h.patch (attachment)

From 386e7c4df31487b28758c76de503c71a05e6ef85 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v41 1/7] Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h

OAUTHBEARER would like to use this as a limit on Bearer token messages
coming from the client, so promote it to the header file.
---
 src/backend/libpq/auth.c | 16 ----------------
 src/include/libpq/auth.h | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 46facc275ef..d6ef32cc823 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 9157dbe6092..902c5f6de32 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
-- 
2.34.1
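
As a rough illustration of how the promoted constant might be applied to client-supplied Bearer tokens, per the commit message above (the helper name and shape here are hypothetical, not part of the patch):

```c
#include <stdbool.h>
#include <stddef.h>

/* Mirrors the limit promoted to libpq/auth.h in the patch above. */
#define PG_MAX_AUTH_TOKEN_LENGTH	65535

/*
 * Hypothetical helper: reject client auth messages (e.g. an OAUTHBEARER
 * response carrying a Bearer token) that exceed the shared limit, just as
 * the backend already does for GSS/SSPI tokens and password packets.
 */
static bool
auth_token_length_ok(size_t len)
{
	return len <= PG_MAX_AUTH_TOKEN_LENGTH;
}
```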

v41-0002-require_auth-prepare-for-multiple-SASL-mechanism.patch (attachment)
From b829f7a8ac759c58a763d76382cb957370d410b9 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 16 Dec 2024 13:57:14 -0800
Subject: [PATCH v41 2/7] require_auth: prepare for multiple SASL mechanisms

Prior to this patch, the require_auth implementation assumed that the
AuthenticationSASL protocol message was synonymous with SCRAM-SHA-256.
In preparation for the OAUTHBEARER SASL mechanism, split the
implementation into two tiers: the first checks the acceptable
AUTH_REQ_* codes, and the second checks acceptable mechanisms if
AUTH_REQ_SASL et al are permitted.

conn->allowed_sasl_mechs is the list of pointers to acceptable
mechanisms. (Since we'll support only a small number of mechanisms, this
is an array of static length to minimize bookkeeping.) pg_SASL_init()
will bail if the selected mechanism isn't contained in this array.

Since there's only one mechanism supported right now, one branch of the
second tier cannot be exercised yet (it's marked with Assert(false)).
This assertion will need to be removed when the next mechanism is added.
---
 src/interfaces/libpq/fe-auth.c            |  29 ++++
 src/interfaces/libpq/fe-connect.c         | 178 +++++++++++++++++++---
 src/interfaces/libpq/libpq-int.h          |   2 +
 src/test/authentication/t/001_password.pl |  10 ++
 4 files changed, 202 insertions(+), 17 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 14a9a862f51..722bb47ee14 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -543,6 +543,35 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
+	/* Make sure require_auth is satisfied. */
+	if (conn->require_auth)
+	{
+		bool		allowed = false;
+
+		for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		{
+			if (conn->sasl == conn->allowed_sasl_mechs[i])
+			{
+				allowed = true;
+				break;
+			}
+		}
+
+		if (!allowed)
+		{
+			/*
+			 * TODO: this is dead code until a second SASL mechanism is added;
+			 * the connection can't have proceeded past check_expected_areq()
+			 * if no SASL methods are allowed.
+			 */
+			Assert(false);
+
+			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
+									conn->require_auth, selected_mechanism);
+			goto error;
+		}
+	}
+
 	if (conn->channel_binding[0] == 'r' &&	/* require */
 		strcmp(selected_mechanism, SCRAM_SHA_256_PLUS_NAME) != 0)
 	{
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 8f211821eb2..6f262706b0a 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1110,6 +1110,56 @@ libpq_prng_init(PGconn *conn)
 	pg_prng_seed(&conn->prng_state, rseed);
 }
 
+/*
+ * Fills the connection's allowed_sasl_mechs list with all supported SASL
+ * mechanisms.
+ */
+static inline void
+fill_allowed_sasl_mechs(PGconn *conn)
+{
+	/*---
+	 * We only support one mechanism at the moment, so rather than deal with a
+	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
+	 * rely on the compile-time assertion here to keep us honest.
+	 *
+	 * To add a new mechanism to require_auth,
+	 * - update the length of conn->allowed_sasl_mechs,
+	 * - add the new pg_fe_sasl_mech pointer to this function, and
+	 * - handle the new mechanism name in the require_auth portion of
+	 *   pqConnectOptions2(), below.
+	 */
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
+
+	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+}
+
+/*
+ * Clears the connection's allowed_sasl_mechs list.
+ */
+static inline void
+clear_allowed_sasl_mechs(PGconn *conn)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		conn->allowed_sasl_mechs[i] = NULL;
+}
+
+/*
+ * Helper routine that searches the static allowed_sasl_mechs list for a
+ * specific mechanism.
+ */
+static inline int
+index_of_allowed_sasl_mech(PGconn *conn, const pg_fe_sasl_mech *mech)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+	{
+		if (conn->allowed_sasl_mechs[i] == mech)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  *		pqConnectOptions2
  *
@@ -1351,17 +1401,19 @@ pqConnectOptions2(PGconn *conn)
 		bool		negated = false;
 
 		/*
-		 * By default, start from an empty set of allowed options and add to
-		 * it.
+		 * By default, start from an empty set of allowed methods and
+		 * mechanisms, and add to it.
 		 */
 		conn->auth_required = true;
 		conn->allowed_auth_methods = 0;
+		clear_allowed_sasl_mechs(conn);
 
 		for (first = true, more = true; more; first = false)
 		{
 			char	   *method,
 					   *part;
-			uint32		bits;
+			uint32		bits = 0;
+			const pg_fe_sasl_mech *mech = NULL;
 
 			part = parse_comma_separated_list(&s, &more);
 			if (part == NULL)
@@ -1377,11 +1429,12 @@ pqConnectOptions2(PGconn *conn)
 				if (first)
 				{
 					/*
-					 * Switch to a permissive set of allowed options, and
-					 * subtract from it.
+					 * Switch to a permissive set of allowed methods and
+					 * mechanisms, and subtract from it.
 					 */
 					conn->auth_required = false;
 					conn->allowed_auth_methods = -1;
+					fill_allowed_sasl_mechs(conn);
 				}
 				else if (!negated)
 				{
@@ -1406,6 +1459,10 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
+			/*
+			 * First group: methods that can be handled solely with the
+			 * authentication request codes.
+			 */
 			if (strcmp(method, "password") == 0)
 			{
 				bits = (1 << AUTH_REQ_PASSWORD);
@@ -1424,13 +1481,22 @@ pqConnectOptions2(PGconn *conn)
 				bits = (1 << AUTH_REQ_SSPI);
 				bits |= (1 << AUTH_REQ_GSS_CONT);
 			}
+
+			/*
+			 * Next group: SASL mechanisms. All of these use the same request
+			 * codes, so the list of allowed mechanisms is tracked separately.
+			 *
+			 * fill_allowed_sasl_mechs() must be updated when adding a new
+			 * mechanism here!
+			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
-				/* This currently assumes that SCRAM is the only SASL method. */
-				bits = (1 << AUTH_REQ_SASL);
-				bits |= (1 << AUTH_REQ_SASL_CONT);
-				bits |= (1 << AUTH_REQ_SASL_FIN);
+				mech = &pg_scram_mech;
 			}
+
+			/*
+			 * Final group: meta-options.
+			 */
 			else if (strcmp(method, "none") == 0)
 			{
 				/*
@@ -1466,20 +1532,68 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
-			/* Update the bitmask. */
-			if (negated)
+			if (mech)
 			{
-				if ((conn->allowed_auth_methods & bits) == 0)
-					goto duplicate;
+				/*
+				 * Update the mechanism set only. The method bitmask will be
+				 * updated for SASL further down.
+				 */
+				Assert(!bits);
+
+				if (negated)
+				{
+					/* Remove the existing mechanism from the list. */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i < 0)
+						goto duplicate;
 
-				conn->allowed_auth_methods &= ~bits;
+					conn->allowed_sasl_mechs[i] = NULL;
+				}
+				else
+				{
+					/*
+					 * Find a space to put the new mechanism (after making
+					 * sure it's not already there).
+					 */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i >= 0)
+						goto duplicate;
+
+					i = index_of_allowed_sasl_mech(conn, NULL);
+					if (i < 0)
+					{
+						/* Should not happen; the pointer list is corrupted. */
+						Assert(false);
+
+						conn->status = CONNECTION_BAD;
+						libpq_append_conn_error(conn,
+												"internal error: no space in allowed_sasl_mechs");
+						free(part);
+						return false;
+					}
+
+					conn->allowed_sasl_mechs[i] = mech;
+				}
 			}
 			else
 			{
-				if ((conn->allowed_auth_methods & bits) == bits)
-					goto duplicate;
+				/* Update the method bitmask. */
+				Assert(bits);
+
+				if (negated)
+				{
+					if ((conn->allowed_auth_methods & bits) == 0)
+						goto duplicate;
+
+					conn->allowed_auth_methods &= ~bits;
+				}
+				else
+				{
+					if ((conn->allowed_auth_methods & bits) == bits)
+						goto duplicate;
 
-				conn->allowed_auth_methods |= bits;
+					conn->allowed_auth_methods |= bits;
+				}
 			}
 
 			free(part);
@@ -1498,6 +1612,36 @@ pqConnectOptions2(PGconn *conn)
 			free(part);
 			return false;
 		}
+
+		/*
+		 * Finally, allow SASL authentication requests if (and only if) we've
+		 * allowed any mechanisms.
+		 */
+		{
+			bool		allowed = false;
+			const uint32 sasl_bits =
+				(1 << AUTH_REQ_SASL)
+				| (1 << AUTH_REQ_SASL_CONT)
+				| (1 << AUTH_REQ_SASL_FIN);
+
+			for (i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+			{
+				if (conn->allowed_sasl_mechs[i])
+				{
+					allowed = true;
+					break;
+				}
+			}
+
+			/*
+			 * For the standard case, add the SASL bits to the (default-empty)
+			 * set if needed. For the negated case, remove them.
+			 */
+			if (!negated && allowed)
+				conn->allowed_auth_methods |= sasl_bits;
+			else if (negated && !allowed)
+				conn->allowed_auth_methods &= ~sasl_bits;
+		}
 	}
 
 	/*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4a5a7c8b5e3..d372276c486 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -501,6 +501,8 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
+	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 	char		current_auth_response;	/* used by pqTraceOutputMessage to
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 773238b76fd..1357f806b6f 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -277,6 +277,16 @@ $node->connect_fails(
 	"require_auth methods cannot be duplicated, !none case",
 	expected_stderr =>
 	  qr/require_auth method "!none" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=scram-sha-256,scram-sha-256",
+	"require_auth methods cannot be duplicated, scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "scram-sha-256" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=!scram-sha-256,!scram-sha-256",
+	"require_auth methods cannot be duplicated, !scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "!scram-sha-256" is specified more than once/);
 
 # Unknown value defined in require_auth.
 $node->connect_fails(
-- 
2.34.1

Attachment: v41-0003-libpq-handle-asynchronous-actions-during-SASL.patch (application/octet-stream)
From f88f98df97dc501623c6493278480c2c9f8a15da Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v41 3/7] libpq: handle asynchronous actions during SASL

This adds the ability for a SASL mechanism to signal to PQconnectPoll()
that some arbitrary work must be done, external to the Postgres
connection, before authentication can continue. The intent is for the
upcoming OAUTHBEARER mechanism to make use of this functionality.

To ensure that threads are not blocked waiting for the SASL mechanism to
make long-running calls, the mechanism communicates with the top-level
client via the "altsock": a file or socket descriptor, opaque to this
layer of libpq, which is signaled when work is ready to be done again.
This socket temporarily takes the place of the standard connection
descriptor, so PQsocket() clients should continue to operate correctly
using their existing polling implementations.

A mechanism should set an authentication callback (conn->async_auth())
and a cleanup callback (conn->cleanup_async_auth()), return SASL_ASYNC
during the exchange, and assign conn->altsock during the first call to
async_auth(). When the cleanup callback is called, either because
authentication has succeeded or because the connection is being
dropped, the altsock must be released and disconnected from the PGconn.
---
 src/interfaces/libpq/fe-auth-sasl.h  |  11 ++-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       | 110 +++++++++++++++++++--------
 src/interfaces/libpq/fe-auth.h       |   3 +-
 src/interfaces/libpq/fe-connect.c    |  73 +++++++++++++++++-
 src/interfaces/libpq/fe-misc.c       |  35 +++++----
 src/interfaces/libpq/libpq-fe.h      |   2 +
 src/interfaces/libpq/libpq-int.h     |   6 ++
 8 files changed, 197 insertions(+), 49 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index f0c62139092..f06f547c07d 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,18 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth/cleanup_async_auth appropriately
+	 *					before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 59bf87d2213..9001317c996 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -202,7 +203,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 722bb47ee14..597956a0d0b 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -430,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -607,26 +607,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -671,7 +693,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -701,11 +723,21 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -1013,12 +1045,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1176,7 +1214,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1185,23 +1223,33 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 
 		case AUTH_REQ_SASL_CONT:
 		case AUTH_REQ_SASL_FIN:
-			if (conn->sasl_state == NULL)
-			{
-				appendPQExpBufferStr(&conn->errorMessage,
-									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
-				return STATUS_ERROR;
-			}
-			oldmsglen = conn->errorMessage.len;
-			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
 			{
-				/* Use this message if pg_SASL_continue didn't supply one */
-				if (conn->errorMessage.len == oldmsglen)
+				bool		final = false;
+
+				if (conn->sasl_state == NULL)
+				{
 					appendPQExpBufferStr(&conn->errorMessage,
-										 "fe_sendauth: error in SASL authentication\n");
-				return STATUS_ERROR;
+										 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
+					return STATUS_ERROR;
+				}
+				oldmsglen = conn->errorMessage.len;
+
+				if (areq == AUTH_REQ_SASL_FIN)
+					final = true;
+
+				if (pg_SASL_continue(conn, payloadlen, final, async) != STATUS_OK)
+				{
+					/*
+					 * Append a generic error message unless pg_SASL_continue
+					 * did set a more specific one already.
+					 */
+					if (conn->errorMessage.len == oldmsglen)
+						appendPQExpBufferStr(&conn->errorMessage,
+											 "fe_sendauth: error in SASL authentication\n");
+					return STATUS_ERROR;
+				}
+				break;
 			}
-			break;
 
 		default:
 			libpq_append_conn_error(conn, "authentication method %u not supported", areq);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index df0a68b0b21..1d4991f8996 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -19,7 +19,8 @@
 
 
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 6f262706b0a..196e553bbef 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -494,6 +494,19 @@ pqDropConnection(PGconn *conn, bool flushInput)
 	conn->cmd_queue_recycle = NULL;
 
 	/* Free authentication/encryption state */
+	if (conn->cleanup_async_auth)
+	{
+		/*
+		 * Any in-progress async authentication should be torn down first so
+		 * that cleanup_async_auth() can depend on the other authentication
+		 * state if necessary.
+		 */
+		conn->cleanup_async_auth(conn);
+		conn->cleanup_async_auth = NULL;
+	}
+	conn->async_auth = NULL;
+	conn->altsock = PGINVALID_SOCKET;	/* cleanup_async_auth() should have
+										 * done this, but make sure. */
 #ifdef ENABLE_GSS
 	{
 		OM_uint32	min_s;
@@ -2790,6 +2803,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3825,6 +3839,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -4013,7 +4028,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -4050,6 +4075,49 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+
+				if (!conn->async_auth || !conn->cleanup_async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				/* Drive some external authentication work. */
+				status = conn->async_auth(conn);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/* Done. Tear down the async implementation. */
+					conn->cleanup_async_auth(conn);
+					conn->cleanup_async_auth = NULL;
+					Assert(conn->altsock == PGINVALID_SOCKET);
+
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+
+					goto keep_going;
+				}
+
+				/*
+				 * Caller needs to poll some more. conn->async_auth() should
+				 * have assigned an altsock to poll on.
+				 */
+				Assert(conn->altsock != PGINVALID_SOCKET);
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4731,6 +4799,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -7380,6 +7449,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 2c60eb5b569..d78445c70af 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1049,34 +1049,43 @@ pqWriteReady(PGconn *conn)
  * or both.  Returns >0 if one or more conditions are met, 0 if it timed
  * out, -1 if an error occurred.
  *
- * If SSL is in use, the SSL buffer is checked prior to checking the socket
- * for read data directly.
+ * If an altsock is set for asynchronous authentication, that will be used in
+ * preference to the "server" socket. Otherwise, if SSL is in use, the SSL
+ * buffer is checked prior to checking the socket for read data directly.
  */
 static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	if (conn->altsock != PGINVALID_SOCKET)
+		sock = conn->altsock;
+	else
 	{
-		libpq_append_conn_error(conn, "invalid socket");
-		return -1;
-	}
+		sock = conn->sock;
+		if (sock == PGINVALID_SOCKET)
+		{
+			libpq_append_conn_error(conn, "invalid socket");
+			return -1;
+		}
 
 #ifdef USE_SSL
-	/* Check for SSL library buffering read bytes */
-	if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
-	{
-		/* short-circuit the select */
-		return 1;
-	}
+		/* Check for SSL library buffering read bytes */
+		if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
+		{
+			/* short-circuit the select */
+			return 1;
+		}
 #endif
+	}
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index cce9ce60c55..a3491faf0c3 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -103,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index d372276c486..c7e92f9d49c 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -509,6 +509,12 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callbacks for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn);
+	void		(*cleanup_async_auth) (PGconn *conn);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
-- 
2.34.1

Attachment: v41-0004-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From d96712cda1d4931ed1ab1125ac80abd151b7a2c6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v41 4/7] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- handle cases where the client has been set up with an issuer and
  scope, but the Postgres server wants to use something different
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- figure out pgsocket/int difference on Windows
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 configure                                     |  213 ++
 configure.ac                                  |   32 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  371 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   23 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    6 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2541 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1026 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   46 +
 src/interfaces/libpq/fe-auth.c                |   29 +
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   38 +
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   10 +
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  221 ++
 .../modules/oauth_validator/t/001_server.pl   |  497 ++++
 .../modules/oauth_validator/t/002_client.pl   |  125 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 56 files changed, 7940 insertions(+), 17 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89d..8c518c317e7 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -219,6 +219,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -312,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index a0b5e10ca39..e6b329ad2fe 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,144 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12207,6 +12356,59 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
+fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13955,6 +14157,17 @@ fi
 
 done
 
+fi
+
+if test "$with_libcurl" = yes; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index d713360f340..b13fee83701 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,27 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1315,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1588,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_libcurl" = yes; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
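As an aside on the version gate above: `PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])` compares dotted version strings numerically. A minimal sketch of that comparison (an illustration only, not the pkg-config implementation; the function name is made up):

```python
def version_at_least(actual, minimum):
    """Compare dotted version strings component-by-component, roughly as
    pkg-config evaluates a constraint like 'libcurl >= 7.61.0'."""
    def parts(v):
        return [int(p) for p in v.split(".")]
    return parts(actual) >= parts(minimum)

# RHEL 8 ships curl 7.61.0, the oldest version the build accepts.
print(version_at_least("7.61.0", "7.61.0"))  # True
print(version_at_least("7.59.0", "7.61.0"))  # False
print(version_at_least("8.4.0", "7.61.0"))   # True
```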
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it's obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
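The issuer-to-discovery-URL rule documented for the HBA `issuer` setting above (use the value as-is if it already names a `/.well-known/` URI, otherwise append the OpenID Connect Discovery path) can be sketched as follows. This is only an illustration of the documented behavior, not the actual server or libpq code; the trailing-slash handling is an assumption of the sketch:

```python
def discovery_url(issuer):
    """Derive a discovery-document URL from an HBA 'issuer' setting.

    If the issuer already contains a /.well-known/ path segment, it is
    handed to the client unchanged; otherwise the OpenID Connect
    Discovery convention appends /.well-known/openid-configuration.
    """
    if "/.well-known/" in issuer:
        return issuer
    # Assumption of this sketch: normalize a trailing slash before appending.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

print(discovery_url("https://accounts.example.com"))
# https://accounts.example.com/.well-known/openid-configuration
print(discovery_url("https://example.com/.well-known/oauth-authorization-server"))
# returned unchanged
```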
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 3f41a17b1fe..745da5b7f4a 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index ebdb5b3bc2d..3fca2910dad 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1141,6 +1141,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2582,6 +2595,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 105b22b3171..aa92baf1fb3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2345,6 +2345,105 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        This standard handshake requires two separate network connections to the
+        server per authentication attempt. To skip asking the server for a
+        discovery document URL, you may set <literal>oauth_issuer</literal> to a
+        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
+        case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth">custom OAuth
+        hook</link> is installed to provide one), then this parameter must be
+        set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required is determined
+        by the OAuth provider; <quote>public</quote> clients generally do not
+        use a secret, whereas <quote>confidential</quote> clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -9992,6 +10091,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
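As an illustration, installing a chained hook that handles only one authdata type and delegates everything else might be sketched as follows. This is a sketch only: <function>show_login_dialog()</function> is a hypothetical application function, while the hook API is as described in this section.

```c
/*
 * Sketch of a chained authdata hook. show_login_dialog() is a
 * hypothetical application helper; the rest uses the libpq hook API.
 */
#include <libpq-fe.h>

static PQauthDataHook_type prev_hook;

static int
my_authdata_hook(PGauthData type, PGconn *conn, void *data)
{
    if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
    {
        const PGpromptOAuthDevice *prompt = data;

        /* Display the verification URI and user code in our own UI. */
        show_login_dialog(prompt->verification_uri, prompt->user_code);
        return 1;               /* handled successfully */
    }

    /* Not a request we handle: delegate to the previous hook. */
    return prev_hook(type, conn, data);
}

static void
install_authdata_hook(void)
{
    prev_hook = PQgetAuthDataHook();
    PQsetAuthDataHook(my_authdata_hook);
}
```

Saving the previous hook before installing the new one is what makes cooperative chaining possible; an implementation that simply returns 0 for unhandled types would instead disable the default behavior entirely.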
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the built-in device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         The <replaceable>async</replaceable> callback will be invoked to begin
+         the flow immediately upon return from the hook. When the callback
+         cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
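If a token for the current user/issuer/scope combination is already at hand (for example, in a cache), the hook can fill in the request synchronously, as in the following sketch. Here <function>lookup_cached_token()</function> is a hypothetical helper returning a malloc'd token or <literal>NULL</literal>, and <varname>prev_hook</varname> is assumed to have been saved from <function>PQgetAuthDataHook()</function> at installation time.

```c
/*
 * Sketch of a PQAUTHDATA_OAUTH_BEARER_TOKEN handler that supplies a
 * token without blocking. lookup_cached_token() is hypothetical.
 */
#include <stdlib.h>
#include <libpq-fe.h>

static PQauthDataHook_type prev_hook;   /* saved at install time */

static void
cleanup_token(PGconn *conn, struct _PGoauthBearerRequest *request)
{
    free(request->token);       /* release the malloc'd token */
    request->token = NULL;
}

static int
bearer_token_hook(PGauthData type, PGconn *conn, void *data)
{
    PGoauthBearerRequest *request = data;

    if (type != PQAUTHDATA_OAUTH_BEARER_TOKEN)
        return prev_hook(type, conn, data);     /* not ours: delegate */

    request->token = lookup_cached_token(request->openid_configuration,
                                         request->scope);
    if (request->token == NULL)
        return -1;              /* no token available: abandon the attempt */

    request->cleanup = cleanup_token;
    return 1;                   /* handled successfully */
}
```

An implementation that must contact the authorization server instead sets <structfield>async</structfield> and returns success without filling in <structfield>token</structfield>; the token is then provided when the asynchronous flow completes.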
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A <quote>dangerous debugging mode</quote> may be enabled by setting the
+    environment variable <envar>PGOAUTHDEBUG</envar> to
+    <literal>UNSAFE</literal>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    TODO
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps from a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not offer introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next, the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         <application>libpq</application> does not usually meet this bar, since
+         it's designed for use by public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
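In practice, the scope check described above often reduces to searching the token's space-separated scope list for an exact element match; substring matches (such as finding <literal>postgres</literal> inside <literal>postgres2</literal>) must not be accepted. A minimal sketch of such a check, where the required scope name is whatever the deployment chooses:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Return true if the space-separated OAuth scope list "scopes" contains
 * the scope "required" as an exact element. Prefix/substring matches
 * (e.g. finding "postgres" inside "postgres2") are rejected.
 */
static bool
scope_contains(const char *scopes, const char *required)
{
    size_t      rlen = strlen(required);
    const char *p = scopes;

    while (*p)
    {
        const char *end = strchr(p, ' ');
        size_t      len = end ? (size_t) (end - p) : strlen(p);

        if (len == rlen && strncmp(p, required, rlen) == 0)
            return true;
        if (end == NULL)
            break;
        p = end + 1;
    }
    return false;
}
```

A module would typically call this with the scope list extracted from the verified token (or returned by the introspection endpoint) and the scope configured for the HBA entry.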
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <literal>delegate_ident_mapping=1</literal> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
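Concretely, a module's entry point might be sketched as follows. The header path is an assumption, and the three callbacks are presumed to be implemented elsewhere in the module.

```c
/*
 * Sketch of a validator module's entry point. The "libpq/oauth.h"
 * header path is an assumption; the callbacks are defined elsewhere.
 */
#include "postgres.h"
#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static void my_startup(ValidatorModuleState *state);
static void my_shutdown(ValidatorModuleState *state);
static ValidatorModuleResult *my_validate(ValidatorModuleState *state,
                                          const char *token,
                                          const char *role);

/* Must have server lifetime: a static const at global scope. */
static const OAuthValidatorCallbacks validator_callbacks = {
    .startup_cb = my_startup,
    .shutdown_cb = my_shutdown,
    .validate_cb = my_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &validator_callbacks;
}
```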
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <structname>ValidatorModuleResult</structname> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) must be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f7..ae4732df656 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index cfd654d2916..842559ac3ac 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,24 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+  endif
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3034,6 +3052,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3702,6 +3724,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index eac3d001211..5771983af93 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..6155d63a116
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely on the remote chance that a future specification
+	 * could define one, so that future clients can still interoperate with
+	 * this server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that the token is in a valid format before validating it */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip user mapping
+		 * is nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
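To tie the hba.c changes above together, a hypothetical pg_hba.conf entry using the new method might look like the following. The issuer, scope, and validator values are placeholders invented for illustration, not taken from the patch:

```
# TYPE  DATABASE        USER            ADDRESS         METHOD
host    all             all             samenet         oauth issuer="https://oauth.example.org" scope="openid" validator="my_validator"
```

Per the parse_hba_line() checks in the patch, issuer and scope are mandatory; map and delegate_ident_mapping=1 are mutually exclusive.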
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index c9d8cd796a8..19fa78b7f8c 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4822,6 +4823,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 079efa1baa7..378aa8438d6 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
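The sample-file change above exposes the new GUC; a sketch of a matching server setting (the library name is a placeholder) could be:

```
# postgresql.conf -- superuser-only, list-valued, takes effect on reload
oauth_validator_libraries = 'my_validator'
```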
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..9b1ed7996d3 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -663,6 +666,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 6a0def7273c..e9422888e3e 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..ff8f4df9b48
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2541 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, since media type parameters may follow the type itself.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback is only invoked while CURLOPT_VERBOSE is set, so that
+		 * option must be enabled as well.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of a single chunk
+ * of data is defined by CURL_MAX_WRITE_SIZE, which is 16kB by default (and
+ * can only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the nonces we
+ * need to later poll the request status. We'll grab those in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3.2, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5.1, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations that use 403 for error
+	 * returns, which would violate the specification. For now we stick to the
+	 * specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &state->token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (state->token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!state->token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forgets a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..f9133ad57e8
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1026 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state->token);
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the token pointer will be ignored and the initial
+ * response will instead contain a request for the server's required OAuth
+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover, const char *token)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* We must have a token. */
+		if (!token)
+		{
+			/*
+			 * Either programmer error, or something went badly wrong during
+			 * the asynchronous fetch.
+			 *
+			 * TODO: users shouldn't see this; what action should they take if
+			 * they do?
+			 */
+			libpq_append_conn_error(conn,
+									"no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection. conn->oauth_want_retry will be set if the error status is
+ * suitable for a second attempt.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	/* TODO: what if these override what the user already specified? */
+	/* TODO: what if there's no discovery URI? */
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (conn->oauth_discovery_uri)
+			free(conn->oauth_discovery_uri);
+
+		conn->oauth_discovery_uri = ctx.discovery_uri;
+		ctx.discovery_uri = NULL;
+	}
+
+	if (ctx.scope)
+	{
+		if (conn->oauth_scope)
+			free(conn->oauth_scope);
+
+		conn->oauth_scope = ctx.scope;
+		ctx.scope = NULL;
+	}
+	/* TODO: missing error scope should clear any existing connection scope */
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") == 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for,
+		 * but only if we have enough information to do so and we haven't
+		 * already retried this connection once.
+		 */
+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
+			conn->oauth_want_retry = PG_BOOL_YES;
+	}
+	/* TODO: include status in hard failure message */
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
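For reference, the server error payload parsed by this function looks roughly like the following (field names per RFC 7628, Sec. 3.2.2; the values are hypothetical). The semantic callbacks store openid-configuration and scope on the connection, and the retry decision is keyed off status:

```json
{
  "status": "invalid_token",
  "scope": "openid",
  "openid-configuration": "https://example.org/.well-known/openid-configuration"
}
```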
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the state. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		state->token = strdup(request->token);
+		if (!state->token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* TODO: what if no altsock was set? */
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the state. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			state->token = strdup(request.token);
+			if (!state->token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/*
+		 * Hand off to our built-in OAuth flow.
+		 *
+		 * Only allow one try per connection, since we're not performing any
+		 * caching at the moment. (Custom flows might be more sophisticated.)
+		 */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+		conn->oauth_want_retry = PG_BOOL_NO;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * Decide whether we're using a user-provided OAuth flow, or
+				 * the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (state->token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached). In that case, we can just fall
+					 * through.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we have to hand the connection over to our
+					 * OAuth implementation. This involves a number of HTTP
+					 * connections and timed waits, so we escape the
+					 * synchronous auth processing and tell PQconnectPoll to
+					 * transfer control to our async implementation.
+					 */
+					Assert(conn->async_auth);	/* should have been set
+												 * already */
+					state->step = FE_OAUTH_REQUESTING_TOKEN;
+					return SASL_ASYNC;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a discovery URI to be able to request a
+				 * token, we ask the server for one explicitly. This doesn't
+				 * require any asynchronous work.
+				 */
+				discover = true;
+			}
+
+			/* fall through */
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+			/* We should still be in the initial response phase. */
+			Assert(inputlen == -1);
+
+			*output = client_initial_response(conn, discover, state->token);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/* TODO: ensure there is no message content here. */
+				return SASL_COMPLETE;
+			}
+
+			/*
+			 * Error message sent by the server.
+			 */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			/*
+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			state->step = FE_OAUTH_SERVER_ERROR;
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..6a93b3062d7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	char	   *token;
+
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 597956a0d0b..dc7b1a2e725 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -1570,3 +1579,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 196e553bbef..d2d967b86d5 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -27,6 +27,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -366,6 +367,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
 	offsetof(struct pg_conn, load_balance_hosts)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -642,6 +660,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	/* conn->oauth_want_retry = false; TODO */
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -3995,6 +4014,19 @@ keep_going:						/* We will come back to here until there is
 					/* Check to see if we should mention pgpassfile */
 					pgpassfileWarning(conn);
 
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech
+						&& conn->oauth_want_retry == PG_BOOL_YES)
+					{
+						/* Only allow retry once. */
+						conn->oauth_want_retry = PG_BOOL_NO;
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					CONNECTION_FAILED();
 				}
 				else if (beresp == PqMsg_NegotiateProtocolVersion)
@@ -4917,6 +4949,12 @@ freePGconn(PGconn *conn)
 	free(conn->rowBuf);
 	free(conn->target_session_attrs);
 	free(conn->load_balance_hosts);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..5f8d608261e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index c7e92f9d49c..68645076227 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -433,6 +433,16 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 1a5a223e1af..4180e35f8cf 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d4..0c2ccc75a63 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks that
+ *	  always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..861ac586c45
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,221 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..98dd532e133
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,497 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
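As an aside, the $vschars literal above is intended to be exactly the VSCHAR class from RFC 6749 Appendix A, i.e. every printable ASCII character (%x20-7E). A quick Python sketch of that claim, for anyone auditing the test string:

```python
# VSCHAR, per RFC 6749 Appendix A, is %x20-7E: all printable ASCII.
# range() excludes the stop value, so 0x7F yields 0x20..0x7E inclusive.
vschars = "".join(chr(c) for c in range(0x20, 0x7F))

assert len(vschars) == 95          # 0x7E - 0x20 + 1
assert vschars[0] == " "           # first VSCHAR
assert vschars[-1] == "~"          # last VSCHAR
```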
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
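For reviewers: the connstr() helper in 001_server.pl smuggles test parameters to the mock server by JSON-encoding them and base64-encoding the result into oauth_client_id (encode_base64's "" argument suppresses line breaks, matching Python's b64encode). A minimal Python sketch of the round trip; the decode_params() helper is an illustration of what the server side has to do, not the literal oauth_server.py code:

```python
import base64
import json

def encode_params(**params):
    """Pack test parameters the way connstr() does in Perl:
    JSON-encode, then base64 without line breaks."""
    return base64.b64encode(json.dumps(params).encode()).decode()

def decode_params(client_id):
    """Recover the parameter dict from a magic client_id
    (hypothetical server-side counterpart)."""
    return json.loads(base64.b64decode(client_id))

encoded = encode_params(stage="token", retries=2)
assert decode_params(encoded) == {"stage": "token", "retries": 2}
```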
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..312404f3430
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,125 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth::Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
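The port handshake that run() depends on (the daemon binds an ephemeral port, writes the number to stdout, then closes stdout so the parent's slurp terminates) can be sketched in Python as follows; the exact mechanics inside oauth_server.py may differ:

```python
import socket

def announce_ephemeral_port():
    """Bind to port 0 so the OS assigns a free port. The daemon would
    then print the port and close stdout so the parent can slurp the
    whole stream without knowing how many bytes to expect."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))
    sock.listen()
    return sock, sock.getsockname()[1]

sock, port = announce_ephemeral_port()
print(port, flush=True)  # the Perl parent reads this from the pipe...
# ...after which the daemon would call sys.stdout.close()
sock.close()
```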
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..8ec09102027
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..bf94f091def
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 08b89a4cdff..9240d408713 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2514,6 +2514,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2557,7 +2562,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index eb93debe108..f32f9c83369 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -370,6 +370,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1721,6 +1724,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1829,6 +1833,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1836,7 +1841,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1948,6 +1955,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3079,6 +3087,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3472,6 +3482,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1
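
A note for readers following the mock server above: _check_authn() expects the client to form-urlencode the client_id and client_secret, join them with a colon, and Base64-encode the result, per RFC 6749, Section 2.3.1. A standalone sketch of the client-side computation (illustrative only; the function name is not part of the patch):

```python
import base64
import urllib.parse


def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Build the HTTP Basic Authorization value _check_authn() verifies.

    RFC 6749 Section 2.3.1: each component is form-urlencoded before
    being joined with ":" and Base64-encoded.
    """
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(client_secret)
    creds = base64.b64encode(f"{user}:{password}".encode("ascii"))
    return "Basic " + creds.decode("ascii")


print(basic_auth_header("f02c6361-0635", "p@ss word"))
```

This is also why the server's client_id property has to urldecode the username it pulls back out of the header.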

Attachment: v41-0005-squash-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 18507c6978b7c357cb7d3d28743f37eef65643fa Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 16 Dec 2024 13:57:14 -0800
Subject: [PATCH v41 5/7] squash! Add OAUTHBEARER SASL mechanism

Add require_auth=oauth support.
---
 doc/src/sgml/libpq.sgml                       |  9 +++
 src/interfaces/libpq/fe-auth-oauth.c          |  7 +++
 src/interfaces/libpq/fe-auth.c                |  7 ---
 src/interfaces/libpq/fe-connect.c             |  9 ++-
 src/interfaces/libpq/libpq-int.h              |  2 +-
 src/test/authentication/t/001_password.pl     |  8 +--
 .../modules/oauth_validator/t/001_server.pl   | 62 +++++++++++++++++--
 7 files changed, 86 insertions(+), 18 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index aa92baf1fb3..9d132c6c4bd 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index f9133ad57e8..7a94de9c034 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -955,6 +955,13 @@ oauth_exchange(void *opaq, bool final,
 			*outputlen = strlen(*output);
 			state->step = FE_OAUTH_BEARER_SENT;
 
+			/*
+			 * For the purposes of require_auth, our side of authentication is
+			 * done at this point; the server will either accept the
+			 * connection or send an error. Unlike SCRAM, there is no
+			 * additional server data to check upon success.
+			 */
+			conn->client_finished_auth = true;
 			return SASL_CONTINUE;
 
 		case FE_OAUTH_BEARER_SENT:
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index dc7b1a2e725..27868506350 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -568,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index d2d967b86d5..6ba6432d750 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1150,7 +1150,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1160,10 +1160,11 @@ fill_allowed_sasl_mechs(PGconn *conn)
 	 * - handle the new mechanism name in the require_auth portion of
 	 *   pqConnectOptions2(), below.
 	 */
-	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
 					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
 
 	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
 }
 
 /*
@@ -1525,6 +1526,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 68645076227..7f0fcb9ee5a 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -511,7 +511,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 98dd532e133..96040e5ba95 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -134,6 +134,60 @@ $node->connect_fails(
 	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
 );
 
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
 # Make sure the client_id and secret are correctly encoded. $vschars contains
 # every allowed character for a client_id/_secret (the "VSCHAR" class).
 # $vschars_esc is additionally backslash-escaped for inclusion in a
@@ -144,15 +198,15 @@ my $vschars_esc =
   " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
 
 $node->connect_ok(
-	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
 	"escapable characters: client_id",
 	expected_stderr =>
-	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
 $node->connect_ok(
-	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
 	"escapable characters: client_id and secret",
 	expected_stderr =>
-	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
 
 #
 # Further tests rely on support for specific behaviors in oauth_server.py. To
-- 
2.34.1
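
As context for the require_auth=oauth checks above: once OAUTHBEARER is selected, the client's half of the exchange is a single initial response carrying the bearer token (RFC 7628, Section 3.1), which is why client_finished_auth can be set as soon as it is sent. A rough Python sketch of that wire format (illustrative, not the libpq implementation):

```python
def oauthbearer_initial_response(token: str) -> bytes:
    """Format an OAUTHBEARER client initial response (RFC 7628, Section 3.1).

    The gs2 header ("n,,") is followed by \x01-delimited key/value pairs
    and a terminating \x01; only the auth pair is mandatory.
    """
    kvsep = "\x01"
    gs2_header = "n,,"
    return (gs2_header + kvsep + f"auth=Bearer {token}" + kvsep + kvsep).encode("ascii")


print(oauthbearer_initial_response("9243959234"))
```

Unlike SCRAM, there is no server signature to verify afterwards; the server either accepts the connection or returns an error status in its own kvpair message.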

Attachment: v41-0006-XXX-fix-libcurl-link-error.patch (application/octet-stream)
From 8e82059700bb460c5ed94c273824e446a9c161b6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v41 6/7] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 8c518c317e7..97bb38c72c6 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -165,6 +165,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

Attachment: v41-0007-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 5339b3f2617e8ed667d79e32323a0f097d25fc84 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v41 7/7] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  195 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2507 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 ++++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 +++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6287 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 97bb38c72c6..a6fab60bfd8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -318,6 +318,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -402,8 +403,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against the 32-bit libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 842559ac3ac..1c4333214d6 100644
--- a/meson.build
+++ b/meson.build
@@ -3365,6 +3365,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3531,6 +3534,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..236057cd99e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in as a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..9caa3a56d44
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..c61e8f0c760
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2507 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            # Handlers return either (code, body) or (code, headers, body).
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # Disable Nagle's algorithm: the short writes during these tests' TLS
+    # handshakes interact badly with the client's delayed ACKs. (Without
+    # TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    # The fields are attached after the class definition because the async_
+    # and cleanup callback types need a pointer to the struct itself.
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        # Default return value when the test hasn't installed cb.impl: zero
+        # tells libpq to fall back to its builtin handling, while nonzero
+        # means the hook handled the data itself.
+        handle_by_default = 0
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                fail_oauth_handshake(
+                    conn,
+                    {
+                        "status": "invalid_token",
+                        "openid-configuration": discovery_uri,
+                    },
+                )
+
+        # Expect the client to connect again.
+        sock, client = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    if server_discovery:
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+
+                # For discovery, the client should send an empty auth header.
+                # See RFC 7628, Sec. 4.3.
+                auth = get_auth_value(initial)
+                assert auth == b""
+
+                # Always fail the discovery exchange.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": discovery_uri,
+                }
+                pq3.send(
+                    conn,
+                    pq3.types.AuthnRequest,
+                    type=pq3.authn.SASLContinue,
+                    body=json.dumps(resp).encode("utf-8"),
+                )
+
+                # FIXME: the client disconnects at this point; it'd be nicer if
+                # it completed the exchange.
+
+            # The client should not reconnect.
+
+    else:
+        expect_disconnected_handshake(sock)
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake.
+            startup = pq3.recv1(conn, cls=pq3.Startup)
+            assert startup.proto == pq3.protocol(3, 0)
+
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASL,
+                body=[b"OAUTHBEARER", b""],
+            )
+
+            # The client should disconnect at this point.
+            assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in the above endpoints
+            # being called.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the bad-JSON-schema tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # For discovery, the client should send an empty auth header. See
+            # RFC 7628, Sec. 4.3.
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Always fail the first SASL exchange.
+            fail_oauth_handshake(conn, fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq tries to actually attempt
+# a connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    expect_disconnected_handshake(sock)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id="some-id",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an invalid
+            # one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    expect_disconnected_handshake(sock)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            resp = {
+                "status": "invalid_token",
+                "openid-configuration": to_http(openid_provider.discovery_uri),
+            }
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=json.dumps(resp).encode("utf-8"),
+            )
+
+            # FIXME: the client disconnects at this point; it'd be nicer if
+            # it completed the exchange.
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to pytest. We add one that requests the
+    creation of a temporary Postgres instance for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
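PG_TEST_EXTRA here follows the same convention as the TAP tests: a space-separated opt-in list checked by exact token. A standalone sketch of the check the fixture performs (the helper name is mine):

```python
import os


def python_tests_enabled(environ=os.environ) -> bool:
    """True when 'python' appears as a token in the space-separated
    PG_TEST_EXTRA list; substring matches don't count."""
    return "python" in environ.get("PG_TEST_EXTRA", "").split()


assert python_tests_enabled({"PG_TEST_EXTRA": "kerberos python ssl"})
assert not python_tests_enabled({"PG_TEST_EXTRA": "pythonic"})  # exact token only
assert not python_tests_enabled({})
```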
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        fields = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            fields.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            fields.append(v)
+
+        fields.append(b"")
+        return fields
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
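For reference, the Startup struct above models the classic v3 layout: a big-endian int32 length (which includes itself), the protocol code 0x00030000, and NUL-terminated key/value strings with a trailing NUL. A stdlib-only sketch of the same encoding, with no construct dependency (the function name is illustrative):

```python
import struct


def build_startup(params: dict) -> bytes:
    """Encodes a protocol-3.0 startup packet: int32 length, int32 protocol,
    then NUL-terminated key/value strings and a final terminating NUL."""
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00" + v.encode("utf-8") + b"\x00"
    payload += b"\x00"
    proto = (3 << 16) | 0  # protocol(3, 0)
    return struct.pack("!ii", 8 + len(payload), proto) + payload


pkt = build_startup({"user": "alice"})
assert pkt[4:8] == b"\x00\x03\x00\x00"
assert struct.unpack("!i", pkt[:4])[0] == len(pkt)
```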
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Translation map for hexdumps: unprintable or non-ASCII bytes become '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
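The map built above is straightforward to exercise on its own; a self-contained sketch of the same printable-ASCII rule (hexdump_map is an illustrative name, not part of the patch):

```python
def hexdump_map() -> bytes:
    """Translation table mapping unprintable and non-ASCII bytes to '.'."""
    unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(unprintable, b"." * len(unprintable))


table = hexdump_map()
assert b"\x00OK\xff\n".translate(table) == b".OK.."
```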
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request TLS with an SSLRequest packet (magic protocol 1234,5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
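The connect fixture's ExitStack pattern (one factory, many connections, all torn down when the fixture exits) can be illustrated in miniature, with a fake connection standing in for the socket:

```python
import contextlib

class FakeConn:
    """Stand-in for a socket: records whether it was closed."""
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.closed = True

conns = []
with contextlib.ExitStack() as stack:
    def factory():
        # Each call opens a "connection" and registers its cleanup with
        # the shared stack, just like the fixture's conn_factory().
        conn = stack.enter_context(FakeConn())
        conns.append(conn)
        return conn

    factory()
    factory()
    assert not any(c.closed for c in conns)

# Leaving the with-block closes everything the factory created.
assert all(c.closed for c in conns)
```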
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);	/* don't hand out the GUC's own string */
+	}
+
+	return res;
+}
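For reference, the knobs this module exposes are plain GUCs, so a test server can be pointed at it with settings along these lines (the token and identity values here are placeholders; the suite normally sets the oauthtest.* values via ALTER SYSTEM followed by pg_reload_conf()):

```
# shared configuration used by the test harness (see conftest.py)
session_preload_libraries = 'oauthtest'
oauth_validator_libraries = 'oauthtest'

# per-test behavior
oauthtest.expected_bearer = 'abcd1234'     # placeholder token
oauthtest.set_authn_id = on
oauthtest.authn_id = 'me@example.com'      # placeholder identity
```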
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
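prepend_file's backup-and-restore behavior, end to end (the helper is copied here verbatim so the snippet runs standalone):

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def prepend_file(path, lines, *, suffix=".bak"):
    # Copied from the suite: back up the original, write the new lines
    # followed by the old content, and restore the backup on exit.
    bak = path + suffix
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("original\n")

with prepend_file(path, ["prepended\n"]):
    with open(path) as f:
        contents = f.read()  # new lines first, then the old content

with open(path) as f:
    restored = f.read()  # the original content is back

os.remove(path)
```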
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
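The size arithmetic above relies on base64url encoding mapping three raw bytes to four characters; a quick standalone check:

```python
import secrets

# Requesting size // 4 * 3 random bytes yields exactly `size` base64url
# characters, with no padding, since the byte count is a multiple of three.
tok = secrets.token_urlsafe(12)  # 12 bytes -> 16 characters
assert len(tok) == 16
```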
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for oauth_ctx.dbname and checks that the server
+    advertises OAUTHBEARER as its only SASL mechanism. Connects as
+    oauth_ctx.authz_user unless a different user is given.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
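The initial-response bytes built by send_initial_response() follow RFC 7628's framing: a gs2 header, then 0x01-separated key/value pairs, terminated by a double 0x01. A standalone sketch (oauthbearer_initial is an illustrative name):

```python
KVSEP = b"\x01"

def oauthbearer_initial(token):
    # gs2 header "n,,": no channel binding, no authorization identity.
    gs2_header = b"n,,"
    # One key/value pair carrying the HTTP-style Authorization value,
    # followed by the terminating kvsep.
    auth_kvpair = b"auth=Bearer " + token.encode("ascii") + KVSEP
    return gs2_header + KVSEP + auth_kvpair + KVSEP
```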
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a valid SASL initial response; send it something bad
+    # instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1

#187Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#186)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jan 13, 2025 at 3:21 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Next email will discuss the architectural bug that Kashif found.

Okay, here goes. A standard OAuth connection attempt looks like this
(oh, I hope Gmail doesn't mangle it):

   Issuer       User         libpq        Backend
     |                         |             |
     |           x ----------> x ----------> o     [1] Startup Packet
     |           |             |             |
     |           |             x <---------- x     [2] OAUTHBEARER Request
     |           |             |             |
     |           |             x ----------> x     [3] Parameter Discovery
     |           |             |             |
     |           |             x <---------- o     [4] Parameters Stored
     |           |             |
     |           |             |
     |           |             |
     |           |             x ----------> o     [5] New Startup Packet
     |           |             |             |
     |           |             x <---------- x     [6] OAUTHBEARER Request
     |           |             |             |
     x <-------- x <---------> x             |
     x <-------- x <---------> x             |     [7] OAuth Handshake
     x <-------- x <---------> x             |
     |           |             |             |
     o           |             x ----------> x     [8] Send Token
     |                         |             |
     |           x <---------- x <---------- x     [9] Connection Established
     |                         |             |
     |           x <---------> x <---------> x
     |           x <---------> x <---------> x     [10] Use the DB
     .           .             .             .
     .           .             .             .
     .           .             .             .

When the server first asks for a token via OAUTHBEARER (step 2), the
client doesn't necessarily know what the server's requirements are for
a given user. It uses the rest of the doomed OAUTHBEARER exchange to
store the issuer and scope information in the PGconn (step 3-4), then
disconnects and sets need_new_connection in PQconnectPoll() so that a
second connection is immediately opened (step 5). When the OAUTHBEARER
mechanism takes control the second time, it has everything it needs to
conduct the login flow with the issuer (step 7). It then sends the
obtained token to establish a connection (steps 8 onward).
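
To make that discovery round trip concrete, here is a rough sketch of the OAUTHBEARER messages from RFC 7628 that drive steps 2-4. This is illustrative Python with hypothetical helper names, not the exact bytes libpq produces:

```python
import json

KVSEP = b"\x01"  # RFC 7628 key/value separator (^A)

def client_initial_response(token=None):
    """OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).

    Without a token, this models the "doomed" exchange: the server is
    expected to fail it and reply with its required OAuth parameters.
    """
    auth = b"Bearer " + token if token is not None else b""
    return b"n,," + KVSEP + b"auth=" + auth + KVSEP + KVSEP

def parse_server_error(payload):
    """Server failure response (Sec. 3.2.2): a JSON document that can
    carry the discovery URL and scope the client should use."""
    doc = json.loads(payload)
    return doc.get("openid-configuration"), doc.get("scope")

# Example server error payload (illustrative URL):
issuer_url, scope = parse_server_error(
    b'{"status": "invalid_token",'
    b' "openid-configuration":'
    b' "https://oauth.example.org/.well-known/openid-configuration",'
    b' "scope": "openid"}'
)
```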

The problem is that step 7 is consuming the authentication_timeout for
the backend. I'm very good at completing these flows quickly, but if
you can't complete the browser prompts in time, you will simply not be
able to log into the server. Which is harsh to say the least. (Imagine
the pain if the standard psql password prompt timed out.) DBAs can get
around it by increasing the timeout, obviously, but that doesn't feel
very good as a solution.

Last week I looked into a fix where libpq would simply try again with
the stored token if the backend hangs up on it during the handshake,
but I think that will end up making the UX worse. The token validation
on the server side isn't going to be instantaneous, so if the client
is able to complete the token exchange in 59 seconds and send it to
the backend, there's an excellent chance that the connection is still
going to be torn down in a way that's indistinguishable from a crash.
We don't want the two sides to fight for time.

So I think what I'm going to need to do is modify v41-0003 to allow
the mechanism to politely hang up the connection while the flow is in
progress. This further decouples the lifetimes of the mechanism and
the async auth -- the async state now has to live outside of the SASL
exchange -- but I think it's probably more architecturally sound. Yell
at me if that sounds unmaintainable or if there's a more obvious fix
I'm missing.

Huge thanks to Kashif for pointing this out!

--Jacob

#188Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#187)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jan 13, 2025 at 5:00 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

So I think what I'm going to need to do is modify v41-0003 to allow
the mechanism to politely hang up the connection while the flow is in
progress.

This is done in v42. The rough conversation now looks like this:

   Issuer       User         libpq        Backend
     |                         |             |
     |           x ----------> x ----------> o     [1] Startup Packet
     |           |             |             |
     |           |             x <---------- x     [2] OAUTHBEARER Request
     |           |             |             |
     |           |             x ----------> x     [3] Parameter Discovery
     |           |             |             |
     |           |             x <---------- x     [4] Parameters Stored
     |           |             |             |
     |           |             x ----------> x     [5] Finish OAUTHBEARER
     x <-------> x <---------> x             |     [6] OAuth Handshake
     x <-------> x <---------> x <---------- o     [7] Server Hangs Up
     x <-------> x <---------> x
     |           |             |
     |           |             x ----------> o     [8] New Startup Packet
     |           |             |             |
     |           |             x <---------- x     [9] OAUTHBEARER Request
     |           |             |             |
     o           |             x ----------> x     [10] Send Token
     |                         |             |
     |           x <---------- x <---------- x     [11] Connection Established
     |                         |             |
     |           x <---------> x <---------> x
     |           x <---------> x <---------> x     [12] Use the DB
     .           .             .             .
     .           .             .             .
     .           .             .             .

The key change is that the client sends the final OAUTHBEARER response
_before_ beginning the OAuth flow, allowing the server to concurrently
close its side of the discovery connection (steps 5-7). This requires
only a single change to the proposed SASL_ASYNC feature in v41-0003:
when a mechanism returns SASL_ASYNC during exchange(), it may also
specify an output response packet to send before switching over to
async auth. (This is similar to how mechanisms may send a response
packet when returning SASL_FAILED.)
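
As a toy model of that revised contract (illustrative Python, not libpq's actual C symbols), the mechanism's exchange function can now return SASL_ASYNC together with one last response packet, which is flushed before the client switches to asynchronous authentication:

```python
SASL_CONTINUE, SASL_ASYNC, SASL_COMPLETE, SASL_FAILED = range(4)
KVSEP = b"\x01"  # RFC 7628 key/value separator

def oauthbearer_exchange(state, server_input):
    """Hypothetical sketch of the OAUTHBEARER exchange() state machine."""
    if state.get("token"):
        # Second connection: we already hold a token, so present it
        # as the bearer credential (step 10 in the diagram).
        resp = b"n,," + KVSEP + b"auth=Bearer " + state["token"] + KVSEP + KVSEP
        return SASL_CONTINUE, resp
    if server_input is None:
        # First connection: send the discovery initial response.
        return SASL_CONTINUE, b"n,," + KVSEP + b"auth=" + KVSEP + KVSEP
    # The server replied with its OAuth parameters. Store them, emit the
    # dummy final response (a lone kvsep, RFC 7628 Sec. 3.2.3), and hand
    # control to the async OAuth flow while the server hangs up (5-7).
    state["params"] = server_input
    return SASL_ASYNC, KVSEP
```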

That change simplifies the description of the flow a bit, too, and
I've updated the documentation. If an OAuth flow needs to run, two
connections to the server will be made. The only way to skip the
discovery connection now is if a (custom) hook has a token cached.

This further decouples the lifetimes of the mechanism and
the async auth -- the async state now has to live outside of the SASL
exchange --

The only part of the state that I had to move was the token itself,
which now lives in conn->oauth_token. This is cleaned up with a new
pqClear- function, so that it can live across connection attempts and
be proactively cleared from memory after a successful connection.

but I think it's probably more architecturally sound.

As evidence, this change flushed out a few bugs and provided the basis
to fix every TODO in fe-auth-oauth.c, so I'm pretty happy with it:
- an AuthenticationSASLFinal message is not allowed, as OAUTHBEARER
does not specify any additional server data (this bug goes all the way
back to v1)
- a server is not allowed to switch discovery URLs on a client between
connection attempts
- a server is not allowed to override a previously determined oauth_scope
- the connection is retried only if a conn->oauth_token was not
initially set (this simplifies conn->oauth_want_retry considerably)
- require_auth=oauth will complain if a discovery connection lets the client in
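
The second and third checks above boil down to pinning the first values the client sees. A minimal sketch, with a hypothetical helper name (the real logic lives in fe-auth-oauth.c):

```python
def store_server_params(conn, discovery_uri, scope):
    """Pin the first discovery URL and scope seen; reject later changes."""
    if conn.setdefault("discovery_uri", discovery_uri) != discovery_uri:
        raise ConnectionError("server switched discovery URLs between attempts")
    if conn.setdefault("scope", scope) != scope:
        raise ConnectionError("server tried to override a previously set scope")
```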

The existing Perl tests were not affected by this refactoring, other
than a latent test bug that got caught with the fallout. The advisory
Python tests (which pin behavior on the wire) needed more changes.
I've also added some tests to 002_client.pl which have the custom hook
misbehave in various ways and pin the expected error messages.

v41-0005 has probably outlived its usefulness by now, and I've folded
those changes into v42-0004.

Thanks!
--Jacob

Attachments:

since-v41.diff.txt (text/plain; charset=US-ASCII)
1:  386e7c4df31 = 1:  1f38ec8039b Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h
2:  b829f7a8ac7 = 2:  9f87ffea1c7 require_auth: prepare for multiple SASL mechanisms
3:  f88f98df97d ! 3:  bda684d19cc libpq: handle asynchronous actions during SASL
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_continue(PGconn *conn, int payloadlen, b
     +		 * need to do is signal the caller.
     +		 */
     +		*async = true;
    -+		return STATUS_OK;
    ++
    ++		/*
    ++		 * The mechanism may optionally generate some output to send before
    ++		 * switching over to async auth, so continue onwards.
    ++		 */
     +	}
     +
      	if (final && status == SASL_CONTINUE)
    @@ src/interfaces/libpq/fe-auth.c: pg_fe_sendauth(AuthRequest areq, int payloadlen,
      		case AUTH_REQ_SASL_CONT:
      		case AUTH_REQ_SASL_FIN:
     -			if (conn->sasl_state == NULL)
    --			{
    + 			{
     -				appendPQExpBufferStr(&conn->errorMessage,
     -									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
     -				return STATUS_ERROR;
    @@ src/interfaces/libpq/fe-auth.c: pg_fe_sendauth(AuthRequest areq, int payloadlen,
     -			oldmsglen = conn->errorMessage.len;
     -			if (pg_SASL_continue(conn, payloadlen,
     -								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
    - 			{
    +-			{
     -				/* Use this message if pg_SASL_continue didn't supply one */
     -				if (conn->errorMessage.len == oldmsglen)
     +				bool		final = false;
4:  d96712cda1d ! 4:  3dc6dd3433c Add OAUTHBEARER SASL mechanism
    @@ Commit message
     
         Several TODOs:
         - perform several sanity checks on the OAuth issuer's responses
    -    - handle cases where the client has been set up with an issuer and
    -      scope, but the Postgres server wants to use something different
         - improve error debuggability during the OAuth handshake
         - fix libcurl initialization thread-safety
         - harden the libcurl flow implementation
    -    - figure out pgsocket/int difference on Windows
         - fill in documentation stubs
         - support protocol "variants" implemented by major providers
         - implement more helpful handling of HBA misconfigurations
    @@ doc/src/sgml/installation.sgml: ninja install
            <listitem>
     
      ## doc/src/sgml/libpq.sgml ##
    +@@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
    +           </listitem>
    +          </varlistentry>
    + 
    ++         <varlistentry>
    ++          <term><literal>oauth</literal></term>
    ++          <listitem>
    ++           <para>
    ++            The server must request an OAuth bearer token from the client.
    ++           </para>
    ++          </listitem>
    ++         </varlistentry>
    ++
    +          <varlistentry>
    +           <term><literal>none</literal></term>
    +           <listitem>
     @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
             </para>
            </listitem>
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +        attacks" on OAuth clients.
     +       </para>
     +       <para>
    -+        This standard handshake requires two separate network connections to the
    -+        server per authentication attempt. To skip asking the server for a
    -+        discovery document URL, you may set <literal>oauth_issuer</literal> to a
    -+        <literal>/.well-known/</literal> URI used for OAuth discovery. (In this
    -+        case, it is recommended that
    ++        You may also explicitly set <literal>oauth_issuer</literal> to the
    ++        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
    ++        case, if the server asks for a different URL, the connection will fail,
    ++        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
    ++        may be able to speed up the standard handshake by using previously
    ++        cached tokens. (In this case, it is recommended that
     +        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
     +        client will not have a chance to ask the server for a correct scope
     +        setting, and the default scopes for a token may not be sufficient to
    @@ doc/src/sgml/libpq.sgml: postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
     +        An OAuth 2.0 client identifier, as issued by the authorization server.
     +        If the <productname>PostgreSQL</productname> server
     +        <link linkend="auth-oauth">requests an OAuth token</link> for the
    -+        connection (and if no <link linkend="libpq-oauth">custom OAuth
    -+        hook</link> is installed to provide one), then this parameter must be
    -+        set; otherwise, the connection will fail.
    ++        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
    ++        OAuth hook</link> is installed to provide one), then this parameter must
    ++        be set; otherwise, the connection will fail.
     +       </para>
     +      </listitem>
     +     </varlistentry>
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				break;
     +
     +			case OAUTH_STEP_TOKEN_REQUEST:
    -+				if (!handle_token_response(actx, &state->token))
    ++				if (!handle_token_response(actx, &conn->oauth_token))
     +					goto error_return;
     +
     +				if (!actx->user_prompted)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +					actx->user_prompted = true;
     +				}
     +
    -+				if (state->token)
    ++				if (conn->oauth_token)
     +					break;		/* done! */
     +
     +				/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 * point, actx->running will be set. But there are some corner cases
     +		 * where we can immediately loop back around; see start_request().
     +		 */
    -+	} while (!state->token && !actx->running);
    ++	} while (!conn->oauth_token && !actx->running);
     +
     +	/* If we've stored a token, we're done. Otherwise come back later. */
    -+	return state->token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
    ++	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
     +
     +error_return:
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	/* Any async authentication state should have been cleaned up already. */
     +	Assert(!state->async_ctx);
     +
    -+	free(state->token);
     +	free(state);
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +/*
     + * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
     + *
    -+ * If discover is true, the token pointer will be ignored and the initial
    -+ * response will instead contain a request for the server's required OAuth
    -+ * parameters (Sec. 4.3). Otherwise, a bearer token must be provided.
    ++ * If discover is true, the initial response will contain a request for the
    ++ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->token must
    ++ * be set; it will be sent as the connection's bearer token.
     + *
     + * Returns the response as a null-terminated string, or NULL on error.
     + */
     +static char *
    -+client_initial_response(PGconn *conn, bool discover, const char *token)
    ++client_initial_response(PGconn *conn, bool discover)
     +{
     +	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
     +
     +	PQExpBufferData buf;
     +	const char *authn_scheme;
     +	char	   *response = NULL;
    ++	const char *token = conn->oauth_token;
     +
     +	if (discover)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		 */
     +		authn_scheme = "Bearer ";
     +
    -+		/* We must have a token. */
     ++		/* conn->oauth_token must have been set in this case. */
     +		if (!token)
     +		{
    -+			/*
    -+			 * Either programmer error, or something went badly wrong during
    -+			 * the asynchronous fetch.
    -+			 *
    -+			 * TODO: users shouldn't see this; what action should they take if
    -+			 * they do?
    -+			 */
    ++			Assert(false);
     +			libpq_append_conn_error(conn,
    -+									"no OAuth token was set for the connection");
    ++									"internal error: no OAuth token was set for the connection");
     +			return NULL;
     +		}
     +	}
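As an aside for reviewers: the initial-response layout assembled above is fixed by RFC 7628, Sec. 3.1 — a GS2 header followed by \x01-delimited key/value pairs. A minimal Python sketch of the same layout (`initial_response` and `KVSEP` are illustrative names, not part of the patch):

```python
# Sketch of the OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
# KVSEP is the \x01 delimiter between key/value pairs; initial_response()
# is an illustrative helper, not part of the patch.
KVSEP = "\x01"

def initial_response(token=None):
    # With a token, send "auth=Bearer <token>". Without one, send an empty
    # auth value so the server answers with its discovery error (Sec. 4.3).
    auth = f"Bearer {token}" if token else ""
    return f"n,,{KVSEP}auth={auth}{KVSEP}{KVSEP}"
```

The two kvsep characters at the end terminate the key/value list, matching the resp_format string in the patch.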
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +/*
     + * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
     + * stores any discovered openid_configuration and scope settings for the
    -+ * connection. conn->oauth_want_retry will be set if the error status is
    -+ * suitable for a second attempt.
    ++ * connection.
     + */
     +static bool
     +handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	if (errmsg)
     +		goto cleanup;
     +
    -+	/* TODO: what if these override what the user already specified? */
    -+	/* TODO: what if there's no discovery URI? */
     +	if (ctx.discovery_uri)
     +	{
     +		char	   *discovery_issuer;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +		free(discovery_issuer);
     +
    -+		if (conn->oauth_discovery_uri)
    -+			free(conn->oauth_discovery_uri);
    -+
    -+		conn->oauth_discovery_uri = ctx.discovery_uri;
    -+		ctx.discovery_uri = NULL;
    ++		if (!conn->oauth_discovery_uri)
    ++		{
    ++			conn->oauth_discovery_uri = ctx.discovery_uri;
    ++			ctx.discovery_uri = NULL;
    ++		}
    ++		else
    ++		{
    ++			/* This must match the URI we'd previously determined. */
    ++			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
    ++			{
    ++				libpq_append_conn_error(conn,
    ++										"server's discovery document has moved to %s (previous location was %s)",
    ++										ctx.discovery_uri,
    ++										conn->oauth_discovery_uri);
    ++				goto cleanup;
    ++			}
    ++		}
     +	}
     +
     +	if (ctx.scope)
     +	{
    -+		if (conn->oauth_scope)
    -+			free(conn->oauth_scope);
    -+
    -+		conn->oauth_scope = ctx.scope;
    -+		ctx.scope = NULL;
    ++		/* Servers may not override a previously set oauth_scope. */
    ++		if (!conn->oauth_scope)
    ++		{
    ++			conn->oauth_scope = ctx.scope;
    ++			ctx.scope = NULL;
    ++		}
     +	}
    -+	/* TODO: missing error scope should clear any existing connection scope */
     +
     +	if (!ctx.status)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		goto cleanup;
     +	}
     +
    -+	if (strcmp(ctx.status, "invalid_token") == 0)
    ++	if (strcmp(ctx.status, "invalid_token") != 0)
     +	{
     +		/*
    -+		 * invalid_token is the only error code we'll automatically retry for,
    -+		 * but only if we have enough information to do so and we haven't
    -+		 * already retried this connection once.
    ++		 * invalid_token is the only error code we'll automatically retry for;
    ++		 * otherwise, just bail out now.
     +		 */
    -+		if (conn->oauth_discovery_uri && conn->oauth_want_retry == PG_BOOL_UNKNOWN)
    -+			conn->oauth_want_retry = PG_BOOL_YES;
    ++		libpq_append_conn_error(conn,
    ++								"server rejected OAuth bearer token: %s",
    ++								ctx.status);
    ++		goto cleanup;
     +	}
    -+	/* TODO: include status in hard failure message */
     +
     +	success = true;
     +
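For context, the error result parsed here is the JSON body defined by RFC 7628, Sec. 3.2.2. A rough Python equivalent of just the fields this code consumes ("status" is required; "openid-configuration" and "scope" feed discovery) — the helper name is illustrative, not part of the patch:

```python
import json

# Rough sketch of the RFC 7628 (Sec. 3.2.2) error body consumed by
# handle_oauth_sasl_error(): "status" is required, while
# "openid-configuration" and "scope" drive client-side discovery.
def parse_sasl_error(msg):
    ctx = json.loads(msg)
    return ctx.get("status"), ctx.get("openid-configuration"), ctx.get("scope")

status, uri, scope = parse_sasl_error(
    '{"status": "invalid_token",'
    ' "openid-configuration": "https://example.org/.well-known/openid-configuration",'
    ' "scope": "openid postgres"}'
)
```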
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	else if (status == PGRES_POLLING_OK)
     +	{
     +		/*
    -+		 * We already have a token, so copy it into the state. (We can't hold
    ++		 * We already have a token, so copy it into the conn. (We can't hold
     +		 * onto the original string, since it may not be safe for us to free()
     +		 * it.)
     +		 */
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			return PGRES_POLLING_FAILED;
     +		}
     +
    -+		state->token = strdup(request->token);
    -+		if (!state->token)
    ++		conn->oauth_token = strdup(request->token);
    ++		if (!conn->oauth_token)
     +		{
     +			libpq_append_conn_error(conn, "out of memory");
     +			return PGRES_POLLING_FAILED;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		return PGRES_POLLING_OK;
     +	}
     +
    -+	/* TODO: what if no altsock was set? */
    ++	/* The hook wants the client to poll the altsock. Make sure it set one. */
    ++	if (conn->altsock == PGINVALID_SOCKET)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"user-defined OAuth flow did not provide a socket for polling");
    ++		return PGRES_POLLING_FAILED;
    ++	}
    ++
     +	return status;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +		if (request.token)
     +		{
     +			/*
    -+			 * We already have a token, so copy it into the state. (We can't
    ++			 * We already have a token, so copy it into the conn. (We can't
     +			 * hold onto the original string, since it may not be safe for us
     +			 * to free() it.)
     +			 */
    -+			state->token = strdup(request.token);
    -+			if (!state->token)
    ++			conn->oauth_token = strdup(request.token);
    ++			if (!conn->oauth_token)
     +			{
     +				libpq_append_conn_error(conn, "out of memory");
     +				goto fail;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +	else
     +	{
     +#if USE_LIBCURL
    -+		/*
    -+		 * Hand off to our built-in OAuth flow.
    -+		 *
    -+		 * Only allow one try per connection, since we're not performing any
    -+		 * caching at the moment. (Custom flows might be more sophisticated.)
    -+		 */
    ++		/* Hand off to our built-in OAuth flow. */
     +		conn->async_auth = pg_fe_run_oauth_flow;
     +		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
    -+		conn->oauth_want_retry = PG_BOOL_NO;
     +
     +#else
     +		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			if (!setup_oauth_parameters(conn))
     +				return SASL_FAILED;
     +
    -+			if (conn->oauth_discovery_uri)
    ++			if (conn->oauth_token)
     +			{
     +				/*
    -+				 * Decide whether we're using a user-provided OAuth flow, or
    -+				 * the one we have built in.
    ++				 * A previous connection already fetched the token; we'll use
    ++				 * it below.
    ++				 */
    ++			}
    ++			else if (conn->oauth_discovery_uri)
    ++			{
    ++				/*
    ++				 * We don't have a token, but we have a discovery URI already
    ++				 * stored. Decide whether we're using a user-provided OAuth
    ++				 * flow or the one we have built in.
     +				 */
     +				if (!setup_token_request(conn, state))
     +					return SASL_FAILED;
     +
    -+				if (state->token)
    ++				if (conn->oauth_token)
     +				{
     +					/*
     +					 * A really smart user implementation may have already
     +					 * given us the token (e.g. if there was an unexpired copy
    -+					 * already cached). In that case, we can just fall
    -+					 * through.
    ++					 * already cached), and we can use it immediately.
     +					 */
     +				}
     +				else
     +				{
     +					/*
    -+					 * Otherwise, we have to hand the connection over to our
    -+					 * OAuth implementation. This involves a number of HTTP
    -+					 * connections and timed waits, so we escape the
    -+					 * synchronous auth processing and tell PQconnectPoll to
    -+					 * transfer control to our async implementation.
    ++					 * Otherwise, we'll have to hand the connection over to
    ++					 * our OAuth implementation.
    ++					 *
    ++					 * This could take a while, since it generally involves a
    ++					 * user in the loop. To avoid consuming the server's
    ++					 * authentication timeout, we'll continue this handshake
    ++					 * to the end, so that the server can close its side of
    ++					 * the connection. We'll open a second connection later
    ++					 * once we've retrieved a token.
     +					 */
    -+					Assert(conn->async_auth);	/* should have been set
    -+												 * already */
    -+					state->step = FE_OAUTH_REQUESTING_TOKEN;
    -+					return SASL_ASYNC;
    ++					discover = true;
     +				}
     +			}
     +			else
     +			{
     +				/*
    -+				 * If we don't have a discovery URI to be able to request a
    -+				 * token, we ask the server for one explicitly. This doesn't
    -+				 * require any asynchronous work.
    ++				 * If we don't have a token, and we don't have a discovery URI
    ++				 * to be able to request a token, we ask the server for one
    ++				 * explicitly.
     +				 */
     +				discover = true;
     +			}
     +
    -+			/* fall through */
    -+
    -+		case FE_OAUTH_REQUESTING_TOKEN:
    -+			/* We should still be in the initial response phase. */
    -+			Assert(inputlen == -1);
    -+
    -+			*output = client_initial_response(conn, discover, state->token);
    ++			/*
    ++			 * Generate an initial response. This either contains a token, if
    ++			 * we have one, or an empty discovery response which is doomed to
    ++			 * fail.
    ++			 */
    ++			*output = client_initial_response(conn, discover);
     +			if (!*output)
     +				return SASL_FAILED;
     +
     +			*outputlen = strlen(*output);
     +			state->step = FE_OAUTH_BEARER_SENT;
     +
    ++			if (conn->oauth_token)
    ++			{
    ++				/*
    ++				 * For the purposes of require_auth, our side of
    ++				 * authentication is done at this point; the server will
    ++				 * either accept the connection or send an error. Unlike
    ++				 * SCRAM, there is no additional server data to check upon
    ++				 * success.
    ++				 */
    ++				conn->client_finished_auth = true;
    ++			}
    ++
     +			return SASL_CONTINUE;
     +
     +		case FE_OAUTH_BEARER_SENT:
     +			if (final)
     +			{
    -+				/* TODO: ensure there is no message content here. */
    -+				return SASL_COMPLETE;
    -+			}
    -+
    -+			/*
    -+			 * Error message sent by the server.
    -+			 */
    -+			if (!handle_oauth_sasl_error(conn, input, inputlen))
    ++				/*
    ++				 * OAUTHBEARER does not make use of additional data with a
    ++				 * successful SASL exchange, so we shouldn't get an
    ++				 * AuthenticationSASLFinal message.
    ++				 */
    ++				libpq_append_conn_error(conn,
    ++										"server sent unexpected additional OAuth data");
     +				return SASL_FAILED;
    ++			}
     +
     +			/*
    -+			 * Respond with the required dummy message (RFC 7628, sec. 3.2.3).
    ++			 * An error message was sent by the server. Respond with the
    ++			 * required dummy message (RFC 7628, sec. 3.2.3).
     +			 */
     +			*output = strdup(kvsep);
     +			if (unlikely(!*output))
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			}
     +			*outputlen = strlen(*output);	/* == 1 */
     +
    -+			state->step = FE_OAUTH_SERVER_ERROR;
    -+			return SASL_CONTINUE;
    ++			/* Grab the settings from discovery. */
    ++			if (!handle_oauth_sasl_error(conn, input, inputlen))
    ++				return SASL_FAILED;
    ++
    ++			if (conn->oauth_token)
    ++			{
    ++				/*
    ++				 * The server rejected our token. Continue onwards towards the
    ++				 * expected FATAL message, but mark our state to catch any
    ++				 * unexpected "success" from the server.
    ++				 */
    ++				state->step = FE_OAUTH_SERVER_ERROR;
    ++				return SASL_CONTINUE;
    ++			}
    ++
    ++			if (!conn->async_auth)
    ++			{
    ++				/*
    ++				 * No OAuth flow is set up yet. Did we get enough information
    ++				 * from the server to create one?
    ++				 */
    ++				if (!conn->oauth_discovery_uri)
    ++				{
    ++					libpq_append_conn_error(conn,
    ++											"server requires OAuth authentication, but no discovery metadata was provided");
    ++					return SASL_FAILED;
    ++				}
    ++
    ++				/* Yes. Set up the flow now. */
    ++				if (!setup_token_request(conn, state))
    ++					return SASL_FAILED;
    ++
    ++				if (conn->oauth_token)
    ++				{
    ++					/*
    ++					 * A token was available in a custom flow's cache. Skip
    ++					 * the asynchronous processing.
    ++					 */
    ++					goto reconnect;
    ++				}
    ++			}
    ++
    ++			/*
    ++			 * Time to retrieve a token. This involves a number of HTTP
    ++			 * connections and timed waits, so we escape the synchronous auth
    ++			 * processing and tell PQconnectPoll to transfer control to our
    ++			 * async implementation.
    ++			 */
    ++			Assert(conn->async_auth);	/* should have been set already */
    ++			state->step = FE_OAUTH_REQUESTING_TOKEN;
    ++			return SASL_ASYNC;
    ++
    ++		case FE_OAUTH_REQUESTING_TOKEN:
    ++
    ++			/*
    ++			 * We've returned successfully from token retrieval. Double-check
    ++			 * that we have what we need for the next connection.
    ++			 */
    ++			if (!conn->oauth_token)
    ++			{
    ++				Assert(false);	/* should have failed before this point! */
    ++				libpq_append_conn_error(conn,
    ++										"internal error: OAuth flow did not set a token");
    ++				return SASL_FAILED;
    ++			}
    ++
    ++			goto reconnect;
     +
     +		case FE_OAUTH_SERVER_ERROR:
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +			break;
     +	}
     +
    ++	Assert(false);				/* should never get here */
    ++	return SASL_FAILED;
    ++
    ++reconnect:
    ++
    ++	/*
    ++	 * Despite being a failure from the point of view of SASL, we have enough
    ++	 * information to restart with a new connection.
    ++	 */
    ++	libpq_append_conn_error(conn, "retrying connection with new bearer token");
    ++	conn->oauth_want_retry = true;
     +	return SASL_FAILED;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +}
     +
     +/*
    ++ * Fully clears out any stored OAuth token. This is done proactively upon
    ++ * successful connection as well as during pqClosePGconn().
    ++ */
    ++void
    ++pqClearOAuthToken(PGconn *conn)
    ++{
    ++	if (!conn->oauth_token)
    ++		return;
    ++
    ++	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
    ++	free(conn->oauth_token);
    ++	conn->oauth_token = NULL;
    ++}
    ++
    ++/*
     + * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
     + */
     +bool
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +enum fe_oauth_step
     +{
     +	FE_OAUTH_INIT,
    -+	FE_OAUTH_REQUESTING_TOKEN,
     +	FE_OAUTH_BEARER_SENT,
    ++	FE_OAUTH_REQUESTING_TOKEN,
     +	FE_OAUTH_SERVER_ERROR,
     +};
     +
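To summarize the control flow that fe_oauth_step drives through oauth_exchange(), here is a paraphrased (not literal) sketch of the transitions; the real code additionally handles discovery setup, require_auth, and custom flows:

```python
# Paraphrased sketch of the fe_oauth_step transitions in oauth_exchange().
# State names mirror the enum; "DONE" and "RECONNECT" stand in for the
# success and retry-with-new-connection outcomes.
def next_step(step, have_token, server_errored):
    if step == "FE_OAUTH_INIT":
        return "FE_OAUTH_BEARER_SENT"   # send a token or a discovery response
    if step == "FE_OAUTH_BEARER_SENT":
        if not server_errored:
            return "DONE"               # server accepted the bearer token
        # On an RFC 7628 error: if we already sent a token it was rejected
        # (expect a FATAL next); otherwise go fetch one asynchronously.
        return ("FE_OAUTH_SERVER_ERROR" if have_token
                else "FE_OAUTH_REQUESTING_TOKEN")
    if step == "FE_OAUTH_REQUESTING_TOKEN":
        return "RECONNECT"              # retry the connection with the new token
    raise ValueError(step)
```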
    @@ src/interfaces/libpq/fe-auth-oauth.h (new)
     +	enum fe_oauth_step step;
     +
     +	PGconn	   *conn;
    -+	char	   *token;
    -+
     +	void	   *async_ctx;
     +} fe_oauth_state;
     +
     +extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
     +extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
    ++extern void pqClearOAuthToken(PGconn *conn);
     +extern bool oauth_unsafe_debugging_enabled(void);
     +
     +/* Mechanisms in fe-auth-oauth.c */
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen, bool
      	}
      
      	if (!selected_mechanism)
    +@@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
    + 
    + 		if (!allowed)
    + 		{
    +-			/*
    +-			 * TODO: this is dead code until a second SASL mechanism is added;
    +-			 * the connection can't have proceeded past check_expected_areq()
    +-			 * if no SASL methods are allowed.
    +-			 */
    +-			Assert(false);
    +-
    + 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
    + 									conn->require_auth, selected_mechanism);
    + 			goto error;
     @@ src/interfaces/libpq/fe-auth.c: PQchangePassword(PGconn *conn, const char *user, const char *passwd)
      		}
      	}
    @@ src/interfaces/libpq/fe-connect.c
      #include "libpq-int.h"
      #include "mb/pg_wchar.h"
     @@ src/interfaces/libpq/fe-connect.c: static const internalPQconninfoOption PQconninfoOptions[] = {
    - 		"Load-Balance-Hosts", "", 8,	/* sizeof("disable") = 8 */
    - 	offsetof(struct pg_conn, load_balance_hosts)},
    + 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
    + 	offsetof(struct pg_conn, scram_server_key)},
      
     +	/* OAuth v2 */
     +	{"oauth_issuer", NULL, NULL, NULL,
    @@ src/interfaces/libpq/fe-connect.c: pqDropServerData(PGconn *conn)
      	conn->write_failed = false;
      	free(conn->write_err_msg);
      	conn->write_err_msg = NULL;
    -+	/* conn->oauth_want_retry = false; TODO */
    ++	conn->oauth_want_retry = false;
      
      	/*
      	 * Cancel connections need to retain their be_pid and be_key across
    +@@ src/interfaces/libpq/fe-connect.c: static inline void
    + fill_allowed_sasl_mechs(PGconn *conn)
    + {
    + 	/*---
    +-	 * We only support one mechanism at the moment, so rather than deal with a
    ++	 * We only support two mechanisms at the moment, so rather than deal with a
    + 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
    + 	 * rely on the compile-time assertion here to keep us honest.
    + 	 *
    +@@ src/interfaces/libpq/fe-connect.c: fill_allowed_sasl_mechs(PGconn *conn)
    + 	 * - handle the new mechanism name in the require_auth portion of
    + 	 *   pqConnectOptions2(), below.
    + 	 */
    +-	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
    ++	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
    + 					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
    + 
    + 	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
    ++	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
    + }
    + 
    + /*
    +@@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
    + 			{
    + 				mech = &pg_scram_mech;
    + 			}
    ++			else if (strcmp(method, "oauth") == 0)
    ++			{
    ++				mech = &pg_oauth_mech;
    ++			}
    + 
    + 			/*
    + 			 * Final group: meta-options.
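Since oauth now participates in require_auth, the semantics being extended here may be worth spelling out. A simplified, illustrative model of how a require_auth list is evaluated (not libpq's actual parser): "!method" entries forbid a method, plain entries allow one, and the two forms cannot be mixed.

```python
# Simplified, illustrative model of require_auth evaluation (not libpq's
# actual parser). Special entries like "none" are ignored here for brevity.
def method_permitted(require_auth, offered):
    entries = require_auth.split(",")
    forbidden = {e[1:] for e in entries if e.startswith("!")}
    allowed = {e for e in entries if not e.startswith("!")}
    if forbidden and allowed:
        raise ValueError('cannot mix "!" and plain entries in require_auth')
    if forbidden:
        return offered not in forbidden
    return offered in allowed
```

Under this model, require_auth=oauth,scram-sha-256 permits an OAUTHBEARER server, while require_auth=!oauth rejects it — matching the test cases added to 001_server.pl.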
     @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
    - 					/* Check to see if we should mention pgpassfile */
    - 					pgpassfileWarning(conn);
    + 				conn->inStart = conn->inCursor;
      
    + 				if (res != STATUS_OK)
    ++				{
     +					/*
     +					 * OAuth connections may perform two-step discovery, where
     +					 * the first connection is a dummy.
     +					 */
    -+					if (conn->sasl == &pg_oauth_mech
    -+						&& conn->oauth_want_retry == PG_BOOL_YES)
    ++					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
     +					{
    -+						/* Only allow retry once. */
    -+						conn->oauth_want_retry = PG_BOOL_NO;
     +						need_new_connection = true;
     +						goto keep_going;
     +					}
     +
    - 					CONNECTION_FAILED();
    + 					goto error_return;
    ++				}
    + 
    + 				/*
    + 				 * Just make sure that any data sent by pg_fe_sendauth is
    +@@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here until there is
    + 					}
      				}
    - 				else if (beresp == PqMsg_NegotiateProtocolVersion)
    + 
    ++				/* Don't hold onto any OAuth tokens longer than necessary. */
    ++				pqClearOAuthToken(conn);
    ++
    + 				/*
    + 				 * For non cancel requests we can release the address list
    + 				 * now. For cancel requests we never actually resolve
     @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
    - 	free(conn->rowBuf);
    - 	free(conn->target_session_attrs);
      	free(conn->load_balance_hosts);
    + 	free(conn->scram_client_key);
    + 	free(conn->scram_server_key);
     +	free(conn->oauth_issuer);
     +	free(conn->oauth_issuer_id);
     +	free(conn->oauth_discovery_uri);
    @@ src/interfaces/libpq/fe-connect.c: freePGconn(PGconn *conn)
      	termPQExpBuffer(&conn->errorMessage);
      	termPQExpBuffer(&conn->workBuffer);
      
    +@@ src/interfaces/libpq/fe-connect.c: pqClosePGconn(PGconn *conn)
    + 	conn->asyncStatus = PGASYNC_IDLE;
    + 	conn->xactStatus = PQTRANS_IDLE;
    + 	conn->pipelineStatus = PQ_PIPELINE_OFF;
    ++	pqClearOAuthToken(conn);
    + 	pqClearAsyncResult(conn);	/* deallocate result */
    + 	pqClearConnErrorState(conn);
    + 
     
      ## src/interfaces/libpq/libpq-fe.h ##
     @@ src/interfaces/libpq/libpq-fe.h: extern "C"
    @@ src/interfaces/libpq/libpq-int.h: struct pg_conn
     +	char	   *oauth_client_id;	/* client identifier */
     +	char	   *oauth_client_secret;	/* client secret */
     +	char	   *oauth_scope;	/* access token scope */
    -+	PGTernaryBool oauth_want_retry; /* should we retry on failure? */
    ++	char	   *oauth_token;	/* access token */
    ++	bool		oauth_want_retry;	/* should we retry on failure? */
     +
      	/* Optional file to write trace info to */
      	FILE	   *Pfdebug;
      	int			traceFlags;
    +@@ src/interfaces/libpq/libpq-int.h: struct pg_conn
    + 								 * the server? */
    + 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
    + 										 * codes */
    +-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
    ++	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
    + 													 * mechanisms */
    + 	bool		client_finished_auth;	/* have we finished our half of the
    + 										 * authentication exchange? */
     
      ## src/interfaces/libpq/meson.build ##
     @@
    @@ src/makefiles/meson.build: pgxs_deps = {
        'libxslt': libxslt,
        'llvm': llvm,
     
    + ## src/test/authentication/t/001_password.pl ##
    +@@ src/test/authentication/t/001_password.pl: $node->connect_fails(
    + $node->connect_fails(
    + 	"user=scram_role require_auth=!scram-sha-256",
    + 	"SCRAM authentication forbidden, fails with SCRAM auth",
    +-	expected_stderr => qr/server requested SASL authentication/);
    ++	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
    + $node->connect_fails(
    + 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
    + 	"multiple authentication types forbidden, fails with SCRAM auth",
    +-	expected_stderr => qr/server requested SASL authentication/);
    ++	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
    + 
    + # Test that bad passwords are rejected.
    + $ENV{"PGPASSWORD"} = 'badpass';
    +@@ src/test/authentication/t/001_password.pl: $node->connect_fails(
    + 	"user=scram_role require_auth=!scram-sha-256",
    + 	"password authentication forbidden, fails with SCRAM auth",
    + 	expected_stderr =>
    +-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
    ++	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
    + );
    + $node->connect_fails(
    + 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
    + 	"multiple authentication types forbidden, fails with SCRAM auth",
    + 	expected_stderr =>
    +-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
    ++	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
    + );
    + 
    + # Test SYSTEM_USER <> NULL with parallel workers.
    +
      ## src/test/modules/Makefile ##
     @@ src/test/modules/Makefile: SUBDIRS = \
      		  dummy_index_am \
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +static PostgresPollingStatusType async_cb(PGconn *conn,
     +										  PGoauthBearerRequest *req,
     +										  pgsocket *altsock);
    ++static PostgresPollingStatusType misbehave_cb(PGconn *conn,
    ++											  PGoauthBearerRequest *req,
    ++											  pgsocket *altsock);
     +
     +static void
     +usage(char *argv[])
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +	printf(" -h, --help				show this message\n");
     +	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
     +	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
    ++	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
    ++		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
     +	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
     +	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
     +	printf(" --token TOKEN			use the provided TOKEN value\n");
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +static bool hang_forever = false;
     +static const char *expected_uri = NULL;
     +static const char *expected_scope = NULL;
    ++static const char *misbehave_mode = NULL;
     +static char *token = NULL;
     +
     +int
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +		{"no-hook", no_argument, NULL, 1002},
     +		{"token", required_argument, NULL, 1003},
     +		{"hang-forever", no_argument, NULL, 1004},
    ++		{"misbehave", required_argument, NULL, 1005},
     +		{0}
     +	};
     +
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +				hang_forever = true;
     +				break;
     +
    ++			case 1005:			/* --misbehave */
    ++				misbehave_mode = optarg;
    ++				break;
    ++
     +			default:
     +				usage(argv);
     +				return 1;
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +		return 1;
     +	}
     +
    ++	if (misbehave_mode)
    ++	{
    ++		if (strcmp(misbehave_mode, "no-hook") != 0)
    ++			req->async = misbehave_cb;
    ++		return 1;
    ++	}
    ++
     +	if (expected_uri)
     +	{
     +		if (!req->openid_configuration)
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +
     +	req->token = token;
     +	return PGRES_POLLING_OK;
    ++}
    ++
    ++static PostgresPollingStatusType
    ++misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
    ++{
    ++	if (strcmp(misbehave_mode, "fail-async") == 0)
    ++	{
    ++		/* Just fail "normally". */
    ++		return PGRES_POLLING_FAILED;
    ++	}
    ++	else if (strcmp(misbehave_mode, "no-token") == 0)
    ++	{
    ++		/* Callbacks must assign req->token before returning OK. */
    ++		return PGRES_POLLING_OK;
    ++	}
    ++	else if (strcmp(misbehave_mode, "no-socket") == 0)
    ++	{
    ++		/* Callbacks must assign *altsock before asking for polling. */
    ++		return PGRES_POLLING_READING;
    ++	}
    ++	else
    ++	{
    ++		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
    ++		exit(1);
    ++	}
     +}
     
      ## src/test/modules/oauth_validator/t/001_server.pl (new) ##
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
     +);
     +
    ++# Test require_auth settings against OAUTHBEARER.
    ++my @cases = (
    ++	{ require_auth => "oauth" },
    ++	{ require_auth => "oauth,scram-sha-256" },
    ++	{ require_auth => "password,oauth" },
    ++	{ require_auth => "none,oauth" },
    ++	{ require_auth => "!scram-sha-256" },
    ++	{ require_auth => "!none" },
    ++
    ++	{
    ++		require_auth => "!oauth",
    ++		failure => qr/server requested OAUTHBEARER authentication/
    ++	},
    ++	{
    ++		require_auth => "scram-sha-256",
    ++		failure => qr/server requested OAUTHBEARER authentication/
    ++	},
    ++	{
    ++		require_auth => "!password,!oauth",
    ++		failure => qr/server requested OAUTHBEARER authentication/
    ++	},
    ++	{
    ++		require_auth => "none",
    ++		failure => qr/server requested SASL authentication/
    ++	},
    ++	{
    ++		require_auth => "!oauth,!scram-sha-256",
    ++		failure => qr/server requested SASL authentication/
    ++	});
    ++
    ++$user = "test";
    ++foreach my $c (@cases)
    ++{
    ++	my $connstr =
    ++	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
    ++
    ++	if (defined $c->{'failure'})
    ++	{
    ++		$node->connect_fails(
    ++			$connstr,
    ++			"require_auth=$c->{'require_auth'} fails",
    ++			expected_stderr => $c->{'failure'});
    ++	}
    ++	else
    ++	{
    ++		$node->connect_ok(
    ++			$connstr,
    ++			"require_auth=$c->{'require_auth'} succeeds",
    ++			expected_stderr =>
    ++			  qr@Visit https://example\.com/ and enter the code: postgresuser@
    ++		);
    ++	}
    ++}
    ++
     +# Make sure the client_id and secret are correctly encoded. $vschars contains
     +# every allowed character for a client_id/_secret (the "VSCHAR" class).
     +# $vschars_esc is additionally backslash-escaped for inclusion in a
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
     +
     +$node->connect_ok(
    -+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc'",
    ++	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
     +	"escapable characters: client_id",
     +	expected_stderr =>
    -+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +$node->connect_ok(
    -+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
    ++	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
     +	"escapable characters: client_id and secret",
     +	expected_stderr =>
    -+	  qr@Visit https://example\.org/ and enter the code: postgresuser@);
    ++	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
     +
     +#
     +# Further tests rely on support for specific behaviors in oauth_server.py. To
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +$node->append_conf(
     +	'pg_hba.conf', qq{
     +local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
    -+local all testalt oauth validator=fail_validator issuer="$issuer/alternate" scope="openid postgres alt"
    ++local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
     +});
     +$node->restart;
     +
    @@ src/test/modules/oauth_validator/t/002_client.pl (new)
     +	flags => ["--hang-forever"],
     +	expected_stderr => qr/failed: timeout expired/);
     +
    ++# Test various misbehaviors of the client hook.
    ++my @cases = (
    ++	{
    ++		flag => "--misbehave=no-hook",
    ++		expected_error =>
    ++		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
    ++	},
    ++	{
    ++		flag => "--misbehave=fail-async",
    ++		expected_error => qr/user-defined OAuth flow failed/,
    ++	},
    ++	{
    ++		flag => "--misbehave=no-token",
    ++		expected_error => qr/user-defined OAuth flow did not provide a token/,
    ++	},
    ++	{
    ++		flag => "--misbehave=no-socket",
    ++		expected_error =>
    ++		  qr/user-defined OAuth flow did not provide a socket for polling/,
    ++	});
    ++
    ++foreach my $c (@cases)
    ++{
    ++	test(
    ++		"hook misbehavior: $c->{'flag'}",
    ++		flags => [ $c->{'flag'} ],
    ++		expected_stderr => $c->{'expected_error'});
    ++}
    ++
     +done_testing();
     
      ## src/test/modules/oauth_validator/t/OAuth/Server.pm (new) ##
5:  18507c6978b < -:  ----------- squash! Add OAUTHBEARER SASL mechanism
6:  8e82059700b = 5:  97e0a2aae26 XXX fix libcurl link error
7:  5339b3f2617 ! 6:  db0167009b9 DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/conftest.py (new)
     +            client.start()
     +
     +        sock, _ = server_socket.accept()
    ++        sock.settimeout(BLOCKING_TIMEOUT)
     +        return sock, client
     +
     +    yield factory
    @@ src/test/python/client/test_oauth.py (new)
     +    )
     +
     +
    ++def handle_discovery_connection(sock, discovery=None, *, response=None):
    ++    """
    ++    Helper for all tests that expect an initial discovery connection from
    ++    the client. The provided discovery URI is embedded in a standard error
    ++    response from the server (or pass response to supply a custom response
    ++    dictionary), and the SASL exchange is then failed. The client is
    ++    expected to complete the entire handshake before disconnecting.
    ++    """
    ++    if response is None:
    ++        response = {"status": "invalid_token"}
    ++        if discovery is not None:
    ++            response["openid-configuration"] = discovery
    ++
    ++    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++        # Initiate a handshake.
    ++        initial = start_oauth_handshake(conn)
    ++
    ++        # For discovery, the client should send an empty auth header. See RFC
    ++        # 7628, Sec. 4.3.
    ++        auth = get_auth_value(initial)
    ++        assert auth == b""
    ++
    ++        # The discovery handshake is doomed to fail.
    ++        fail_oauth_handshake(conn, response)
    ++
    ++
     +class RawResponse(str):
     +    """
     +    Returned by registered endpoint callbacks to take full control of the
    @@ src/test/python/client/test_oauth.py (new)
     +        libpq.PQsetAuthDataHook(None)
     +
     +
    -+@pytest.mark.parametrize("success", [True, False])
    ++@pytest.mark.parametrize(
    ++    "success, abnormal_failure",
    ++    [
    ++        pytest.param(True, False, id="success"),
    ++        pytest.param(False, False, id="normal failure"),
    ++        pytest.param(False, True, id="abnormal failure"),
    ++    ],
    ++)
     +@pytest.mark.parametrize("secret", [None, "", "hunter2"])
     +@pytest.mark.parametrize("scope", [None, "", "openid email"])
     +@pytest.mark.parametrize("retries", [0, 1])
    @@ src/test/python/client/test_oauth.py (new)
     +    secret,
     +    auth_data_cb,
     +    success,
    ++    abnormal_failure,
     +):
     +    client_id = secrets.token_hex()
     +    openid_provider.content_type = content_type
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    ++    # First connection is a discovery request, which should result in the above
    ++    # endpoints being called.
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
    ++
    ++    # Client should reconnect.
    ++    sock, _ = accept()
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            # Initiate a handshake, which should result in the above endpoints
    -+            # being called.
     +            initial = start_oauth_handshake(conn)
     +
     +            # Validate and accept the token.
    @@ src/test/python/client/test_oauth.py (new)
     +            assert auth == f"Bearer {access_token}".encode("ascii")
     +
     +            if success:
    -+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
     +                finish_handshake(conn)
     +
    ++            elif abnormal_failure:
    ++                # Send an empty error response, which should result in a
    ++                # mechanism-level failure in the client. This test ensures that
    ++                # the client doesn't try a third connection for this case.
    ++                expected_error = "server sent error response without a status"
    ++                fail_oauth_handshake(conn, {})
    ++
     +            else:
     +                # Simulate token validation failure.
     +                resp = {
    @@ src/test/python/client/test_oauth.py (new)
     +                    "openid-configuration": openid_provider.discovery_uri,
     +                }
     +                expected_error = "test token validation failure"
    -+
     +                fail_oauth_handshake(conn, resp, errmsg=expected_error)
     +
     +    if retries:
    @@ src/test/python/client/test_oauth.py (new)
     +
     +    sock, client = accept(**kwargs)
     +
    -+    if server_discovery:
    -+        with sock:
    -+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+                initial = start_oauth_handshake(conn)
    -+
    -+                # For discovery, the client should send an empty auth header.
    -+                # See RFC 7628, Sec. 4.3.
    -+                auth = get_auth_value(initial)
    -+                assert auth == b""
    -+
    -+                # Always fail the discovery exchange.
    -+                fail_oauth_handshake(
    -+                    conn,
    -+                    {
    -+                        "status": "invalid_token",
    -+                        "openid-configuration": discovery_uri,
    -+                    },
    -+                )
    -+
    -+        # Expect the client to connect again.
    -+        sock, client = accept()
    ++    with sock:
    ++        handle_discovery_connection(sock, discovery_uri)
     +
    ++    # Expect the client to connect again.
    ++    sock, _ = accept()
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            initial = start_oauth_handshake(conn)
    @@ src/test/python/client/test_oauth.py (new)
     +            auth = get_auth_value(initial)
     +            assert auth == f"Bearer {access_token}".encode("ascii")
     +
    -+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
     +            finish_handshake(conn)
     +
     +
    @@ src/test/python/client/test_oauth.py (new)
     +            "{issuer}",
     +            "/.well-known/oauth-authorization-server/",
     +            None,
    -+            id="extra empty segment",
    ++            id="extra empty segment (no path)",
    ++        ),
    ++        pytest.param(
    ++            "{issuer}/path",
    ++            "/.well-known/oauth-authorization-server/path/",
    ++            None,
    ++            id="extra empty segment (with path)",
     +        ),
     +        pytest.param(
     +            "{issuer}",
    @@ src/test/python/client/test_oauth.py (new)
     +        kwargs.update(oauth_issuer=discovery_uri)
     +
     +    sock, client = accept(**kwargs)
    -+
    -+    if server_discovery:
    -+        with sock:
    -+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+                initial = start_oauth_handshake(conn)
    -+
    -+                # For discovery, the client should send an empty auth header.
    -+                # See RFC 7628, Sec. 4.3.
    -+                auth = get_auth_value(initial)
    -+                assert auth == b""
    -+
    -+                # Always fail the discovery exchange.
    -+                resp = {
    -+                    "status": "invalid_token",
    -+                    "openid-configuration": discovery_uri,
    -+                }
    -+                pq3.send(
    -+                    conn,
    -+                    pq3.types.AuthnRequest,
    -+                    type=pq3.authn.SASLContinue,
    -+                    body=json.dumps(resp).encode("utf-8"),
    -+                )
    -+
    -+                # FIXME: the client disconnects at this point; it'd be nicer if
    -+                # it completed the exchange.
    -+
    -+            # The client should not reconnect.
    -+
    -+    else:
    -+        expect_disconnected_handshake(sock)
    ++    with sock:
    ++        if expected_error and not server_discovery:
    ++            # If the client already knows the URL, it should disconnect as soon
    ++            # as it realizes it's not valid.
    ++            expect_disconnected_handshake(sock)
    ++        else:
    ++            # Otherwise, it should complete the connection.
    ++            handle_discovery_connection(sock, discovery_uri)
    ++
    ++    # The client should not reconnect.
     +
     +    if expected_error is None:
     +        if server_discovery:
    @@ src/test/python/client/test_oauth.py (new)
     +    the client to have an oauth_issuer set so that it doesn't try to go through
     +    discovery.
     +    """
    -+    with sock:
    -+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            # Initiate a handshake.
    -+            startup = pq3.recv1(conn, cls=pq3.Startup)
    -+            assert startup.proto == pq3.protocol(3, 0)
    ++    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++        # Initiate a handshake.
    ++        startup = pq3.recv1(conn, cls=pq3.Startup)
    ++        assert startup.proto == pq3.protocol(3, 0)
     +
    -+            pq3.send(
    -+                conn,
    -+                pq3.types.AuthnRequest,
    -+                type=pq3.authn.SASL,
    -+                body=[b"OAUTHBEARER", b""],
    -+            )
    ++        pq3.send(
    ++            conn,
    ++            pq3.types.AuthnRequest,
    ++            type=pq3.authn.SASL,
    ++            body=[b"OAUTHBEARER", b""],
    ++        )
     +
    -+            # The client should disconnect at this point.
    -+            assert not conn.read(1), "client sent unexpected data"
    ++        # The client should disconnect at this point.
    ++        assert not conn.read(1), "client sent unexpected data"
     +
     +
     +@pytest.mark.parametrize(
    @@ src/test/python/client/test_oauth.py (new)
     +        del params[k]
     +
     +    sock, client = accept(**params)
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        expect_disconnected_handshake(sock)
     +
     +    expected_error = "oauth_issuer and oauth_client_id are not both set"
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    ++    # First connection is a discovery request, which should result in the above
    ++    # endpoints being called.
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
    ++
    ++    # Second connection sends the token.
    ++    sock, _ = accept()
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            # Initiate a handshake, which should result in the above endpoints
    -+            # being called.
     +            initial = start_oauth_handshake(conn)
     +
     +            # Validate and accept the token.
     +            auth = get_auth_value(initial)
     +            assert auth == f"Bearer {access_token}".encode("ascii")
     +
    -+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
     +            finish_handshake(conn)
     +
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    attempts = 0
     +    last_retry = None
     +    retry_lock = threading.Lock()
    ++    token_sent = threading.Event()
     +
     +    def token_endpoint(headers, params):
     +        now = time.monotonic()
    @@ src/test/python/client/test_oauth.py (new)
     +
     +                return 400, {"error": error_code}
     +
    -+        # Successfully finish the request by sending the access bearer token.
    ++        # Successfully finish the request by sending the access bearer token,
    ++        # and signal the main thread to continue.
     +        resp = {
     +            "access_token": access_token,
     +            "token_type": "bearer",
     +        }
    ++        token_sent.set()
     +
     +        return 200, resp
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    ++    # First connection is a discovery request, which should result in the above
    ++    # endpoints being called.
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
    ++
    ++    # At this point the client is talking to the authorization server. Wait for
    ++    # that to succeed so we don't run into the accept() timeout.
    ++    token_sent.wait()
    ++
    ++    # Client should reconnect and send the token.
    ++    sock, _ = accept()
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            # Initiate a handshake, which should result in the above endpoints
    -+            # being called.
     +            initial = start_oauth_handshake(conn)
     +
     +            # Validate and accept the token.
     +            auth = get_auth_value(initial)
     +            assert auth == f"Bearer {access_token}".encode("ascii")
     +
    -+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
     +            finish_handshake(conn)
     +
     +
    @@ src/test/python/client/test_oauth.py (new)
     +    auth_data_cb.impl = bearer_hook
     +
     +    # Now drive the server side.
    ++    if retries >= 0:
    ++        # First connection is a discovery request, which should result in the
    ++        # hook being invoked.
    ++        with sock:
    ++            handle_discovery_connection(sock, discovery_uri)
    ++
    ++        # Client should reconnect to send the token.
    ++        sock, _ = accept()
    ++
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            # Initiate a handshake, which should result in our custom callback
    @@ src/test/python/client/test_oauth.py (new)
     +            auth = get_auth_value(initial)
     +            assert auth == f"Bearer {access_token}".encode("ascii")
     +
    -+            pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
     +            finish_handshake(conn)
     +
     +    # Check the data provided to the hook.
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
     +
     +    # Now make sure the client correctly failed.
     +    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
     +
     +    # Now make sure the client correctly failed.
     +    if bad_value is Missing:
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
     +
     +    # Now make sure the client correctly failed.
     +    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
     +
     +    # Now make sure the client correctly failed.
     +    error_pattern = "failed to parse access token response: "
    @@ src/test/python/client/test_oauth.py (new)
     +        fail_resp["scope"] = scope
     +
     +    with sock:
    -+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            initial = start_oauth_handshake(conn)
    -+
    -+            # For discovery, the client should send an empty auth header. See
    -+            # RFC 7628, Sec. 4.3.
    -+            auth = get_auth_value(initial)
    -+            assert auth == b""
    -+
    -+            # Always fail the first SASL exchange.
    -+            fail_oauth_handshake(conn, fail_resp)
    ++        handle_discovery_connection(sock, response=fail_resp)
     +
     +    # The client will connect to us a second time, using the parameters we sent
     +    # it.
    @@ src/test/python/client/test_oauth.py (new)
     +            assert auth == f"Bearer {access_token}".encode("ascii")
     +
     +            if success:
    -+                pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal)
     +                finish_handshake(conn)
     +
     +            else:
    @@ src/test/python/client/test_oauth.py (new)
     +        failing_discovery_handler,
     +    )
     +
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
     +
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
    @@ src/test/python/client/test_oauth.py (new)
     +            dict(
     +                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
     +            ),
    -+            "expected error message",
    ++            "server rejected OAuth bearer token: invalid_request",
     +            id="standard server error: invalid_request",
     +        ),
     +        pytest.param(
    @@ src/test/python/client/test_oauth.py (new)
     +            id="standard server error: invalid_token without discovery URI",
     +        ),
     +        pytest.param(
    -+            {"status": "invalid_request"},
    ++            {"status": "invalid_token", "openid-configuration": ""},
     +            pq3.types.AuthnRequest,
     +            dict(type=pq3.authn.SASLContinue, body=b""),
     +            "server sent additional OAuth data",
     +            id="broken server: additional challenge after error",
     +        ),
     +        pytest.param(
    -+            {"status": "invalid_request"},
    ++            {"status": "invalid_token", "openid-configuration": ""},
     +            pq3.types.AuthnRequest,
     +            dict(type=pq3.authn.SASLFinal),
     +            "server sent additional OAuth data",
     +            id="broken server: SASL success after error",
     +        ),
     +        pytest.param(
    -+            {"status": "invalid_request"},
    ++            {"status": "invalid_token", "openid-configuration": ""},
     +            pq3.types.AuthnRequest,
     +            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
     +            "duplicate SASL authentication request",
    @@ src/test/python/client/test_oauth.py (new)
     +        ),
     +    ],
     +)
    -+def test_oauth_server_error(accept, sasl_err, resp_type, resp_payload, expected_error):
    ++def test_oauth_server_error(
    ++    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
    ++):
    ++    wkuri = f"https://256.256.256.256/.well-known/openid-configuration"
     +    sock, client = accept(
    -+        oauth_issuer="https://example.com",
    ++        oauth_issuer=wkuri,
     +        oauth_client_id="some-id",
     +    )
     +
    ++    def bearer_hook(typ, pgconn, request):
    ++        """
    ++        Implementation of the PQAuthDataHook, which returns a token directly so
    ++        we don't need an openid_provider instance.
    ++        """
    ++        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
    ++        request.token = secrets.token_urlsafe().encode()
    ++        return 1
    ++
    ++    auth_data_cb.impl = bearer_hook
    ++
     +    with sock:
     +        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
     +            start_oauth_handshake(conn)
     +
     +            # Ignore the client data. Return an error "challenge".
    ++            if "openid-configuration" in sasl_err:
    ++                sasl_err["openid-configuration"] = wkuri
    ++
     +            resp = json.dumps(sasl_err)
     +            resp = resp.encode("utf-8")
     +
    @@ src/test/python/client/test_oauth.py (new)
     +            assert pkt.type == pq3.types.PasswordMessage
     +            assert pkt.payload == b"\x01"
     +
    -+            # Now fail the SASL exchange (in either a valid way, or an invalid
    -+            # one, depending on the test).
    ++            # Now fail the SASL exchange (in either a valid way, or an
    ++            # invalid one, depending on the test).
     +            pq3.send(conn, resp_type, **resp_payload)
     +
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    @@ src/test/python/client/test_oauth.py (new)
     +        "token_endpoint", "POST", "/token", token_endpoint
     +    )
     +
    -+    expect_disconnected_handshake(sock)
    ++    with sock:
    ++        handle_discovery_connection(sock, openid_provider.discovery_uri)
     +
     +    expected_error = "slow_down interval overflow"
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    @@ src/test/python/client/test_oauth.py (new)
     +    # No provider callbacks necessary; we should fail immediately.
     +
     +    with sock:
    -+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    -+            initial = start_oauth_handshake(conn)
    -+
    -+            resp = {
    -+                "status": "invalid_token",
    -+                "openid-configuration": to_http(openid_provider.discovery_uri),
    -+            }
    -+            pq3.send(
    -+                conn,
    -+                pq3.types.AuthnRequest,
    -+                type=pq3.authn.SASLContinue,
    -+                body=json.dumps(resp).encode("utf-8"),
    -+            )
    -+
    -+            # FIXME: the client disconnects at this point; it'd be nicer if
    -+            # it completed the exchange.
    ++        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
     +
     +    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
     +    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    ++        client.check_completed()
    ++
    ++
    ++@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
    ++def test_discovery_incorrectly_permits_connection(accept, auth_type):
    ++    """
    ++    Incorrectly responds to a client's discovery request with AuthenticationOK
    ++    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
    ++    the mechanism itself should catch the latter.
    ++    """
    ++    issuer = "https://256.256.256.256"
    ++    sock, client = accept(
    ++        oauth_issuer=issuer,
    ++        oauth_client_id=secrets.token_hex(),
    ++        require_auth="oauth",
    ++    )
    ++
    ++    with sock:
    ++        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++            initial = start_oauth_handshake(conn)
    ++
    ++            auth = get_auth_value(initial)
    ++            assert auth == b""
    ++
    ++            # Incorrectly log the client in. It should immediately disconnect.
    ++            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
    ++            assert not conn.read(1), "client sent unexpected data"
    ++
    ++    if auth_type == pq3.authn.OK:
    ++        expected_error = "server did not complete authentication"
    ++    else:
    ++        expected_error = "server sent unexpected additional OAuth data"
    ++
    ++    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    ++        client.check_completed()
    ++
    ++
    ++def test_no_discovery_url_provided(accept):
    ++    """
    ++    Tests what happens when the client doesn't know who to contact and the
    ++    server doesn't tell it.
    ++    """
    ++    issuer = "https://256.256.256.256"
    ++    sock, client = accept(
    ++        oauth_issuer=issuer,
    ++        oauth_client_id=secrets.token_hex(),
    ++    )
    ++
    ++    with sock:
    ++        handle_discovery_connection(sock, discovery=None)
    ++
    ++    expected_error = "no discovery metadata was provided"
    ++    with pytest.raises(psycopg2.OperationalError, match=expected_error):
    ++        client.check_completed()
    ++
    ++
    ++@pytest.mark.parametrize("change_between_connections", [False, True])
    ++def test_discovery_url_changes(accept, openid_provider, change_between_connections):
    ++    """
    ++    Ensures that the client complains if the server agrees on the issuer, but
    ++    disagrees on the discovery URL to be used.
    ++    """
    ++
    ++    # Set up our provider callbacks.
    ++    # NOTE that these callbacks will be called on a background thread. Don't do
    ++    # any unprotected state mutation here.
    ++
    ++    def authorization_endpoint(headers, params):
    ++        resp = {
    ++            "device_code": "DEV",
    ++            "user_code": "USER",
    ++            "interval": 0,
    ++            "verification_uri": "https://example.org",
    ++            "expires_in": 5,
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
    ++    )
    ++
    ++    def token_endpoint(headers, params):
    ++        resp = {
    ++            "access_token": secrets.token_urlsafe(),
    ++            "token_type": "bearer",
    ++        }
    ++
    ++        return 200, resp
    ++
    ++    openid_provider.register_endpoint(
    ++        "token_endpoint", "POST", "/token", token_endpoint
    ++    )
    ++
    ++    # Have the client connect.
    ++    sock, client = accept(
    ++        oauth_issuer=openid_provider.discovery_uri,
    ++        oauth_client_id="some-id",
    ++    )
    ++
    ++    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
    ++
    ++    if not change_between_connections:
    ++        # Immediately respond with the wrong URL.
    ++        with sock:
    ++            handle_discovery_connection(sock, other_wkuri)
    ++
    ++    else:
    ++        # First connection; use the right URL to begin with.
    ++        with sock:
    ++            handle_discovery_connection(sock, openid_provider.discovery_uri)
    ++
    ++        # Second connection. Reject the token and switch the URL.
    ++        sock, _ = accept()
    ++        with sock:
    ++            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
    ++                initial = start_oauth_handshake(conn)
    ++                get_auth_value(initial)
    ++
    ++                # Ignore the token; fail with a different discovery URL.
    ++                resp = {
    ++                    "status": "invalid_token",
    ++                    "openid-configuration": other_wkuri,
    ++                }
    ++                fail_oauth_handshake(conn, resp)
    ++
    ++    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
    ++    with pytest.raises(psycopg2.OperationalError, match=expected_error):
     +        client.check_completed()
     
      ## src/test/python/conftest.py (new) ##
Attachment: v42-0001-Move-PG_MAX_AUTH_TOKEN_LENGTH-to-libpq-auth.h.patch (application/octet-stream)
From 1f38ec8039b99c93c8918a8fbc8abf1838b3dfdb Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v42 1/6] Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h

OAUTHBEARER would like to use this as a limit on Bearer token messages
coming from the client, so promote it to the header file.
---
 src/backend/libpq/auth.c | 16 ----------------
 src/include/libpq/auth.h | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 46facc275ef..d6ef32cc823 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 9157dbe6092..902c5f6de32 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
-- 
2.34.1

v42-0002-require_auth-prepare-for-multiple-SASL-mechanism.patchapplication/octet-stream; name=v42-0002-require_auth-prepare-for-multiple-SASL-mechanism.patchDownload
From 9f87ffea1c7ee7b81be698f6e183cf5b5be806a0 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 16 Dec 2024 13:57:14 -0800
Subject: [PATCH v42 2/6] require_auth: prepare for multiple SASL mechanisms

Prior to this patch, the require_auth implementation assumed that the
AuthenticationSASL protocol message was synonymous with SCRAM-SHA-256.
In preparation for the OAUTHBEARER SASL mechanism, split the
implementation into two tiers: the first checks the acceptable
AUTH_REQ_* codes, and the second checks acceptable mechanisms if
AUTH_REQ_SASL et al are permitted.

conn->allowed_sasl_mechs is the list of pointers to acceptable
mechanisms. (Since we'll support only a small number of mechanisms, this
is an array of static length to minimize bookkeeping.) pg_SASL_init()
will bail if the selected mechanism isn't contained in this array.

Since there's only one mechanism supported right now, one branch of the
second tier cannot be exercised yet (it's marked with Assert(false)).
This assertion will need to be removed when the next mechanism is added.
---
 src/interfaces/libpq/fe-auth.c            |  29 ++++
 src/interfaces/libpq/fe-connect.c         | 178 +++++++++++++++++++---
 src/interfaces/libpq/libpq-int.h          |   2 +
 src/test/authentication/t/001_password.pl |  10 ++
 4 files changed, 202 insertions(+), 17 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 7e478489b71..70753d8ec29 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -543,6 +543,35 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
+	/* Make sure require_auth is satisfied. */
+	if (conn->require_auth)
+	{
+		bool		allowed = false;
+
+		for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		{
+			if (conn->sasl == conn->allowed_sasl_mechs[i])
+			{
+				allowed = true;
+				break;
+			}
+		}
+
+		if (!allowed)
+		{
+			/*
+			 * TODO: this is dead code until a second SASL mechanism is added;
+			 * the connection can't have proceeded past check_expected_areq()
+			 * if no SASL methods are allowed.
+			 */
+			Assert(false);
+
+			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
+									conn->require_auth, selected_mechanism);
+			goto error;
+		}
+	}
+
 	if (conn->channel_binding[0] == 'r' &&	/* require */
 		strcmp(selected_mechanism, SCRAM_SHA_256_PLUS_NAME) != 0)
 	{
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 7878e2e33af..ccbcbb7acda 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1117,6 +1117,56 @@ libpq_prng_init(PGconn *conn)
 	pg_prng_seed(&conn->prng_state, rseed);
 }
 
+/*
+ * Fills the connection's allowed_sasl_mechs list with all supported SASL
+ * mechanisms.
+ */
+static inline void
+fill_allowed_sasl_mechs(PGconn *conn)
+{
+	/*---
+	 * We only support one mechanism at the moment, so rather than deal with a
+	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
+	 * rely on the compile-time assertion here to keep us honest.
+	 *
+	 * To add a new mechanism to require_auth,
+	 * - update the length of conn->allowed_sasl_mechs,
+	 * - add the new pg_fe_sasl_mech pointer to this function, and
+	 * - handle the new mechanism name in the require_auth portion of
+	 *   pqConnectOptions2(), below.
+	 */
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
+
+	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+}
+
+/*
+ * Clears the connection's allowed_sasl_mechs list.
+ */
+static inline void
+clear_allowed_sasl_mechs(PGconn *conn)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		conn->allowed_sasl_mechs[i] = NULL;
+}
+
+/*
+ * Helper routine that searches the static allowed_sasl_mechs list for a
+ * specific mechanism.
+ */
+static inline int
+index_of_allowed_sasl_mech(PGconn *conn, const pg_fe_sasl_mech *mech)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+	{
+		if (conn->allowed_sasl_mechs[i] == mech)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  *		pqConnectOptions2
  *
@@ -1358,17 +1408,19 @@ pqConnectOptions2(PGconn *conn)
 		bool		negated = false;
 
 		/*
-		 * By default, start from an empty set of allowed options and add to
-		 * it.
+		 * By default, start from an empty set of allowed methods and
+		 * mechanisms, and add to it.
 		 */
 		conn->auth_required = true;
 		conn->allowed_auth_methods = 0;
+		clear_allowed_sasl_mechs(conn);
 
 		for (first = true, more = true; more; first = false)
 		{
 			char	   *method,
 					   *part;
-			uint32		bits;
+			uint32		bits = 0;
+			const pg_fe_sasl_mech *mech = NULL;
 
 			part = parse_comma_separated_list(&s, &more);
 			if (part == NULL)
@@ -1384,11 +1436,12 @@ pqConnectOptions2(PGconn *conn)
 				if (first)
 				{
 					/*
-					 * Switch to a permissive set of allowed options, and
-					 * subtract from it.
+					 * Switch to a permissive set of allowed methods and
+					 * mechanisms, and subtract from it.
 					 */
 					conn->auth_required = false;
 					conn->allowed_auth_methods = -1;
+					fill_allowed_sasl_mechs(conn);
 				}
 				else if (!negated)
 				{
@@ -1413,6 +1466,10 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
+			/*
+			 * First group: methods that can be handled solely with the
+			 * authentication request codes.
+			 */
 			if (strcmp(method, "password") == 0)
 			{
 				bits = (1 << AUTH_REQ_PASSWORD);
@@ -1431,13 +1488,22 @@ pqConnectOptions2(PGconn *conn)
 				bits = (1 << AUTH_REQ_SSPI);
 				bits |= (1 << AUTH_REQ_GSS_CONT);
 			}
+
+			/*
+			 * Next group: SASL mechanisms. All of these use the same request
+			 * codes, so the list of allowed mechanisms is tracked separately.
+			 *
+			 * fill_allowed_sasl_mechs() must be updated when adding a new
+			 * mechanism here!
+			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
-				/* This currently assumes that SCRAM is the only SASL method. */
-				bits = (1 << AUTH_REQ_SASL);
-				bits |= (1 << AUTH_REQ_SASL_CONT);
-				bits |= (1 << AUTH_REQ_SASL_FIN);
+				mech = &pg_scram_mech;
 			}
+
+			/*
+			 * Final group: meta-options.
+			 */
 			else if (strcmp(method, "none") == 0)
 			{
 				/*
@@ -1473,20 +1539,68 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
-			/* Update the bitmask. */
-			if (negated)
+			if (mech)
 			{
-				if ((conn->allowed_auth_methods & bits) == 0)
-					goto duplicate;
+				/*
+				 * Update the mechanism set only. The method bitmask will be
+				 * updated for SASL further down.
+				 */
+				Assert(!bits);
+
+				if (negated)
+				{
+					/* Remove the existing mechanism from the list. */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i < 0)
+						goto duplicate;
 
-				conn->allowed_auth_methods &= ~bits;
+					conn->allowed_sasl_mechs[i] = NULL;
+				}
+				else
+				{
+					/*
+					 * Find a space to put the new mechanism (after making
+					 * sure it's not already there).
+					 */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i >= 0)
+						goto duplicate;
+
+					i = index_of_allowed_sasl_mech(conn, NULL);
+					if (i < 0)
+					{
+						/* Should not happen; the pointer list is corrupted. */
+						Assert(false);
+
+						conn->status = CONNECTION_BAD;
+						libpq_append_conn_error(conn,
+												"internal error: no space in allowed_sasl_mechs");
+						free(part);
+						return false;
+					}
+
+					conn->allowed_sasl_mechs[i] = mech;
+				}
 			}
 			else
 			{
-				if ((conn->allowed_auth_methods & bits) == bits)
-					goto duplicate;
+				/* Update the method bitmask. */
+				Assert(bits);
+
+				if (negated)
+				{
+					if ((conn->allowed_auth_methods & bits) == 0)
+						goto duplicate;
+
+					conn->allowed_auth_methods &= ~bits;
+				}
+				else
+				{
+					if ((conn->allowed_auth_methods & bits) == bits)
+						goto duplicate;
 
-				conn->allowed_auth_methods |= bits;
+					conn->allowed_auth_methods |= bits;
+				}
 			}
 
 			free(part);
@@ -1505,6 +1619,36 @@ pqConnectOptions2(PGconn *conn)
 			free(part);
 			return false;
 		}
+
+		/*
+		 * Finally, allow SASL authentication requests if (and only if) we've
+		 * allowed any mechanisms.
+		 */
+		{
+			bool		allowed = false;
+			const uint32 sasl_bits =
+				(1 << AUTH_REQ_SASL)
+				| (1 << AUTH_REQ_SASL_CONT)
+				| (1 << AUTH_REQ_SASL_FIN);
+
+			for (i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+			{
+				if (conn->allowed_sasl_mechs[i])
+				{
+					allowed = true;
+					break;
+				}
+			}
+
+			/*
+			 * For the standard case, add the SASL bits to the (default-empty)
+			 * set if needed. For the negated case, remove them.
+			 */
+			if (!negated && allowed)
+				conn->allowed_auth_methods |= sasl_bits;
+			else if (negated && !allowed)
+				conn->allowed_auth_methods &= ~sasl_bits;
+		}
 	}
 
 	/*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4be5fd7ae4f..e0d5b5fe0be 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -505,6 +505,8 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
+	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 	char		current_auth_response;	/* used by pqTraceOutputMessage to
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 773238b76fd..1357f806b6f 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -277,6 +277,16 @@ $node->connect_fails(
 	"require_auth methods cannot be duplicated, !none case",
 	expected_stderr =>
 	  qr/require_auth method "!none" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=scram-sha-256,scram-sha-256",
+	"require_auth methods cannot be duplicated, scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "scram-sha-256" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=!scram-sha-256,!scram-sha-256",
+	"require_auth methods cannot be duplicated, !scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "!scram-sha-256" is specified more than once/);
 
 # Unknown value defined in require_auth.
 $node->connect_fails(
-- 
2.34.1

v42-0003-libpq-handle-asynchronous-actions-during-SASL.patchapplication/octet-stream; name=v42-0003-libpq-handle-asynchronous-actions-during-SASL.patchDownload
From bda684d19cc7a9a24c84f72176b6a84cc08ebecd Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v42 3/6] libpq: handle asynchronous actions during SASL

This adds the ability for a SASL mechanism to signal to PQconnectPoll()
that some arbitrary work must be done, external to the Postgres
connection, before authentication can continue. The intent is for the
upcoming OAUTHBEARER mechanism to make use of this functionality.

To ensure that threads are not blocked waiting for the SASL mechanism to
make long-running calls, the mechanism communicates with the top-level
client via the "altsock": a file or socket descriptor, opaque to this
layer of libpq, which is signaled when work is ready to be done again.
This socket temporarily takes the place of the standard connection
descriptor, so PQsocket() clients should continue to operate correctly
using their existing polling implementations.

A mechanism should set an authentication callback (conn->async_auth())
and a cleanup callback (conn->cleanup_async_auth()), return SASL_ASYNC
during the exchange, and assign conn->altsock during the first call to
async_auth(). When the cleanup callback is called, either because
authentication has succeeded or because the connection is being
dropped, the altsock must be released and disconnected from the PGconn.
---
 src/interfaces/libpq/fe-auth-sasl.h  |  11 ++-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       | 114 +++++++++++++++++++--------
 src/interfaces/libpq/fe-auth.h       |   3 +-
 src/interfaces/libpq/fe-connect.c    |  73 ++++++++++++++++-
 src/interfaces/libpq/fe-misc.c       |  35 +++++---
 src/interfaces/libpq/libpq-fe.h      |   2 +
 src/interfaces/libpq/libpq-int.h     |   6 ++
 8 files changed, 201 insertions(+), 49 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index f0c62139092..f06f547c07d 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,18 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth/cleanup_async_auth appropriately
+	 *					before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 557e9c568b6..fe18615197f 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -205,7 +206,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 70753d8ec29..a9a23eb97ee 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -430,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -607,26 +607,48 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -671,7 +693,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -701,11 +723,25 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+
+		/*
+		 * The mechanism may optionally generate some output to send before
+		 * switching over to async auth, so continue onwards.
+		 */
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -1013,12 +1049,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1176,7 +1218,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1185,23 +1227,33 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 
 		case AUTH_REQ_SASL_CONT:
 		case AUTH_REQ_SASL_FIN:
-			if (conn->sasl_state == NULL)
 			{
-				appendPQExpBufferStr(&conn->errorMessage,
-									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
-				return STATUS_ERROR;
-			}
-			oldmsglen = conn->errorMessage.len;
-			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
-			{
-				/* Use this message if pg_SASL_continue didn't supply one */
-				if (conn->errorMessage.len == oldmsglen)
+				bool		final = false;
+
+				if (conn->sasl_state == NULL)
+				{
 					appendPQExpBufferStr(&conn->errorMessage,
-										 "fe_sendauth: error in SASL authentication\n");
-				return STATUS_ERROR;
+										 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
+					return STATUS_ERROR;
+				}
+				oldmsglen = conn->errorMessage.len;
+
+				if (areq == AUTH_REQ_SASL_FIN)
+					final = true;
+
+				if (pg_SASL_continue(conn, payloadlen, final, async) != STATUS_OK)
+				{
+					/*
+					 * Append a generic error message unless pg_SASL_continue
+					 * did set a more specific one already.
+					 */
+					if (conn->errorMessage.len == oldmsglen)
+						appendPQExpBufferStr(&conn->errorMessage,
+											 "fe_sendauth: error in SASL authentication\n");
+					return STATUS_ERROR;
+				}
+				break;
 			}
-			break;
 
 		default:
 			libpq_append_conn_error(conn, "authentication method %u not supported", areq);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index df0a68b0b21..1d4991f8996 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -19,7 +19,8 @@
 
 
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index ccbcbb7acda..a90f261cdb7 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -501,6 +501,19 @@ pqDropConnection(PGconn *conn, bool flushInput)
 	conn->cmd_queue_recycle = NULL;
 
 	/* Free authentication/encryption state */
+	if (conn->cleanup_async_auth)
+	{
+		/*
+		 * Any in-progress async authentication should be torn down first so
+		 * that cleanup_async_auth() can depend on the other authentication
+		 * state if necessary.
+		 */
+		conn->cleanup_async_auth(conn);
+		conn->cleanup_async_auth = NULL;
+	}
+	conn->async_auth = NULL;
+	conn->altsock = PGINVALID_SOCKET;	/* cleanup_async_auth() should have
+										 * done this, but make sure. */
 #ifdef ENABLE_GSS
 	{
 		OM_uint32	min_s;
@@ -2847,6 +2860,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3882,6 +3896,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -4070,7 +4085,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -4107,6 +4132,49 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+
+				if (!conn->async_auth || !conn->cleanup_async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn, "async authentication has no handler");
+					goto error_return;
+				}
+
+				/* Drive some external authentication work. */
+				status = conn->async_auth(conn);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/* Done. Tear down the async implementation. */
+					conn->cleanup_async_auth(conn);
+					conn->cleanup_async_auth = NULL;
+					Assert(conn->altsock == PGINVALID_SOCKET);
+
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+
+					goto keep_going;
+				}
+
+				/*
+				 * Caller needs to poll some more. conn->async_auth() should
+				 * have assigned an altsock to poll on.
+				 */
+				Assert(conn->altsock != PGINVALID_SOCKET);
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4788,6 +4856,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -7439,6 +7508,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 2c60eb5b569..d78445c70af 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1049,34 +1049,43 @@ pqWriteReady(PGconn *conn)
  * or both.  Returns >0 if one or more conditions are met, 0 if it timed
  * out, -1 if an error occurred.
  *
- * If SSL is in use, the SSL buffer is checked prior to checking the socket
- * for read data directly.
+ * If an altsock is set for asynchronous authentication, that will be used in
+ * preference to the "server" socket. Otherwise, if SSL is in use, the SSL
+ * buffer is checked prior to checking the socket for read data directly.
  */
 static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	if (conn->altsock != PGINVALID_SOCKET)
+		sock = conn->altsock;
+	else
 	{
-		libpq_append_conn_error(conn, "invalid socket");
-		return -1;
-	}
+		sock = conn->sock;
+		if (sock == PGINVALID_SOCKET)
+		{
+			libpq_append_conn_error(conn, "invalid socket");
+			return -1;
+		}
 
 #ifdef USE_SSL
-	/* Check for SSL library buffering read bytes */
-	if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
-	{
-		/* short-circuit the select */
-		return 1;
-	}
+		/* Check for SSL library buffering read bytes */
+		if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
+		{
+			/* short-circuit the select */
+			return 1;
+		}
 #endif
+	}
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index cce9ce60c55..a3491faf0c3 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -103,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0d5b5fe0be..2546f9f8a50 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -513,6 +513,12 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callbacks for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn);
+	void		(*cleanup_async_auth) (PGconn *conn);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
-- 
2.34.1

v42-0004-Add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v42-0004-Add-OAUTHBEARER-SASL-mechanism.patchDownload
From 3dc6dd3433cf9b49f70aa74c7ebc5293b7934d08 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v42 4/6] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 configure                                     |  213 ++
 configure.ac                                  |   32 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  381 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   23 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    6 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2541 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1141 ++++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   50 +-
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  264 ++
 .../modules/oauth_validator/t/001_server.pl   |  551 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 57 files changed, 8206 insertions(+), 31 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89d..8c518c317e7 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -219,6 +219,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -312,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index a0b5e10ca39..e6b329ad2fe 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,144 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12207,6 +12356,59 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
+fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13955,6 +14157,17 @@ fi
 
 done
 
+fi
+
+if test "$with_libcurl" = yes; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index d713360f340..b13fee83701 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,27 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1315,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1588,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_libcurl" = yes; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; one must be obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between identities issued by the OAuth provider
+        and database user names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a8866292d46..3aab7761e4c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index ebdb5b3bc2d..3fca2910dad 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1141,6 +1141,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2582,6 +2595,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c208..0ed80f547c4 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10129,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements client-side support for OAuth
+   bearer-token authentication, including a builtin Device Authorization flow
+   and a hook API that allows applications to customize or replace parts of
+   that flow. This section describes the client-side API.
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
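The chaining pattern described above can be sketched as follows. The typedefs and the <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol> value are stand-ins mirroring the synopses in this section (a real application would include <filename>libpq-fe.h</filename> instead), and <function>stub_prev_hook</function> is a hypothetical previously installed hook:

```c
#include <stddef.h>

/* Stand-ins mirroring the libpq declarations in this section; a real
 * application would get these from libpq-fe.h instead. */
typedef int PGauthData;
typedef struct PGconn PGconn;   /* opaque connection handle */
typedef int (*PQauthDataHook_type)(PGauthData type, PGconn *conn, void *data);

#define PQAUTHDATA_PROMPT_OAUTH_DEVICE 0    /* illustrative value only */

static PQauthDataHook_type prev_hook;       /* saved at installation time */

/* A stub "previous" hook, standing in for whatever was installed before. */
static int
stub_prev_hook(PGauthData type, PGconn *conn, void *data)
{
    (void) type; (void) conn; (void) data;
    return 42;
}

/* A cooperative hook: handle device prompts ourselves, delegate every
 * other authdata type to the previously installed hook. */
static int
my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
{
    if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
    {
        (void) conn; (void) data;
        /* ... display the prompt here ... */
        return 1;               /* >0: handled successfully */
    }

    return prev_hook(type, conn, data);     /* delegate to the chain */
}
```

Installation would then look like `prev_hook = PQgetAuthDataHook(); PQsetAuthDataHook(my_auth_data_hook);`, so that unhandled requests still reach the earlier handler.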
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
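As a sketch of a replacement prompt, the following formats the same information for display elsewhere (for example, a GUI dialog). The struct mirrors the synopsis above; <function>format_device_prompt</function> is a hypothetical helper, with return values following the hook conventions (greater than zero for success, negative for an error):

```c
#include <stdio.h>
#include <string.h>

/* Mirrors the PGpromptOAuthDevice definition above; a real hook would
 * use the definition from libpq-fe.h. */
typedef struct _PGpromptOAuthDevice
{
    const char *verification_uri;   /* verification URI to visit */
    const char *user_code;          /* user code to enter */
} PGpromptOAuthDevice;

/* Build the message a replacement prompt might show in a dialog box,
 * instead of printing to standard error. Returns 1 on success or -1 on
 * error, matching the hook's return conventions. */
static int
format_device_prompt(const PGpromptOAuthDevice *prompt, char *buf, size_t buflen)
{
    int n = snprintf(buf, buflen, "Visit %s and enter the code: %s",
                     prompt->verification_uri, prompt->user_code);

    return (n > 0 && (size_t) n < buflen) ? 1 : -1;
}
```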
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    While the details vary by provider, a validator generally has three
+    separate responsibilities, described below.
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
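As an illustration of the content checks named above (issuer, audience, and validity period), consider the following sketch. The <symbol>TokenClaims</symbol> struct is hypothetical, representing claims already decoded from a token; verifying the token's cryptographic signature, which must happen first, is provider-specific and omitted here:

```c
#include <stdbool.h>
#include <string.h>
#include <time.h>

/* Hypothetical claims decoded from an offline-validated token. */
typedef struct TokenClaims
{
    const char *issuer;     /* where is this token from? */
    const char *audience;   /* who is this token for? */
    time_t      not_before; /* start of validity period */
    time_t      expires_at; /* end of validity period */
} TokenClaims;

/* Reject the token unless it was issued by our trusted provider, is
 * intended for this server, and is currently within its validity period. */
static bool
claims_acceptable(const TokenClaims *c, const char *trusted_issuer,
                  const char *our_audience, time_t now)
{
    if (strcmp(c->issuer, trusted_issuer) != 0)
        return false;
    if (strcmp(c->audience, our_audience) != 0)
        return false;
    if (now < c->not_before || now >= c->expires_at)
        return false;

    return true;
}
```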
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         <application>libpq</application> does not usually meet this bar, since it is designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
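The retry pattern might be sketched as follows, with <function>check_for_interrupts()</function> standing in for the server's <function>CHECK_FOR_INTERRUPTS()</function> macro and <function>flaky_op</function> as a demonstration stub:

```c
#include <errno.h>

/* Stand-in for the server's CHECK_FOR_INTERRUPTS() macro, which services
 * pending authentication timeouts and shutdown requests. */
static int interrupt_checks;
static void
check_for_interrupts(void)
{
    interrupt_checks++;
}

/* Retry a blocking operation while remaining interruptible: whenever the
 * call is interrupted or would block, service interrupts before retrying.
 * op() follows the usual convention of returning -1 and setting errno. */
static int
retry_interruptible(int (*op)(void))
{
    for (;;)
    {
        int rc = op();

        if (rc >= 0)
            return rc;
        if (errno != EINTR && errno != EAGAIN)
            return -1;          /* a real failure; let the caller handle it */

        check_for_interrupts(); /* CHECK_FOR_INTERRUPTS() in a real module */
    }
}

/* Demonstration stub: fails with EINTR twice, then succeeds. */
static int calls;
static int
flaky_op(void)
{
    if (++calls <= 2)
    {
        errno = EINTR;
        return -1;
    }
    return 7;
}
```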
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It is trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct pg_ident maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
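Putting the pieces together, a minimal initialization function might look like the following sketch. The typedefs are stand-ins mirroring the declarations above (a real module would include the server's validator header), and <function>my_validate</function> is a placeholder:

```c
#include <stddef.h>

/* Stand-ins mirroring the declarations above; a real module would include
 * the server's OAuth validator header instead. */
typedef struct ValidatorModuleState ValidatorModuleState;
typedef struct ValidatorModuleResult ValidatorModuleResult;
typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
                                                       const char *token,
                                                       const char *role);

typedef struct OAuthValidatorCallbacks
{
    ValidatorStartupCB startup_cb;
    ValidatorShutdownCB shutdown_cb;
    ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

/* Placeholder validate callback; see the Validate Callback section. */
static ValidatorModuleResult *
my_validate(ValidatorModuleState *state, const char *token, const char *role)
{
    (void) state; (void) token; (void) role;
    return NULL;                /* NULL signals an internal error */
}

/* Server lifetime: a static const in global scope, as recommended above.
 * Only validate_cb is set; the optional callbacks are left NULL. */
static const OAuthValidatorCallbacks validator_callbacks = {
    .validate_cb = my_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &validator_callbacks;
}
```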
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
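For example, a startup callback might allocate private state as in the following sketch. The <symbol>ValidatorModuleState</symbol> typedef is a stand-in mirroring the field used above, <symbol>MyValidatorState</symbol> is hypothetical, and <function>malloc</function> stands in for whatever allocator a real module would use:

```c
#include <stdlib.h>

/* Stand-in mirroring the private_data field referenced above; a real
 * module would use the server's definition. */
typedef struct ValidatorModuleState
{
    void *private_data;
} ValidatorModuleState;

/* Hypothetical per-module state: for instance, a cache of trusted
 * signing keys for offline validation. */
typedef struct MyValidatorState
{
    int key_cache_size;
} MyValidatorState;

/* Startup callback: allocate the module's state and stash it where the
 * later callbacks can find it. */
static void
my_startup(ValidatorModuleState *state)
{
    MyValidatorState *priv = malloc(sizeof(*priv));

    priv->key_cache_size = 0;   /* populated lazily during validation */
    state->private_data = priv;
}
```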
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <structname>ValidatorModuleResult</structname> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will proceed only if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined from the
+    token) must be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f7..ae4732df656 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index cfd654d2916..842559ac3ac 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,24 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+  endif
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3034,6 +3052,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3702,6 +3724,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 1278b7744f4..6a745b5984a 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..6155d63a116
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum("delegate_ident_mapping=true");
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 38cb9e970d5..db582d2d62c 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4823,6 +4824,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 079efa1baa7..378aa8438d6 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..9b1ed7996d3 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -663,6 +666,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 6a0def7273c..e9422888e3e 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..258602cfbfc
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2541 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison: media type parameters may follow
+	 * the expected type, so we can't simply compare the whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect while CURLOPT_VERBOSE is set, so enable
+		 * that as well.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each chunk is
+ * defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides a device authorization
+ * endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * Some implementations in the wild are reported to use 403 instead, which
+	 * would violate the specification. For now we stick to the spec, but we
+	 * may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if the hook declines
+ * to handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..cc53e2bdd1a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1141 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->token must
+ * be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then resets the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index a9a23eb97ee..10bc0bebbc3 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1574,3 +1576,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index a90f261cdb7..f876ffacb7f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -649,6 +667,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1138,7 +1157,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1148,10 +1167,11 @@ fill_allowed_sasl_mechs(PGconn *conn)
 	 * - handle the new mechanism name in the require_auth portion of
 	 *   pqConnectOptions2(), below.
 	 */
-	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
 					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
 
 	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
 }
 
 /*
@@ -1513,6 +1533,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4105,7 +4129,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4364,6 +4400,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -4976,6 +5015,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5129,6 +5174,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..5f8d608261e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 1a5a223e1af..4180e35f8cf 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d4..0c2ccc75a63 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks; this
+ *	  validator always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..12fe70c990b
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,264 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				fprintf(stderr, "WSAStartup failed: %d\n", err);
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..80f52585896
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,551 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
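An aside for reviewers: the oauth_client_secret failure cases above exercise the mock server's Basic-auth check. The credential encoding it expects (each half form-urlencoded before being joined and Base64-encoded, in the style of RFC 6749 §2.3.1) can be sketched as below; the helper name is illustrative, not part of the patch:

```python
import base64
import urllib.parse


def basic_auth_header(client_id, secret):
    # Each credential is form-urlencoded first, then the pair is joined
    # with a colon and Base64-encoded, mirroring the reconstruction done
    # by _check_authn() in t/oauth_server.py.
    user = urllib.parse.quote_plus(client_id)
    pw = urllib.parse.quote_plus(secret)
    creds = base64.b64encode(f"{user}:{pw}".encode()).decode()
    return f"Basic {creds}"


header = basic_auth_header("my id", "p+w")
print(header)
```

For example, the spaces and plus signs in the credentials survive the round trip: decoding the Base64 payload of the header above yields "my+id:p%2Bw".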
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..95cccf90dd8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is a glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP
+server in its standard library, so the original Perl implementation was ported
+to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
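The run() method above reads the daemon's port by slurping its stdout to EOF, relying on the child to close stdout once the port has been printed. A minimal self-contained sketch of that handshake, with a hypothetical inline child standing in for oauth_server.py:

```python
import subprocess
import sys

# Hypothetical stand-in child following the same protocol as
# t/oauth_server.py: print the port, then close stdout so the parent can
# simply read to end-of-file instead of counting bytes.
child_src = """
import os, sys
print(5432)
fd = sys.stdout.fileno()
sys.stdout.close()
os.close(fd)
"""

out = subprocess.run(
    [sys.executable, "-c", child_src],
    capture_output=True, text=True, check=True,
).stdout
port = int(out.strip())
print(port)
```

Closing the file descriptor (not just the Python-level stream) is what lets the parent's read return promptly even though the child keeps running.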
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..8ec09102027
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
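One note on the /param/ issuer above: do_POST() expects the client_id to carry the test parameters as Base64-encoded JSON. A small sketch of how a test constructs such a client_id, with the server-side decode mirrored for illustration (parameter values here are examples, not ones the suite necessarily uses):

```python
import base64
import json

# Encode test parameters into the client_id, as expected by the mock
# server's /param/ issuer.
params = {"stage": "token", "retries": 2, "interval": 1}
client_id = base64.b64encode(json.dumps(params).encode()).decode()

# Server-side decode, mirroring do_POST() in oauth_server.py.
decoded = json.loads(base64.b64decode(client_id))
print(decoded == params)
```

This keeps the test parameters out of band from the OAuth protocol itself, so no extra channel between the Perl tests and the Python daemon is needed.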
new file mode 100644
index 00000000000..bf94f091def
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index a92944e0d9c..1bfdbcca59f 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2514,6 +2514,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2557,7 +2562,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d03921a4822..e8d48249440 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3087,6 +3095,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3481,6 +3491,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v42-0005-XXX-fix-libcurl-link-error.patch
From 97e0a2aae26a0d2db4bb3e23cdcddef3e33d2fa5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v42 5/6] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But FreeBSD 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 8c518c317e7..97bb38c72c6 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -165,6 +165,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

v42-0006-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch
From db0167009b9737f22d35bcc6883edd1e31dca2d6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v42 6/6] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2671 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6452 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 97bb38c72c6..a6fab60bfd8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -318,6 +318,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -402,8 +403,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 842559ac3ac..1c4333214d6 100644
--- a/meson.build
+++ b/meson.build
@@ -3365,6 +3365,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3531,6 +3534,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..236057cd99e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..20e72a404aa
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..a3cbafe843e
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2671 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    The client is expected to complete the entire handshake, ending with the
+    standard dummy response and a FATAL error from the server.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # returned if the test doesn't provide an impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
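For context on the check_client_authn() assertions above: RFC 6749, Sec. 2.3.1 has the client form-urlencode the client_id and secret before joining them with ":" and base64-encoding the result into the Basic Authorization header. A sketch of the expected encoding (illustrative helper, not part of the patch):

```python
import base64
import urllib.parse

# Sketch of the Basic credential encoding checked above: client_id and
# secret are form-urlencoded first (RFC 6749, Sec. 2.3.1), then joined
# with ":" and base64-encoded.
def basic_authz_header(client_id: str, secret: str) -> str:
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(secret)
    creds = f"{user}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(creds).decode("ascii")

# A client_id of "a b" and a secret of ":" encode to "a+b" and "%3A".
assert basic_authz_header("a b", ":") == "Basic YStiOiUzQQ=="
```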
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
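To summarize the RFC 8628 polling rules that test_oauth_retry_interval() enforces: the retry interval defaults to 5 seconds when the device authorization response omits it, and every slow_down error adds another 5 seconds for all subsequent requests. A toy model of just those rules (not the actual libpq logic):

```python
# Toy model of the RFC 8628, Sec. 3.5 polling interval rules exercised
# above; illustrative only, not the libpq implementation.
def next_interval(current, error=None):
    if current is None:
        current = 5          # default when the response omits "interval"
    if error == "slow_down":
        current += 5         # slow_down bumps this and all later requests
    return current

assert next_interval(None) == 5
assert next_interval(1, "slow_down") == 6
```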
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get called
+                # back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern, wrapping each
+    in its own group. Not the most efficient approach, but easier to read and
+    maintain than one hand-built regex.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
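For what it's worth, alt_patterns() boils down to a str.join() one-liner; a quick usage sketch (the helper is reimplemented here so the snippet stands alone):

```python
import re

# Standalone copy of alt_patterns() for illustration: each alternative is
# wrapped in a group and the groups are joined with "|".
def alt_patterns(*patterns):
    return "|".join(f"({p})" for p in patterns)

pat = alt_patterns(r"foo\d+", "bar")
assert pat == r"(foo\d+)|(bar)"
assert re.search(pat, "foo42") and re.search(pat, "some bar")
```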
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema() tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
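For reference, the SASL failure body driving discovery in test_oauth_discovery() is the RFC 7628, Sec. 3.2.2 error response, extended with the "openid-configuration" member that points the client at the discovery document; the extra members in base_response model providers that add unknown fields, which the client must ignore. Its shape (URL is illustrative):

```python
import json

# Example failure body (shape only): a standard "status" member plus the
# "openid-configuration" discovery link; unknown members are ignored.
fail_resp = {
    "status": "invalid_token",
    "openid-configuration": "https://example.org/.well-known/openid-configuration",
    "extra_object": {"key": "value"},  # must be skipped by the client
}
wire = json.dumps(fail_resp).encode("utf-8")
assert json.loads(wire)["status"] == "invalid_token"
```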
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # Equivalent to INT_MAX in limits.h: UINT_MAX // 2.
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
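(For reviewers: the error strings asserted in the parametrized cases above come from libpq's validation of the server's SASL error JSON. As a rough standalone sketch of the checking order — a Python approximation only; the real logic is C inside libpq, and details such as duplicate-key detection are not reproduced here, since json.loads keeps the last value for a repeated key:)

```python
import json


def parse_sasl_error(raw):
    """Validate an OAUTHBEARER error payload in the order the tests expect.

    Returns the "status" value on success; raises ValueError with a
    message similar to libpq's. (Duplicate-field detection is omitted.)
    """
    # Encoding check comes first: the payload must be valid UTF-8.
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("server's error response is not valid UTF-8")

    # Then the payload must be well-formed JSON...
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as err:
        raise ValueError("failed to parse error response: %s" % err)

    # ...whose top-level element is an object...
    if not isinstance(doc, dict):
        raise ValueError("top-level element must be an object")

    # ...containing a string "status" member at the top level (a nested
    # "status" does not count).
    if "status" not in doc:
        raise ValueError("server sent error response without a status")
    if not isinstance(doc["status"], str):
        raise ValueError('field "status" must be a string')

    return doc["status"]
```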
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one that lets the
+    server tests request creation of a temporary Postgres instance.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    this is an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
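(The PG_TEST_EXTRA check above treats the variable as a space-separated list of opt-in test suites, matching the convention of the existing TAP tests. Reduced to a standalone helper — hypothetical name, for illustration only:)

```python
def python_tests_enabled(environ):
    """True if 'python' appears in the space-separated PG_TEST_EXTRA list."""
    extra_tests = environ.get("PG_TEST_EXTRA", "").split()
    return "python" in extra_tests
```

So `PG_TEST_EXTRA="kerberos python ssl"` enables the suite, while an unset or empty variable skips it.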
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        out = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            out.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            out.append(v)
+
+        out.append(b"")
+        return out
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length, by not enforcing
+        # a FixedSized during build. (The len calculation above defaults to the
+        # correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
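For illustration (not part of the patch): the Pq3 struct above mirrors the v3 wire framing, a one-byte type, a four-byte big-endian length that counts itself plus the payload, then the payload. A minimal stdlib sketch of the same framing, independent of the construct-based definitions:

```python
import struct

def build_packet(ptype: bytes, payload: bytes) -> bytes:
    # The len field covers itself (4 bytes) plus the payload, as in Pq3 above.
    return ptype + struct.pack("!I", len(payload) + 4) + payload

def parse_packet(buf: bytes):
    ptype = buf[:1]
    (length,) = struct.unpack_from("!I", buf, 1)
    payload = buf[5 : 5 + (length - 4)]
    return ptype, length, payload

# ReadyForQuery ('Z') carrying the single status byte 'I' (idle).
pkt = build_packet(b"Z", b"I")
assert pkt == b"Z\x00\x00\x00\x05I"
assert parse_packet(pkt) == (b"Z", 5, b"I")
```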
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds the translation map for hexdumps: any unprintable or non-ASCII byte
+    is rendered as '.'.
+    """
+    unprintable = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            unprintable += bytes([i])
+
+    unprintable += bytes(range(128, 256))
+
+    return bytes.maketrans(unprintable, b"." * len(unprintable))
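As an illustration outside the patch, the same maketrans/translate technique can be condensed into a comprehension; the effect on raw protocol bytes looks like this:

```python
# Sketch of the hexdump translation map: unprintable or non-ASCII bytes
# become '.', printable ASCII passes through unchanged.
unprintable = bytes(
    i for i in range(256) if i > 127 or not chr(i).isprintable()
)
table = bytes.maketrans(unprintable, b"." * len(unprintable))

# An AuthnRequest type byte followed by a length field, as it would appear
# in the right-hand column of a hexdump.
assert b"R\x00\x00\x00\x0a".translate(table) == b"R...."
assert b"\xffOK".translate(table) == b".OK"
```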
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member to assign to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
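For illustration (not part of the patch): tls_handshake() opens with send_startup(proto=protocol(1234, 5679)), the reserved SSLRequest "version". That request is always the same eight bytes on the wire, which a stdlib sketch makes concrete:

```python
import struct

# SSLRequest: an int32 length (8), then the magic version 1234.5679 packed
# as (major << 16) | minor. The server answers with a single 'S' or 'N'.
major, minor = 1234, 5679
code = (major << 16) | minor
request = struct.pack("!ii", 8, code)

assert code == 80877103
assert request == b"\x00\x00\x00\x08\x04\xd2\x16\x2f"
```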
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'test module for OAuth token validation',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture connects to the server provided by postgres_instance and
+    assumes that the current PGUSER has rights to create databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
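For illustration (not part of the patch): bearer_token() relies on secrets.token_urlsafe(n) emitting ceil(4n/3) characters of unpadded URL-safe base64, so requesting size // 4 * 3 bytes of entropy yields exactly size characters. A quick check of that arithmetic:

```python
import secrets

for size in (16, 64, 1024):
    nbytes = size // 4 * 3        # same computation as bearer_token()
    token = secrets.token_urlsafe(nbytes)
    assert len(token) == size
```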
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. As an alternative, the initial response's auth field may be
+    specified explicitly to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
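For illustration (not part of the patch): the initial client response built above follows RFC 7628's framing, a GS2 header ("n,,"), then \x01-separated key=value pairs, terminated by a double \x01. A stdlib sketch of building and splitting one such message (the token value is made up):

```python
def build_client_response(token: str) -> bytes:
    # GS2 header "n,," (no channel binding), then kvpairs, then the
    # double-\x01 terminator, per RFC 7628.
    return b"n,,\x01auth=Bearer " + token.encode() + b"\x01\x01"

def parse_auth(msg: bytes) -> bytes:
    header, kvpairs = msg.split(b",,", 1)
    assert header == b"n"
    fields = dict(
        kv.split(b"=", 1) for kv in kvpairs.split(b"\x01") if kv
    )
    return fields[b"auth"]

msg = build_client_response("abcd1234")  # hypothetical token value
assert parse_auth(msg) == b"Bearer abcd1234"
```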
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    Pulls packets off the pq3 connection until a packet with the desired type
+    is found. Raises a RuntimeError if an ErrorResponse is received first.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1

#189Kashif Zeeshan
kashi.zeeshan@gmail.com
In reply to: Jacob Champion (#187)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jan 14, 2025 at 6:00 AM Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Jan 13, 2025 at 3:21 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Next email will discuss the architectural bug that Kashif found.

Okay, here goes. A standard OAuth connection attempt looks like this
(oh, I hope Gmail doesn't mangle it):

   Issuer        User         libpq       Backend
      |            |            |
      |            x ---------> x ---------> o       [1] Startup Packet
      |            |            |            |
      |            |            x <--------- x       [2] OAUTHBEARER Request
      |            |            |            |
      |            |            x ---------> x       [3] Parameter Discovery
      |            |            |            |
      |            |            x <--------- o       [4] Parameters Stored
      |            |            |
      |            |            |
      |            |            |
      |            |            x ---------> o       [5] New Startup Packet
      |            |            |            |
      |            |            x <--------- x       [6] OAUTHBEARER Request
      |            |            |            |
      x <--------- x <--------> x            |
      x <--------- x <--------> x            |       [7] OAuth Handshake
      x <--------- x <--------> x            |
      |            |            |            |
      o            |            x ---------> x       [8] Send Token
                   |            |            |
                   | <--------- x <--------- x       [9] Connection Established
                   |            |            |
                   x <--------> x <--------> x
                   x <--------> x <--------> x       [10] Use the DB
                   .            .            .
                   .            .            .
                   .            .            .

When the server first asks for a token via OAUTHBEARER (step 2), the
client doesn't necessarily know what the server's requirements are for
a given user. It uses the rest of the doomed OAUTHBEARER exchange to
store the issuer and scope information in the PGconn (step 3-4), then
disconnects and sets need_new_connection in PQconnectPoll() so that a
second connection is immediately opened (step 5). When the OAUTHBEARER
mechanism takes control the second time, it has everything it needs to
conduct the login flow with the issuer (step 7). It then sends the
obtained token to establish a connection (steps 8 onward).

The problem is that step 7 is consuming the authentication_timeout for
the backend. I'm very good at completing these flows quickly, but if
you can't complete the browser prompts in time, you will simply not be
able to log into the server. Which is harsh to say the least. (Imagine
the pain if the standard psql password prompt timed out.) DBAs can get
around it by increasing the timeout, obviously, but that doesn't feel
very good as a solution.
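
For reference, the DBA workaround mentioned above is a one-line server setting change. authentication_timeout is a standard GUC; the 5min value here is purely illustrative:

```
# postgresql.conf -- sketch of the workaround mentioned above; the
# default of 1min is easily exceeded by an interactive browser flow
authentication_timeout = 5min
```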

Last week I looked into a fix where libpq would simply try again with
the stored token if the backend hangs up on it during the handshake,
but I think that will end up making the UX worse. The token validation
on the server side isn't going to be instantaneous, so if the client
is able to complete the token exchange in 59 seconds and send it to
the backend, there's an excellent chance that the connection is still
going to be torn down in a way that's indistinguishable from a crash.
We don't want the two sides to fight for time.

So I think what I'm going to need to do is modify v41-0003 to allow
the mechanism to politely hang up the connection while the flow is in
progress. This further decouples the lifetimes of the mechanism and
the async auth -- the async state now has to live outside of the SASL
exchange -- but I think it's probably more architecturally sound. Yell
at me if that sounds unmaintainable or if there's a more obvious fix
I'm missing.

Huge thanks to Kashif for pointing this out!

Thanks Jacob, the latest patch fixed the issues.


--Jacob

#190Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#186)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 14 Jan 2025, at 00:21, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

- 0001 moves PG_MAX_AUTH_TOKEN_LENGTH, as discussed upthread
- 0002 handles the non-OAuth-specific changes to require_auth (0005
now highlights the OAuth-specific pieces)
- 0003 adds SASL_ASYNC and its handling code

I was reading these diffs with the aim of trying to get them in sooner rather
than later to get us closer to the full patchset committed. Two small things
came to mind:

+  /*
+   * The mechanism should have set up the necessary callbacks; all we
+   * need to do is signal the caller.
+   */
+  *async = true;
+  return STATUS_OK;

Is it worth adding assertions here to ensure that everything has been set up
properly to help when adding a new mechanism in the future?
+   /* Done. Tear down the async implementation. */
+   conn->cleanup_async_auth(conn);
+   conn->cleanup_async_auth = NULL;
+   Assert(conn->altsock == PGINVALID_SOCKET);

In pqDropConnection() we set ->altsock to NULL just to be sure rather than
assert that cleanup has done so.  Shouldn't we be consistent in the
expectation and set to NULL here as well?

--
Daniel Gustafsson

#191Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#190)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jan 20, 2025 at 2:10 PM Daniel Gustafsson <daniel@yesql.se> wrote:

+  /*
+   * The mechanism should have set up the necessary callbacks; all we
+   * need to do is signal the caller.
+   */
+  *async = true;
+  return STATUS_OK;

Is it worth adding assertions here to ensure that everything has been set up
properly to help when adding a new mechanism in the future?

Yeah, I think that'd be helpful.

+   /* Done. Tear down the async implementation. */
+   conn->cleanup_async_auth(conn);
+   conn->cleanup_async_auth = NULL;
+   Assert(conn->altsock == PGINVALID_SOCKET);
In pqDropConnection() we set ->altsock to NULL

(I assume you mean PGINVALID_SOCKET?)

just to be sure rather than
assert that cleanup has done so. Shouldn't we be consistent in the
expectation and set to NULL here as well?

I'm not opposed; I just figured that the following code might be a bit
confusing:

Assert(conn->altsock == PGINVALID_SOCKET);
conn->altsock = PGINVALID_SOCKET;

But I can add a comment to the assignment to try to explain. I don't
know what the likelihood of landing code that trips that assertion is,
but an explicit assignment would at least stop problems from
cascading.

--Jacob

#192Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#191)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jan 20, 2025 at 4:40 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

But I can add a comment to the assignment to try to explain. I don't
know what the likelihood of landing code that trips that assertion is,
but an explicit assignment would at least stop problems from
cascading.

On second thought, I can just fail the connection if this happens.

--Jacob

#193Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#191)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 21 Jan 2025, at 01:40, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
On Mon, Jan 20, 2025 at 2:10 PM Daniel Gustafsson <daniel@yesql.se> wrote:

+   /* Done. Tear down the async implementation. */
+   conn->cleanup_async_auth(conn);
+   conn->cleanup_async_auth = NULL;
+   Assert(conn->altsock == PGINVALID_SOCKET);
In pqDropConnection() we set ->altsock to NULL

(I assume you mean PGINVALID_SOCKET?)

Doh, yes.

just to be sure rather than
assert that cleanup has done so. Shouldn't we be consistent in the
expectation and set to NULL here as well?

I'm not opposed; I just figured that the following code might be a bit
confusing:

Assert(conn->altsock == PGINVALID_SOCKET);
conn->altsock = PGINVALID_SOCKET;

But I can add a comment to the assignment to try to explain. I don't
know what the likelihood of landing code that trips that assertion is,
but an explicit assignment would at least stop problems from
cascading.

It is weird, but stopping the escalation of a problem seems important.

On 21 Jan 2025, at 01:43, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On second thought, I can just fail the connection if this happens.

Yeah, I think that's the best option here.

--
Daniel Gustafsson

#194Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#193)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Jan 21, 2025 at 1:29 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On second thought, I can just fail the connection if this happens.

Yeah, I think that's the best option here.

Done that way in v43.

--Jacob

Attachments:

since-v42.diff.txt (text/plain; charset=US-ASCII)
1:  1f38ec8039b = 1:  5d474397364 Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h
2:  9f87ffea1c7 = 2:  20452d21e0b require_auth: prepare for multiple SASL mechanisms
3:  bda684d19cc ! 3:  f0afefb80d6 libpq: handle asynchronous actions during SASL
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
     +		/*
     +		 * The mechanism should have set up the necessary callbacks; all we
     +		 * need to do is signal the caller.
    ++		 *
    ++		 * In non-assertion builds, this postcondition is enforced at time of
    ++		 * use in PQconnectPoll().
     +		 */
    ++		Assert(conn->async_auth);
    ++		Assert(conn->cleanup_async_auth);
    ++
     +		*async = true;
     +		return STATUS_OK;
     +	}
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +				if (!conn->async_auth || !conn->cleanup_async_auth)
     +				{
     +					/* programmer error; should not happen */
    -+					libpq_append_conn_error(conn, "async authentication has no handler");
    ++					libpq_append_conn_error(conn,
    ++											"internal error: async authentication has no handler");
     +					goto error_return;
     +				}
     +
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +					/* Done. Tear down the async implementation. */
     +					conn->cleanup_async_auth(conn);
     +					conn->cleanup_async_auth = NULL;
    -+					Assert(conn->altsock == PGINVALID_SOCKET);
    ++
    ++					/*
    ++					 * Cleanup must unset altsock, both as an indication that
    ++					 * it's been released, and to stop pqSocketCheck from
    ++					 * looking at the wrong socket after async auth is done.
    ++					 */
    ++					if (conn->altsock != PGINVALID_SOCKET)
    ++					{
    ++						Assert(false);
    ++						libpq_append_conn_error(conn,
    ++												"internal error: async cleanup did not release polling socket");
    ++						goto error_return;
    ++					}
     +
     +					/*
     +					 * Reenter the authentication exchange with the server. We
    @@ src/interfaces/libpq/fe-connect.c: keep_going:						/* We will come back to here
     +				 * Caller needs to poll some more. conn->async_auth() should
     +				 * have assigned an altsock to poll on.
     +				 */
    -+				Assert(conn->altsock != PGINVALID_SOCKET);
    ++				if (conn->altsock == PGINVALID_SOCKET)
    ++				{
    ++					Assert(false);
    ++					libpq_append_conn_error(conn,
    ++											"internal error: async authentication did not set a socket for polling");
    ++					goto error_return;
    ++				}
    ++
     +				return status;
     +			}
     +
4:  3dc6dd3433c = 4:  711ca3f1efc Add OAUTHBEARER SASL mechanism
5:  97e0a2aae26 = 5:  66ef3b4b687 XXX fix libcurl link error
6:  db0167009b9 = 6:  4df1bc59638 DO NOT MERGE: Add pytest suite for OAuth
v43-0001-Move-PG_MAX_AUTH_TOKEN_LENGTH-to-libpq-auth.h.patch (application/octet-stream)
From 5d4743973640313f08883ad1ed88f6ed69054e56 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v43 1/6] Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h

OAUTHBEARER would like to use this as a limit on Bearer token messages
coming from the client, so promote it to the header file.
---
 src/backend/libpq/auth.c | 16 ----------------
 src/include/libpq/auth.h | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 46facc275ef..d6ef32cc823 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 9157dbe6092..902c5f6de32 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
-- 
2.34.1

v43-0002-require_auth-prepare-for-multiple-SASL-mechanism.patch (application/octet-stream)
From 20452d21e0bdced70b53d3d3afa753ea6bf96d84 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 16 Dec 2024 13:57:14 -0800
Subject: [PATCH v43 2/6] require_auth: prepare for multiple SASL mechanisms

Prior to this patch, the require_auth implementation assumed that the
AuthenticationSASL protocol message was synonymous with SCRAM-SHA-256.
In preparation for the OAUTHBEARER SASL mechanism, split the
implementation into two tiers: the first checks the acceptable
AUTH_REQ_* codes, and the second checks acceptable mechanisms if
AUTH_REQ_SASL et al are permitted.

conn->allowed_sasl_mechs is the list of pointers to acceptable
mechanisms. (Since we'll support only a small number of mechanisms, this
is an array of static length to minimize bookkeeping.) pg_SASL_init()
will bail if the selected mechanism isn't contained in this array.

Since there's only one mechanism supported right now, one branch of the
second tier cannot be exercised yet (it's marked with Assert(false)).
This assertion will need to be removed when the next mechanism is added.
---
 src/interfaces/libpq/fe-auth.c            |  29 ++++
 src/interfaces/libpq/fe-connect.c         | 178 +++++++++++++++++++---
 src/interfaces/libpq/libpq-int.h          |   2 +
 src/test/authentication/t/001_password.pl |  10 ++
 4 files changed, 202 insertions(+), 17 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 7e478489b71..70753d8ec29 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -543,6 +543,35 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
+	/* Make sure require_auth is satisfied. */
+	if (conn->require_auth)
+	{
+		bool		allowed = false;
+
+		for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		{
+			if (conn->sasl == conn->allowed_sasl_mechs[i])
+			{
+				allowed = true;
+				break;
+			}
+		}
+
+		if (!allowed)
+		{
+			/*
+			 * TODO: this is dead code until a second SASL mechanism is added;
+			 * the connection can't have proceeded past check_expected_areq()
+			 * if no SASL methods are allowed.
+			 */
+			Assert(false);
+
+			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
+									conn->require_auth, selected_mechanism);
+			goto error;
+		}
+	}
+
 	if (conn->channel_binding[0] == 'r' &&	/* require */
 		strcmp(selected_mechanism, SCRAM_SHA_256_PLUS_NAME) != 0)
 	{
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 7878e2e33af..ccbcbb7acda 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -1117,6 +1117,56 @@ libpq_prng_init(PGconn *conn)
 	pg_prng_seed(&conn->prng_state, rseed);
 }
 
+/*
+ * Fills the connection's allowed_sasl_mechs list with all supported SASL
+ * mechanisms.
+ */
+static inline void
+fill_allowed_sasl_mechs(PGconn *conn)
+{
+	/*---
+	 * We only support one mechanism at the moment, so rather than deal with a
+	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
+	 * rely on the compile-time assertion here to keep us honest.
+	 *
+	 * To add a new mechanism to require_auth,
+	 * - update the length of conn->allowed_sasl_mechs,
+	 * - add the new pg_fe_sasl_mech pointer to this function, and
+	 * - handle the new mechanism name in the require_auth portion of
+	 *   pqConnectOptions2(), below.
+	 */
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
+
+	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+}
+
+/*
+ * Clears the connection's allowed_sasl_mechs list.
+ */
+static inline void
+clear_allowed_sasl_mechs(PGconn *conn)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		conn->allowed_sasl_mechs[i] = NULL;
+}
+
+/*
+ * Helper routine that searches the static allowed_sasl_mechs list for a
+ * specific mechanism.
+ */
+static inline int
+index_of_allowed_sasl_mech(PGconn *conn, const pg_fe_sasl_mech *mech)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+	{
+		if (conn->allowed_sasl_mechs[i] == mech)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  *		pqConnectOptions2
  *
@@ -1358,17 +1408,19 @@ pqConnectOptions2(PGconn *conn)
 		bool		negated = false;
 
 		/*
-		 * By default, start from an empty set of allowed options and add to
-		 * it.
+		 * By default, start from an empty set of allowed methods and
+		 * mechanisms, and add to it.
 		 */
 		conn->auth_required = true;
 		conn->allowed_auth_methods = 0;
+		clear_allowed_sasl_mechs(conn);
 
 		for (first = true, more = true; more; first = false)
 		{
 			char	   *method,
 					   *part;
-			uint32		bits;
+			uint32		bits = 0;
+			const pg_fe_sasl_mech *mech = NULL;
 
 			part = parse_comma_separated_list(&s, &more);
 			if (part == NULL)
@@ -1384,11 +1436,12 @@ pqConnectOptions2(PGconn *conn)
 				if (first)
 				{
 					/*
-					 * Switch to a permissive set of allowed options, and
-					 * subtract from it.
+					 * Switch to a permissive set of allowed methods and
+					 * mechanisms, and subtract from it.
 					 */
 					conn->auth_required = false;
 					conn->allowed_auth_methods = -1;
+					fill_allowed_sasl_mechs(conn);
 				}
 				else if (!negated)
 				{
@@ -1413,6 +1466,10 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
+			/*
+			 * First group: methods that can be handled solely with the
+			 * authentication request codes.
+			 */
 			if (strcmp(method, "password") == 0)
 			{
 				bits = (1 << AUTH_REQ_PASSWORD);
@@ -1431,13 +1488,22 @@ pqConnectOptions2(PGconn *conn)
 				bits = (1 << AUTH_REQ_SSPI);
 				bits |= (1 << AUTH_REQ_GSS_CONT);
 			}
+
+			/*
+			 * Next group: SASL mechanisms. All of these use the same request
+			 * codes, so the list of allowed mechanisms is tracked separately.
+			 *
+			 * fill_allowed_sasl_mechs() must be updated when adding a new
+			 * mechanism here!
+			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
-				/* This currently assumes that SCRAM is the only SASL method. */
-				bits = (1 << AUTH_REQ_SASL);
-				bits |= (1 << AUTH_REQ_SASL_CONT);
-				bits |= (1 << AUTH_REQ_SASL_FIN);
+				mech = &pg_scram_mech;
 			}
+
+			/*
+			 * Final group: meta-options.
+			 */
 			else if (strcmp(method, "none") == 0)
 			{
 				/*
@@ -1473,20 +1539,68 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
-			/* Update the bitmask. */
-			if (negated)
+			if (mech)
 			{
-				if ((conn->allowed_auth_methods & bits) == 0)
-					goto duplicate;
+				/*
+				 * Update the mechanism set only. The method bitmask will be
+				 * updated for SASL further down.
+				 */
+				Assert(!bits);
+
+				if (negated)
+				{
+					/* Remove the existing mechanism from the list. */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i < 0)
+						goto duplicate;
 
-				conn->allowed_auth_methods &= ~bits;
+					conn->allowed_sasl_mechs[i] = NULL;
+				}
+				else
+				{
+					/*
+					 * Find a space to put the new mechanism (after making
+					 * sure it's not already there).
+					 */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i >= 0)
+						goto duplicate;
+
+					i = index_of_allowed_sasl_mech(conn, NULL);
+					if (i < 0)
+					{
+						/* Should not happen; the pointer list is corrupted. */
+						Assert(false);
+
+						conn->status = CONNECTION_BAD;
+						libpq_append_conn_error(conn,
+												"internal error: no space in allowed_sasl_mechs");
+						free(part);
+						return false;
+					}
+
+					conn->allowed_sasl_mechs[i] = mech;
+				}
 			}
 			else
 			{
-				if ((conn->allowed_auth_methods & bits) == bits)
-					goto duplicate;
+				/* Update the method bitmask. */
+				Assert(bits);
+
+				if (negated)
+				{
+					if ((conn->allowed_auth_methods & bits) == 0)
+						goto duplicate;
+
+					conn->allowed_auth_methods &= ~bits;
+				}
+				else
+				{
+					if ((conn->allowed_auth_methods & bits) == bits)
+						goto duplicate;
 
-				conn->allowed_auth_methods |= bits;
+					conn->allowed_auth_methods |= bits;
+				}
 			}
 
 			free(part);
@@ -1505,6 +1619,36 @@ pqConnectOptions2(PGconn *conn)
 			free(part);
 			return false;
 		}
+
+		/*
+		 * Finally, allow SASL authentication requests if (and only if) we've
+		 * allowed any mechanisms.
+		 */
+		{
+			bool		allowed = false;
+			const uint32 sasl_bits =
+				(1 << AUTH_REQ_SASL)
+				| (1 << AUTH_REQ_SASL_CONT)
+				| (1 << AUTH_REQ_SASL_FIN);
+
+			for (i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+			{
+				if (conn->allowed_sasl_mechs[i])
+				{
+					allowed = true;
+					break;
+				}
+			}
+
+			/*
+			 * For the standard case, add the SASL bits to the (default-empty)
+			 * set if needed. For the negated case, remove them.
+			 */
+			if (!negated && allowed)
+				conn->allowed_auth_methods |= sasl_bits;
+			else if (negated && !allowed)
+				conn->allowed_auth_methods &= ~sasl_bits;
+		}
 	}
 
 	/*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4be5fd7ae4f..e0d5b5fe0be 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -505,6 +505,8 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
+	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 	char		current_auth_response;	/* used by pqTraceOutputMessage to
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 773238b76fd..1357f806b6f 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -277,6 +277,16 @@ $node->connect_fails(
 	"require_auth methods cannot be duplicated, !none case",
 	expected_stderr =>
 	  qr/require_auth method "!none" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=scram-sha-256,scram-sha-256",
+	"require_auth methods cannot be duplicated, scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "scram-sha-256" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=!scram-sha-256,!scram-sha-256",
+	"require_auth methods cannot be duplicated, !scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "!scram-sha-256" is specified more than once/);
 
 # Unknown value defined in require_auth.
 $node->connect_fails(
-- 
2.34.1

v43-0003-libpq-handle-asynchronous-actions-during-SASL.patch (application/octet-stream)
From f0afefb80d6c2c7c0e23aa4e43585786c90b9c72 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v43 3/6] libpq: handle asynchronous actions during SASL

This adds the ability for a SASL mechanism to signal to PQconnectPoll()
that some arbitrary work must be done, external to the Postgres
connection, before authentication can continue. The intent is for the
upcoming OAUTHBEARER mechanism to make use of this functionality.

To ensure that threads are not blocked waiting for the SASL mechanism to
make long-running calls, the mechanism communicates with the top-level
client via the "altsock": a file or socket descriptor, opaque to this
layer of libpq, which is signaled when work is ready to be done again.
This socket temporarily takes the place of the standard connection
descriptor, so PQsocket() clients should continue to operate correctly
using their existing polling implementations.

A mechanism should set an authentication callback (conn->async_auth())
and a cleanup callback (conn->cleanup_async_auth()), return SASL_ASYNC
during the exchange, and assign conn->altsock during the first call to
async_auth(). When the cleanup callback is called, either because
authentication has succeeded or because the connection is being
dropped, the altsock must be released and disconnected from the PGconn.
---
 src/interfaces/libpq/fe-auth-sasl.h  |  11 ++-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       | 120 ++++++++++++++++++++-------
 src/interfaces/libpq/fe-auth.h       |   3 +-
 src/interfaces/libpq/fe-connect.c    |  93 ++++++++++++++++++++-
 src/interfaces/libpq/fe-misc.c       |  35 +++++---
 src/interfaces/libpq/libpq-fe.h      |   2 +
 src/interfaces/libpq/libpq-int.h     |   6 ++
 8 files changed, 227 insertions(+), 49 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index f0c62139092..f06f547c07d 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,18 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth/cleanup_async_auth appropriately
+	 *					before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 557e9c568b6..fe18615197f 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -205,7 +206,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 70753d8ec29..761ee8f88f7 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -430,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -607,26 +607,54 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 *
+		 * In non-assertion builds, this postcondition is enforced at time of
+		 * use in PQconnectPoll().
+		 */
+		Assert(conn->async_auth);
+		Assert(conn->cleanup_async_auth);
+
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -671,7 +699,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -701,11 +729,25 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+
+		/*
+		 * The mechanism may optionally generate some output to send before
+		 * switching over to async auth, so continue onwards.
+		 */
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -1013,12 +1055,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1176,7 +1224,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1185,23 +1233,33 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 
 		case AUTH_REQ_SASL_CONT:
 		case AUTH_REQ_SASL_FIN:
-			if (conn->sasl_state == NULL)
 			{
-				appendPQExpBufferStr(&conn->errorMessage,
-									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
-				return STATUS_ERROR;
-			}
-			oldmsglen = conn->errorMessage.len;
-			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
-			{
-				/* Use this message if pg_SASL_continue didn't supply one */
-				if (conn->errorMessage.len == oldmsglen)
+				bool		final = false;
+
+				if (conn->sasl_state == NULL)
+				{
 					appendPQExpBufferStr(&conn->errorMessage,
-										 "fe_sendauth: error in SASL authentication\n");
-				return STATUS_ERROR;
+										 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
+					return STATUS_ERROR;
+				}
+				oldmsglen = conn->errorMessage.len;
+
+				if (areq == AUTH_REQ_SASL_FIN)
+					final = true;
+
+				if (pg_SASL_continue(conn, payloadlen, final, async) != STATUS_OK)
+				{
+					/*
+					 * Append a generic error message unless pg_SASL_continue
+					 * did set a more specific one already.
+					 */
+					if (conn->errorMessage.len == oldmsglen)
+						appendPQExpBufferStr(&conn->errorMessage,
+											 "fe_sendauth: error in SASL authentication\n");
+					return STATUS_ERROR;
+				}
+				break;
 			}
-			break;
 
 		default:
 			libpq_append_conn_error(conn, "authentication method %u not supported", areq);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index df0a68b0b21..1d4991f8996 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -19,7 +19,8 @@
 
 
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index ccbcbb7acda..85ebf9f6d87 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -501,6 +501,19 @@ pqDropConnection(PGconn *conn, bool flushInput)
 	conn->cmd_queue_recycle = NULL;
 
 	/* Free authentication/encryption state */
+	if (conn->cleanup_async_auth)
+	{
+		/*
+		 * Any in-progress async authentication should be torn down first so
+		 * that cleanup_async_auth() can depend on the other authentication
+		 * state if necessary.
+		 */
+		conn->cleanup_async_auth(conn);
+		conn->cleanup_async_auth = NULL;
+	}
+	conn->async_auth = NULL;
+	conn->altsock = PGINVALID_SOCKET;	/* cleanup_async_auth() should have
+										 * done this, but make sure. */
 #ifdef ENABLE_GSS
 	{
 		OM_uint32	min_s;
@@ -2847,6 +2860,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3882,6 +3896,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -4070,7 +4085,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -4107,6 +4132,69 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+
+				if (!conn->async_auth || !conn->cleanup_async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn,
+											"internal error: async authentication has no handler");
+					goto error_return;
+				}
+
+				/* Drive some external authentication work. */
+				status = conn->async_auth(conn);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/* Done. Tear down the async implementation. */
+					conn->cleanup_async_auth(conn);
+					conn->cleanup_async_auth = NULL;
+
+					/*
+					 * Cleanup must unset altsock, both as an indication that
+					 * it's been released, and to stop pqSocketCheck from
+					 * looking at the wrong socket after async auth is done.
+					 */
+					if (conn->altsock != PGINVALID_SOCKET)
+					{
+						Assert(false);
+						libpq_append_conn_error(conn,
+												"internal error: async cleanup did not release polling socket");
+						goto error_return;
+					}
+
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+
+					goto keep_going;
+				}
+
+				/*
+				 * Caller needs to poll some more. conn->async_auth() should
+				 * have assigned an altsock to poll on.
+				 */
+				if (conn->altsock == PGINVALID_SOCKET)
+				{
+					Assert(false);
+					libpq_append_conn_error(conn,
+											"internal error: async authentication did not set a socket for polling");
+					goto error_return;
+				}
+
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4788,6 +4876,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -7439,6 +7528,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 2c60eb5b569..d78445c70af 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1049,34 +1049,43 @@ pqWriteReady(PGconn *conn)
  * or both.  Returns >0 if one or more conditions are met, 0 if it timed
  * out, -1 if an error occurred.
  *
- * If SSL is in use, the SSL buffer is checked prior to checking the socket
- * for read data directly.
+ * If an altsock is set for asynchronous authentication, that will be used in
+ * preference to the "server" socket. Otherwise, if SSL is in use, the SSL
+ * buffer is checked prior to checking the socket for read data directly.
  */
 static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	if (conn->altsock != PGINVALID_SOCKET)
+		sock = conn->altsock;
+	else
 	{
-		libpq_append_conn_error(conn, "invalid socket");
-		return -1;
-	}
+		sock = conn->sock;
+		if (sock == PGINVALID_SOCKET)
+		{
+			libpq_append_conn_error(conn, "invalid socket");
+			return -1;
+		}
 
 #ifdef USE_SSL
-	/* Check for SSL library buffering read bytes */
-	if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
-	{
-		/* short-circuit the select */
-		return 1;
-	}
+		/* Check for SSL library buffering read bytes */
+		if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
+		{
+			/* short-circuit the select */
+			return 1;
+		}
 #endif
+	}
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index cce9ce60c55..a3491faf0c3 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -103,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0d5b5fe0be..2546f9f8a50 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -513,6 +513,11 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callbacks for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn);
+	void		(*cleanup_async_auth) (PGconn *conn);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
-- 
2.34.1

v43-0004-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 711ca3f1efc00e22b6155d1592f214098fa925e2 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v43 4/6] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 configure                                     |  213 ++
 configure.ac                                  |   32 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  381 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   23 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    6 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2541 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1141 ++++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   50 +-
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  264 ++
 .../modules/oauth_validator/t/001_server.pl   |  551 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 57 files changed, 8206 insertions(+), 31 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89d..8c518c317e7 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -219,6 +219,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -312,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/configure b/configure
index ceeef9b0915..0686496d0d2 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,144 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12365,59 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
+fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
@@ -13964,6 +14166,17 @@ fi
 
 done
 
+fi
+
+if test "$with_libcurl" = yes; then
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
 fi
 
 if test "$PORTNAME" = "win32" ; then
diff --git a/configure.ac b/configure.ac
index d713360f340..b13fee83701 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,27 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1315,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
@@ -1588,6 +1616,10 @@ elif test "$with_uuid" = ossp ; then
       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
 fi
 
+if test "$with_libcurl" = yes; then
+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+fi
+
 if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built, see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; one must be obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between user identities provided by the OAuth
+        validator and database user names.  See
+        <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
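As an illustration only (the issuer URL, scope names, and validator library name below are hypothetical), an HBA entry combining the options above might look like:

```
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samenet   oauth   issuer="https://issuer.example.com" scope="openid postgres" validator=my_validator
```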
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a8866292d46..3aab7761e4c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
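For example, a sketch of the corresponding <filename>postgresql.conf</filename> setting, naming a single (hypothetical) validator library:

```
oauth_validator_libraries = 'my_validator'
```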
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 271615e4a65..cdf73747da0 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1141,6 +1141,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2582,6 +2595,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c208..0ed80f547c4 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
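Putting the client-side parameters together, a connection attempt using the builtin device flow might look like the following (the host, issuer, client ID, and exact prompt text are illustrative; the prompt wording depends on the client):

```
$ psql 'host=db.example.com oauth_issuer=https://issuer.example.com oauth_client_id=f02c6361'
Visit https://issuer.example.com/device and enter the code: FPQ2-M4BG
```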
    </para>
   </sect2>
@@ -10020,6 +10129,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    TODO
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth 2.0
+       Token Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
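The interruptibility guideline above can be illustrated with a short retry loop. This is a minimal, self-contained sketch: `CHECK_FOR_INTERRUPTS()` is stubbed out here (a real module would use the server's macro from `miscadmin.h`), and the helper name `read_fully_interruptible` is purely illustrative.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

/* Stub standing in for the server's CHECK_FOR_INTERRUPTS() macro. */
#define CHECK_FOR_INTERRUPTS() ((void) 0)

/*
 * Read exactly len bytes from fd, retrying on EINTR/EAGAIN. Between
 * retries, give the server a chance to service authentication timeouts
 * and shutdown requests. Returns 0 on success, -1 on error or EOF.
 */
int
read_fully_interruptible(int fd, char *buf, size_t len)
{
	size_t		done = 0;

	while (done < len)
	{
		ssize_t		n = read(fd, buf + done, len - done);

		if (n < 0)
		{
			if (errno == EINTR || errno == EAGAIN)
			{
				CHECK_FOR_INTERRUPTS();
				continue;
			}
			return -1;
		}
		if (n == 0)
			return -1;			/* unexpected EOF */
		done += (size_t) n;
	}
	return 0;
}
```

The same pattern applies to any blocking call the module makes while the backend is waiting on it.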
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To provide
+   the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
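As a sketch, a minimal module might look like the following. The type declarations are inlined here for illustration only (a real module would get them from the server's headers), and the names `validator_callbacks` and `my_validate` are hypothetical:

```c
#include <stddef.h>

/* Inlined for illustration; normally provided by the server headers. */
typedef struct ValidatorModuleState ValidatorModuleState;
typedef struct ValidatorModuleResult ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
													   const char *token,
													   const char *role);

typedef struct OAuthValidatorCallbacks
{
	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

static ValidatorModuleResult *my_validate(ValidatorModuleState *state,
										  const char *token, const char *role);

/* Must have server lifetime: a static const at file scope qualifies. */
static const OAuthValidatorCallbacks validator_callbacks = {
	.validate_cb = my_validate,	/* the only required callback */
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &validator_callbacks;
}

static ValidatorModuleResult *
my_validate(ValidatorModuleState *state, const char *token, const char *role)
{
	(void) state;
	(void) token;
	(void) role;
	return NULL;				/* placeholder: signals an internal error */
}
```

Returning the address of a file-scope <literal>static const</literal> struct satisfies the server-lifetime requirement without any allocation.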
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access that memory in any way after returning
+    it.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
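The startup and shutdown callbacks are naturally written as a pair that allocates and frees <structfield>private_data</structfield>. In this self-contained sketch the state struct is stubbed down to the one field that matters here, and <structname>MyModuleData</structname>, <function>my_startup</function>, and <function>my_shutdown</function> are hypothetical names:

```c
#include <stdlib.h>

/* Stub: the server's struct has more fields; only private_data matters
 * for this sketch. */
typedef struct ValidatorModuleState
{
	void	   *private_data;
} ValidatorModuleState;

typedef struct
{
	int			cache_hits;		/* hypothetical module-local state */
} MyModuleData;

/* startup_cb: allocate module state once, at load time. */
void
my_startup(ValidatorModuleState *state)
{
	state->private_data = calloc(1, sizeof(MyModuleData));
}

/* shutdown_cb: release the state when the backend exits. */
void
my_shutdown(ValidatorModuleState *state)
{
	free(state->private_data);
	state->private_data = NULL;
}
```

Any state stashed in <structfield>private_data</structfield> by <function>my_startup</function> is then visible to later <function>validate_cb</function> invocations in the same backend.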
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index f4cef9e80f7..ae4732df656 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -336,6 +336,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 32fc89f3a4b..351f89a92dc 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,24 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+  endif
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3034,6 +3052,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3702,6 +3724,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 1278b7744f4..6a745b5984a 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..6155d63a116
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that a validation library is loaded; this should always be the
+	 * case, and an error here indicates a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
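Putting the HBA options from this patch together, hypothetical pg_hba.conf lines for the new "oauth" method might look as follows (issuer, scope, and validator names are illustrative):

```
# Authenticate via OAuth, then map the validated identity through pg_ident:
host    all    all    0.0.0.0/0    oauth    issuer="https://oauth.example.org" scope="openid" validator="my_validator"

# Let the validator make the authorization decision itself, bypassing pg_ident:
host    all    all    0.0.0.0/0    oauth    issuer="https://oauth.example.org" scope="openid" delegate_ident_mapping=1
```

Both assume the validator library is listed in postgresql.conf, e.g. `oauth_validator_libraries = 'my_validator'`; the `validator` option may be omitted when exactly one library is configured.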
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 38cb9e970d5..db582d2d62c 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4823,6 +4824,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 079efa1baa7..378aa8438d6 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..9b1ed7996d3 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -663,6 +666,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 6a0def7273c..e9422888e3e 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..258602cfbfc
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2541 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
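For reference, a Device Authorization response carrying the fields parsed above (RFC 8628 Sec. 3.2) looks like the following; all values are illustrative:

```json
{
  "device_code": "GmRhmhcxhwAzkoEqiMEg",
  "user_code": "FPQ2-M4BG",
  "verification_uri": "https://oauth.example.org/login",
  "expires_in": 1800,
  "interval": 5
}
```

The "interval" field arrives as a JSON number, which is why the struct keeps both the raw string and the parsed int.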
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
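The CHECK_* wrappers above rely on the classic do { ... } while (0) macro idiom, which makes the whole check behave like a single statement so that FAILACTION can be `return false`, `goto cleanup`, and so on. A standalone sketch, independent of libcurl (CHECK_API, fake_setopt, and errbuf are made up for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static char errbuf[128];

/* A stand-in for a libcurl-style setter that fails on unknown options. */
static int
fake_setopt(const char *opt, int val)
{
	(void) val;
	return strcmp(opt, "KNOWN_OPT") == 0 ? 0 : 1;
}

/*
 * The do/while(0) wrapper behaves like one statement, so FAILACTION works
 * correctly even under a bare `if` with no braces.
 */
#define CHECK_API(OPT, VAL, FAILACTION) \
	do { \
		if (fake_setopt(OPT, VAL)) { \
			snprintf(errbuf, sizeof(errbuf), "failed to set %s", OPT); \
			FAILACTION; \
		} \
	} while (0)

static int
configure(void)
{
	CHECK_API("KNOWN_OPT", 1, return 0);
	CHECK_API("BOGUS_OPT", 1, return 0);	/* fails and returns early */
	return 1;
}
```

Here the second CHECK_API records an error message and returns from configure() without running any further setup, mirroring how the patch bails out of its setup functions.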
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, since media type parameters may follow the expected type.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
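The prefix-plus-parameters matching above can be exercised in isolation. A minimal sketch without the libcurl plumbing (the name content_type_matches and the test strings are illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/*
 * Returns true if content_type is `type`, either exactly or followed by
 * HTTP optional whitespace and a ';' that starts media type parameters.
 */
static bool
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	if (content_type[type_len] == '\0')
		return true;			/* exact match */

	for (size_t i = type_len; content_type[i]; ++i)
	{
		switch (content_type[i])
		{
			case ';':
				return true;	/* start of media type parameters */

			case ' ':
			case '\t':
				break;			/* HTTP optional whitespace */

			default:
				return false;
		}
	}

	return false;				/* trailing whitespace but no parameters */
}
```

For example, "application/json; charset=utf-8" matches "application/json", while "application/jsonx" is rejected because only whitespace or a semicolon may follow the expected prefix.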
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
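The rounding and clamping behavior above amounts to the following standalone helper (a sketch: clamp_interval and the debugging flag are illustrative, and it rounds up manually instead of calling libm's ceil()):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Round a fractional interval up to whole seconds and clamp the result to
 * [1, INT_MAX]. Debug/test configurations may lower the floor to zero.
 */
static int
clamp_interval(const char *interval_str, bool debugging)
{
	double		parsed;

	if (sscanf(interval_str, "%lf", &parsed) != 1)
		return 1;				/* defensive default */

	/* Huge values saturate before we try any integer conversion. */
	if (parsed >= (double) INT_MAX)
		return INT_MAX;

	/* Round up to the next whole second (ceil, without needing libm). */
	if (parsed > 0)
	{
		double		whole = (double) (long long) parsed;

		parsed = (parsed > whole) ? whole + 1 : whole;
	}

	if (parsed < 1)
		return debugging ? 0 : 1;

	return (int) parsed;
}
```

So an interval of "2.5" becomes 3 seconds, "0" becomes 1 second in normal operation (avoiding a hot polling loop), and only a debugging build is allowed to poll with no delay.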
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect when CURLOPT_VERBOSE is also set, so
+		 * make sure both options are configured together.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* Abort the transfer if the response would exceed our size threshold. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error to abort the transfer if we ran out of memory while
+	 * accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
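The size-cap logic in the write callback can be sketched without libcurl or PQExpBuffer; in libcurl's contract, returning a count smaller than the chunk length is what aborts the transfer (MAX_RESPONSE, resp_buf, and append_capped are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_RESPONSE 256		/* stand-in for MAX_OAUTH_RESPONSE_SIZE */

static char resp_buf[MAX_RESPONSE];
static size_t resp_len;

/*
 * Mimics a CURLOPT_WRITEFUNCTION callback: append the chunk to the response
 * buffer, or return 0 (a short count) to make libcurl abort the transfer.
 */
static size_t
append_capped(const char *buf, size_t size, size_t nmemb)
{
	size_t		len = size * nmemb;

	if (resp_len + len > MAX_RESPONSE)
		return 0;				/* response too large: abort */

	/* The data handed to a write callback is not NUL-terminated. */
	memcpy(resp_buf + resp_len, buf, len);
	resp_len += len;
	return len;
}
```

A small chunk is appended and its full length returned; a chunk that would push the buffer past the cap returns 0 and leaves the accumulated data untouched, just as the patch's append_data() does for oversized OAuth responses.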
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in curl 7.87.0. If it's not defined,
+ * define it as a pass-through that simply expands its argument.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer. For example, the string "a b&c" is
+ * appended as "a+b%26c".
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list, encoding both key and value and separating consecutive pairs with
+ * '&'.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * accepts the device_code grant type and provides an authorization endpoint).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+	const struct curl_slist *grant;
+	bool		device_grant_found = false;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*------
+	 * First, sanity checks for discovery contents that are OPTIONAL in the
+	 * spec but required for our flow:
+	 * - the issuer must support the device_code grant
+	 * - the issuer must have actually given us a
+	 *   device_authorization_endpoint
+	 */
+
+	grant = provider->grant_types_supported;
+	while (grant)
+	{
+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
+		{
+			device_grant_found = true;
+			break;
+		}
+
+		grant = grant->next;
+	}
+
+	if (!device_grant_found)
+	{
+		actx_error(actx, "issuer \"%s\" does not support device code grants",
+				   provider->issuer);
+		return false;
+	}
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/* TODO: check that the endpoint uses HTTPS */
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and to authorize us to act on their behalf; it will give us the nonces
+ * we need to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that use 403 for
+	 * error returns, which would violate the specification. For now we stick
+	 * to the specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	/*
+	 * XXX This is not safe. libcurl has stringent requirements for the thread
+	 * context in which you call curl_global_init(), because it's going to try
+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
+	 * probably need to consider both the TLS backend libcurl is compiled
+	 * against and what the user has asked us to do via PQinit[Open]SSL.
+	 *
+	 * Recent versions of libcurl have improved the thread-safety situation,
+	 * but you apparently can't check at compile time whether the
+	 * implementation is thread-safe, and there's a chicken-and-egg problem
+	 * where you can't check the thread safety until you've initialized
+	 * libcurl, which you can't do before you've made sure it's thread-safe...
+	 *
+	 * We know we've already initialized Winsock by this point, so we should
+	 * be able to safely skip that bit. But we have to tell libcurl to
+	 * initialize everything else, because other pieces of our client
+	 * executable may already be using libcurl for their own purposes. If we
+	 * initialize libcurl first, with only a subset of its features, we could
+	 * break those other clients nondeterministically, and that would probably
+	 * be a nightmare to debug.
+	 */
+	curl_global_init(CURL_GLOBAL_ALL
+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forgets a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..cc53e2bdd1a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1141 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NUL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85ebf9f6d87..b0669a2dbfd 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -649,6 +667,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1138,7 +1157,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1148,10 +1167,11 @@ fill_allowed_sasl_mechs(PGconn *conn)
 	 * - handle the new mechanism name in the require_auth portion of
 	 *   pqConnectOptions2(), below.
 	 */
-	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
 					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
 
 	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
 }
 
 /*
@@ -1513,6 +1533,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4105,7 +4129,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4384,6 +4420,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -4996,6 +5035,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5149,6 +5194,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..5f8d608261e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 1a5a223e1af..4180e35f8cf 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d4..0c2ccc75a63 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks; it always
+ *	  fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..12fe70c990b
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,264 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..80f52585896
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,551 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
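The connstr() helper above has a mirror image on the Python side, which base64-decodes the client_id and parses it as JSON. A minimal Python sketch of the round trip, using hypothetical parameter values:

```python
import base64
import json

# Encode test parameters the way connstr() does: JSON, then Base64, carried
# in the oauth_client_id field. (The parameter values here are made up.)
params = {"stage": "token", "retries": 2}
encoded = base64.b64encode(json.dumps(params).encode()).decode()

# Decode them the way the mock server's do_POST() does.
decoded = json.loads(base64.b64decode(encoded))
print(decoded["stage"])  # token
```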
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..95cccf90dd8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
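The port handshake in run() above (the child writes its port number, then closes stdout so the parent can simply read to EOF) can be sketched in standalone Python; the inline child script here is a stand-in for oauth_server.py:

```python
import subprocess
import sys

# Stand-in for oauth_server.py: print a port number, then close stdout so
# the parent can slurp to EOF instead of counting bytes.
child_src = (
    "import os, sys\n"
    "print(4242)\n"
    "fd = sys.stdout.fileno()\n"
    "sys.stdout.close()\n"
    "os.close(fd)\n"
)

child = subprocess.Popen(
    [sys.executable, "-c", child_src], stdout=subprocess.PIPE, text=True
)
port = child.stdout.read().strip()  # returns as soon as stdout is closed
child.wait()
print(port)  # 4242
```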
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..8ec09102027
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
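_check_authn() verifies HTTP Basic credentials the way RFC 6749 section 2.3.1 specifies for client_secret_basic: client_id and secret are each form-urlencoded before being joined with a colon and Base64-encoded. A standalone sketch of the expected encoding (the credential values are made up):

```python
import base64
import urllib.parse

# Build the Authorization header a client_secret_basic client should send
# (RFC 6749 section 2.3.1). Credentials here are hypothetical.
client_id, client_secret = "f02c6361 0635", "s3cret:value"

creds = ":".join(
    urllib.parse.quote_plus(part) for part in (client_id, client_secret)
)
header = "Basic " + base64.b64encode(creds.encode()).decode()

# The colon separating the two fields stays unambiguous because
# quote_plus() has already escaped any colons inside the values.
print(creds)  # f02c6361+0635:s3cret%3Avalue
```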
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires-in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..bf94f091def
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index a92944e0d9c..1bfdbcca59f 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2514,6 +2514,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2557,7 +2562,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index d5aa5c295ae..245bbaabc78 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3088,6 +3096,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleResult
+ValidatorModuleState
 ValuesScan
 ValuesScanState
 Var
@@ -3482,6 +3492,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1
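As an aside, the interval bookkeeping that the mock token endpoint above asserts on (the `min_delay` check between consecutive token requests) is the client-side polling loop from RFC 8628. A minimal illustrative sketch follows; `request_token` is a hypothetical stand-in for the HTTP POST to the issuer's token endpoint and returns dicts shaped like the mock server's responses:

```python
import time


def poll_for_token(request_token, interval=5, max_attempts=10):
    """Poll a device-flow token endpoint, waiting `interval` seconds between
    attempts as RFC 8628 requires. `request_token` is a hypothetical callable
    standing in for the POST to the issuer's token endpoint."""
    for _ in range(max_attempts):
        resp = request_token()
        if "access_token" in resp:
            return resp

        err = resp.get("error")
        if err == "slow_down":
            # RFC 8628 Sec. 3.5: slow_down means increase the interval by 5s.
            interval += 5
        elif err != "authorization_pending":
            raise RuntimeError(f"token exchange failed: {err}")

        time.sleep(interval)
    raise TimeoutError("token exchange did not complete")
```

The mock server's `last_try`/`min_delay` assertion is checking exactly this wait between successive requests, which is why the "retries" test parameter returns the pending error until the required number of polls has been observed.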

v43-0005-XXX-fix-libcurl-link-error.patchapplication/octet-stream; name=v43-0005-XXX-fix-libcurl-link-error.patchDownload
From 66ef3b4b68756843735b320a71244dfa67658a5f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v43 5/6] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But FreeBSD 13.3 is EOL, so it's not clear whether anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 8c518c317e7..97bb38c72c6 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -165,6 +165,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

v43-0006-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchapplication/octet-stream; name=v43-0006-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchDownload
From 4df1bc59638f3dfa25c0a3610d0835acb5f5f2fd Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v43 6/6] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2671 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6452 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 97bb38c72c6..a6fab60bfd8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -318,6 +318,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -402,8 +403,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 351f89a92dc..72552df0f27 100644
--- a/meson.build
+++ b/meson.build
@@ -3365,6 +3365,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3531,6 +3534,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide on the fly whether to enable these tests.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..236057cd99e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..20e72a404aa
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
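RFC 5802 notes that Hi(str, salt, i) is simply PBKDF2 with HMAC as the PRF, so a standalone reimplementation (independent of the cryptography-library helpers above) can be cross-checked against Python's hashlib:

```python
import hashlib
import hmac

def h_i_standalone(data, salt, i):
    # Hi(str, salt, i) from RFC 5802, Sec. 2.2:
    #   U1 = HMAC(str, salt + INT(1)); U_n = HMAC(str, U_{n-1})
    #   Hi = U1 XOR U2 XOR ... XOR Ui
    acc = last = hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    for _ in range(i - 1):
        last = hmac.new(data, last, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, last))
    return acc

# For a single output block, hashlib's PBKDF2 must agree with Hi().
assert h_i_standalone(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 2
)
```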
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..a3cbafe843e
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2671 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # the token itself may contain "="
+    assert key == b"auth"
+
+    return value
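For reference, the ^A-delimited wire format parsed above comes from RFC 7628, Sec. 3.1: a GS2 header, then key/value pairs each terminated by \x01, with a final \x01 closing the message. A sketch using a made-up token value:

```python
# Hypothetical OAUTHBEARER initial client response.
initial = b"n,,\x01auth=Bearer sometoken\x01\x01"

# Splitting on \x01 yields the GS2 header, the kvpair, an empty string for
# the terminating kvpair, and an empty trailer.
kvpairs = initial.split(b"\x01")
assert kvpairs == [b"n,,", b"auth=Bearer sometoken", b"", b""]

key, value = kvpairs[1].split(b"=", 1)
assert key == b"auth"
assert value == b"Bearer sometoken"
```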
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    The client is expected to complete the entire handshake. The SASL exchange
+    always ends in failure, since a discovery-only connection cannot
+    authenticate.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
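The strict `parse_qs()` configuration used above makes malformed bodies an error instead of silently skipping bad fields; a quick sketch of the difference:

```python
from urllib.parse import parse_qs

# keep_blank_values preserves empty values instead of dropping the key.
params = parse_qs("a=1&b=&b=2", keep_blank_values=True, strict_parsing=True)
assert params == {"a": ["1"], "b": ["", "2"]}

# strict_parsing turns malformed field pairs (here, an empty field between
# the two ampersands) into a ValueError.
try:
    parse_qs("a=1&&b=2", strict_parsing=True)
except ValueError:
    pass
else:
    assert False, "expected ValueError for the empty field"
```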
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to the
+    # interaction between Nagle's algorithm and the client's delayed ACKs.
+    # (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
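The two-step `_fields_` assignment above is the usual ctypes idiom for structures whose fields refer back to the structure type itself (here, the callback signatures take a `POINTER(PGOAuthBearerRequest)`). A minimal self-contained example of the same pattern:

```python
import ctypes

class Node(ctypes.Structure):
    # _fields_ is assigned after the class statement, so that
    # ctypes.POINTER(Node) can reference the now-existing type.
    pass

Node._fields_ = [
    ("value", ctypes.c_int),
    ("next", ctypes.POINTER(Node)),
]

tail = Node(2, None)
head = Node(1, ctypes.pointer(tail))
assert head.next.contents.value == 2
```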
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # returned when no test impl is set; nonzero means "handled"
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
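The Basic credential checked above is just base64 over `client_id:secret` (RFC 6749, Sec. 2.3.1). A small sketch with made-up credentials, mirroring the server-side check:

```python
import base64

client_id, secret = "my-client", "hunter2"  # hypothetical values
creds = base64.b64encode(f"{client_id}:{secret}".encode("ascii")).decode("ascii")
header = f"Basic {creds}"

# The server-side check reverses the construction.
method, b64 = header.split()
assert method == "Basic"
assert base64.b64decode(b64) == b"my-client:hunter2"
```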
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL mechanism. Generally speaking, this
+    requires the client to have an oauth_issuer set so that it doesn't try to
+    go through discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error once the
+    # server asks for OAUTHBEARER, before the client contacts the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
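For reference, these ranges come straight from RFC 6749 Appendix A: VSCHAR is %x20-7E, and NQCHAR is %x21-7E minus the double quote (0x22) and backslash (0x5C). A standalone sketch of the same character classes, independent of the test module above:

```python
# RFC 6749 Appendix A character classes, rebuilt as a standalone sketch.
VSCHAR = "".join(chr(c) for c in range(0x20, 0x7F))  # %x20-7E
NQCHAR = "".join(
    chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)
)  # %x21-7E, excluding '"' and '\'

# VSCHAR covers all 95 printable ASCII characters (including space).
assert len(VSCHAR) == 95
# NQCHAR drops space, '"', and '\' relative to VSCHAR.
assert NQCHAR == VSCHAR[1:].replace('"', "").replace("\\", "")
```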
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
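The Basic-credentials check above depends on the RFC 6749 Section 2.3.1 rule that the client id and secret are each form-urlencoded before being joined with a colon and base64-encoded. A minimal round-trip sketch (encoding as a client would, decoding as the test server does), using made-up credentials:

```python
import base64
import urllib.parse


def encode_basic(client_id, secret):
    """Build an RFC 6749 Section 2.3.1 Basic Authorization header value."""
    user = urllib.parse.quote_plus(client_id)
    pw = urllib.parse.quote_plus(secret)
    creds = base64.b64encode(f"{user}:{pw}".encode("utf-8")).decode("ascii")
    return f"Basic {creds}"


def decode_basic(header):
    """Reverse the encoding. quote_plus() encodes ':' as %3A, so splitting
    on the first colon is unambiguous."""
    method, creds = header.split()
    assert method == "Basic"
    decoded = base64.b64decode(creds).decode("utf-8")
    user, pw = decoded.split(":", 1)
    return urllib.parse.unquote_plus(user), urllib.parse.unquote_plus(pw)


header = encode_basic("my client", r'p+=&"\/~')
assert decode_basic(header) == ("my client", r'p+=&"\/~')
```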
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
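The interval bookkeeping in the test above mirrors the RFC 8628 polling rules: the poll interval defaults to 5 seconds when the device authorization response omits it, stays unchanged on `authorization_pending`, and grows by 5 seconds on each `slow_down` (Section 3.5). A hypothetical sketch of that client-side logic:

```python
def next_interval(current, error):
    """Return the interval to wait before the next token request.

    `current` is the interval currently in effect (None before the first
    poll); `error` is the OAuth error code from the last token response.
    """
    interval = 5 if current is None else current  # RFC 8628 default is 5s
    if error == "slow_down":
        interval += 5  # slow_down: increase the interval by 5 seconds
    return interval


i = next_interval(None, "authorization_pending")  # default applies: 5
i = next_interval(i, "slow_down")                 # bumped to 10
i = next_interval(i, "authorization_pending")     # stays at 10
assert i == 10
```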
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. It's not very
+    efficient, but it's easier to read and maintain than one long expression.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
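Equivalently, the helper above can be written as a single join; either form produces a pattern in which any one alternative may match. A quick self-contained illustration (the sample patterns are made up):

```python
import re


def alt_patterns(*patterns):
    """Combine alternative regexes into one: "(p1)|(p2)|..."."""
    return "|".join(f"({p})" for p in patterns)


pat = alt_patterns(r"foo\d+", r"bar+")
assert pat == r"(foo\d+)|(bar+)"
assert re.search(pat, "foo123")      # matches the first alternative
assert re.search(pat, "xbarrrx")     # matches the second alternative
assert not re.search(pat, "baz")     # matches neither
```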
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
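The `type(bad_value) == ok_type` comparison above (rather than `isinstance`) matters because `bool` is a subclass of `int` in Python, so a boolean would otherwise be accepted where a JSON number is required. A quick illustration:

```python
# bool is a subclass of int, so isinstance() would wrongly accept a
# boolean where a JSON number is required.
assert isinstance(False, int)   # passes: bool satisfies isinstance(int)
assert type(False) != int       # but an exact type check rejects it
assert type(4) == int           # while a real integer still passes
```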
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) is ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq tries to actually attempt
+# a connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
+            id="missing device code grants",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A (0x01)
+            # response after an error "challenge".
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
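A note for readers unfamiliar with the failure path exercised above: on a SASL failure, RFC 7628 has the server send a JSON status document as the challenge, and the `openid-configuration` member is how the alternate discovery URL in this test reaches the client. A rough stdlib-only sketch of that document's shape (the helper name is mine, and field values are illustrative):

```python
import json

def oauthbearer_error_challenge(discovery_url, status="invalid_token"):
    # Per RFC 7628, the server's error challenge is a JSON document; the
    # client is expected to answer it with a single kvsep (0x01) byte.
    return json.dumps({
        "status": status,
        "openid-configuration": discovery_url,
    }).encode("utf-8")

body = json.loads(oauthbearer_error_challenge("https://example.org/.well-known/openid-configuration"))
assert body["status"] == "invalid_token"
```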
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to pytest. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
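A side note on the version packing above: a minimal, dependency-free sketch of the same computation, including the SSLRequest magic number that tls_handshake() builds from protocol(1234, 5679) later in this file:

```python
def protocol(major, minor):
    # Pack a major/minor pair into libpq's 32-bit protocol version format.
    return (major << 16) | minor

# The v3 startup protocol, and the SSLRequest magic number.
assert protocol(3, 0) == 0x00030000      # 196608
assert protocol(1234, 5679) == 80877103  # SSLRequest
```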
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
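Independent of construct, the startup packet modeled above is just a self-inclusive length, a protocol version, and a null-terminated key/value blob closed by an empty string. A hand-rolled, stdlib-only sketch for comparison (helper name is mine):

```python
import struct

def build_startup(params, proto=0x00030000):
    # Null-terminated key/value pairs, closed by an empty string, prefixed
    # by the total length (which includes itself) and the protocol version.
    payload = b""
    for k, v in params.items():
        payload += k.encode("utf-8") + b"\x00" + v.encode("utf-8") + b"\x00"
    payload += b"\x00"
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice", "database": "postgres"})
assert pkt[:4] == struct.pack("!i", len(pkt))  # length covers the whole packet
assert pkt[4:8] == b"\x00\x03\x00\x00"         # protocol 3.0
```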
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
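For context, the `data` field carried in a SASLInitialResponse for OAUTHBEARER is the RFC 7628 client response: a gs2 header followed by ^A-separated key/value pairs and a double-kvsep terminator. A hedged sketch of that framing (the helper name is mine, not part of the patch):

```python
def oauthbearer_initial_response(token, authzid=""):
    # RFC 7628: gs2-header kvsep *(key "=" value kvsep) kvsep
    kvsep = "\x01"
    gs2 = "n," + ("a=" + authzid if authzid else "") + ","
    return (gs2 + kvsep + "auth=Bearer " + token + kvsep + kvsep).encode("ascii")

assert oauthbearer_initial_response("tok") == b"n,,\x01auth=Bearer tok\x01\x01"
```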
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
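The regular v3 framing that Pq3 models is simpler still: one type byte, then a 32-bit length that covers itself plus the payload, then the payload. A stdlib-only sketch of the same framing (hypothetical helper names):

```python
import struct

def frame(msg_type, payload):
    # The length field covers itself (4 bytes) plus the payload,
    # but not the leading type byte.
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

def unframe(buf):
    (length,) = struct.unpack("!I", buf[1:5])
    return buf[0:1], buf[5:1 + length]

t, p = unframe(frame(b"Q", b"SELECT 1;\x00"))
assert (t, p) == (b"Q", b"SELECT 1;\x00")
```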
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(16)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += 16
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest packet (special protocol version 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
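The safety check in cleanup_prior_instance() above boils down to a set comparison against initdb's marker files. A standalone sketch of the same idea (my own helper name, exercised against a temporary directory):

```python
import os
import tempfile

def looks_like_datadir(path):
    # Mirror the guard above: only treat a directory as removable if it
    # contains the markers initdb would have created.
    required = {"base", "PG_VERSION", "postgresql.conf"}
    try:
        entries = {e.name for e in os.scandir(path)}
    except FileNotFoundError:
        return False
    return bool(entries) and required <= entries

with tempfile.TemporaryDirectory() as d:
    assert not looks_like_datadir(d)  # empty dir: leave it for initdb
    for name in ("base", "PG_VERSION", "postgresql.conf"):
        open(os.path.join(d, name), "w").close()
    assert looks_like_datadir(d)
```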
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    setup_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with malformed issuer/scope settings, to pin down the
+    server's current behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup suffix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
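[Reviewer note, not part of the patch.] The malformed-message cases above all poke at the OAUTHBEARER client message grammar from RFC 7628: a GS2 header, a \x01 separator, one or more key=value pairs each terminated by \x01, and a final \x01. As a rough standalone sketch of how the well-formed test payloads are composed (the helper name is ours; the suite itself uses `send_initial_response()`):

```python
def build_initial_response(token, authzid=None):
    # GS2 header: "n" = no channel binding, optional authorization identity
    gs2 = "n," + (f"a={authzid}" if authzid else "") + ","
    # Each key=value pair ends with \x01; the whole message ends with an
    # extra \x01 terminator. Duplicate "auth" keys, empty keys, or data
    # after the final terminator are what the tests above reject.
    kvpairs = f"auth=Bearer {token}\x01"
    return (gs2 + "\x01" + kvpairs + "\x01").encode("ascii")

build_initial_response("abcd")  # → b"n,,\x01auth=Bearer abcd\x01\x01"
```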
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
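[Reviewer note, not part of the patch.] `pq3.handshake()` above starts with a v3 startup packet. A minimal sketch of that wire format, consistent with the `Startup` test vectors later in the patch (the helper is ours, written for illustration):

```python
import struct

def build_startup(**params):
    # v3 startup packet: int32 total length (self-inclusive), int32 protocol
    # version 0x00030000, then NUL-terminated key/value pairs and a final NUL
    body = b"".join(
        k.encode() + b"\x00" + v.encode() + b"\x00" for k, v in params.items()
    ) + b"\x00"
    return struct.pack("!ii", len(body) + 8, 0x00030000) + body
```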
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
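[Reviewer note, not part of the patch.] The expected strings above pin `_DebugStream`'s hexdump layout: a direction prefix, a four-digit hex offset, a hex column padded to 47 characters (16 bytes at three characters each, minus the trailing space), and an ASCII rendering. A sketch of one such line, reconstructed from the expected output rather than from the pq3 source:

```python
def hexdump_line(offset, chunk, prefix="< "):
    # Hex column: 16 bytes per row, space-separated, padded to 16 * 3 - 1 = 47
    hexpart = " ".join(f"{b:02x}" for b in chunk).ljust(47)
    # ASCII column: printable bytes as-is, everything else as "."
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{prefix}{offset:04x}:\t{hexpart}\t{text}\n"
```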
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
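[Reviewer note, not part of the patch.] The `Pq3` vectors above all share one framing rule: a regular (non-startup) v3 message is one type byte followed by an int32 length that counts itself and the payload but not the type byte. A sketch of that framing, matching the raw bytes in the test vectors (the helper is ours; the suite uses `pq3.Pq3.build()`):

```python
import struct

def build_message(msg_type, payload=b""):
    # len covers the 4-byte length field plus the payload, not the type byte,
    # so an empty message (e.g. Terminate) has len == 4
    return msg_type + struct.pack("!i", len(payload) + 4) + payload
```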
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
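
As a cross-check on the `Plaintext` definition above, here is a minimal stdlib-only sketch (no `construct` dependency; names are illustrative) that parses the same 5-byte TLS record header:

```python
import struct

# Mirrors the Plaintext struct above: type (1 byte),
# legacy_record_version (2 bytes), length (2 bytes), then the fragment.
CONTENT_TYPES = {0: "invalid", 20: "change_cipher_spec", 21: "alert",
                 22: "handshake", 23: "application_data"}

def parse_record(data: bytes):
    ctype, version, length = struct.unpack_from("!BHH", data, 0)
    fragment = data[5:5 + length]
    if len(fragment) != length:
        raise ValueError("truncated record")
    return CONTENT_TYPES.get(ctype, ctype), version, fragment

# A handshake record (type 22, version 0x0303) carrying three payload bytes:
record = bytes([22, 0x03, 0x03, 0x00, 0x03]) + b"\x01\x02\x03"
```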
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
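
The optional log-capture behavior that the FIXME above leaves commented out can be exercised in isolation. This standalone sketch (paths and the `logfile` keyword are illustrative) shows the same pattern: redirect subprocess stdout to a file only when a log target is provided, and always fail loudly on a nonzero exit:

```python
import os
import subprocess
import sys
import tempfile

def run(*args, logfile=None):
    # Same shape as make_venv's run(): capture stdout only when a
    # logfile is provided; check=True raises on a nonzero exit code.
    kwargs = dict(check=True)
    if logfile:
        kwargs.update(stdout=logfile)
    subprocess.run(args, **kwargs)

logpath = os.path.join(tempfile.mkdtemp(), "stdout.txt")
with open(logpath, "w") as logfile:
    run(sys.executable, "-c", "print('hello from venv setup')", logfile=logfile)

with open(logpath) as f:
    captured = f.read()
```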
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
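
The skip decision added to testwrap above can be modeled standalone. This sketch (function name is illustrative) reproduces the whitespace-split membership test: the `PG_TEST_EXTRA` environment variable wins, the `--pg-test-extra` value is the fallback, and the test is skipped when the required token is absent from the space-separated list:

```python
def should_skip(required: str, pg_test_extra=None, fallback=None) -> bool:
    # Mirrors testwrap: prefer PG_TEST_EXTRA from the environment, then
    # the --pg-test-extra fallback; skip when neither contains the token.
    extras = pg_test_extra if pg_test_extra is not None else fallback
    if extras is None:
        return True
    return required not in extras.split()
```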
-- 
2.34.1

#195 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#194)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 21 Jan 2025, at 17:46, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Done that way in v43.

I've spent some time staring at, and testing, 0001, 0002 and 0003 with the
intent of getting them in to pave the way for the end goal of getting 0004 in.
In general I would say they are ready; I only have a small nitpick on 0002:

+ conn->allowed_sasl_mechs[0] = &pg_scram_mech;
I'm not a huge fan of this hardcoding in fill_allowed_sasl_mechs(). It's true
that we only have one as of this patch, but we might as well plan a little for
future maintainability. I took a quick stab in the attached.

On top of that I just re-arranged a comment to, IMHO, better match the style in
the rest of the file.

Unless there are objections, I aim to commit these patches reasonably soon
to lower the barrier for getting OAuth support committed.

--
Daniel Gustafsson

Attachments:

v43review.diff.txt (text/plain)
commit 73f6e943711f9c158a0f1b32fcc78f210d767083
Author: Daniel Gustafsson <daniel@yesql.se>
Date:   Mon Jan 27 15:13:58 2025 +0100

    nitpickerying

diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85ebf9f6d87..e390e428284 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -396,6 +396,25 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 	}
 };
 
+typedef struct SupportedSASLMech
+{
+	const char *name;
+	const pg_fe_sasl_mech *mech;
+} SupportedSASLMech;
+
+#define SASL_MECHANISM_COUNT 1
+
+static SupportedSASLMech supported_sasl_mech[] =
+{
+	{
+		"SCRAM", &pg_scram_mech
+	},
+	{
+		NULL, NULL
+	}
+};
+
+
 /* The connection URI must start with either of the following designators: */
 static const char uri_designator[] = "postgresql://";
 static const char short_uri_designator[] = "postgres://";
@@ -512,8 +531,8 @@ pqDropConnection(PGconn *conn, bool flushInput)
 		conn->cleanup_async_auth = NULL;
 	}
 	conn->async_auth = NULL;
-	conn->altsock = PGINVALID_SOCKET;	/* cleanup_async_auth() should have
-										 * done this, but make sure. */
+	/* cleanup_async_auth() should have done this, but make sure */
+	conn->altsock = PGINVALID_SOCKET;
 #ifdef ENABLE_GSS
 	{
 		OM_uint32	min_s;
@@ -1144,14 +1163,14 @@ fill_allowed_sasl_mechs(PGconn *conn)
 	 *
 	 * To add a new mechanism to require_auth,
 	 * - update the length of conn->allowed_sasl_mechs,
-	 * - add the new pg_fe_sasl_mech pointer to this function, and
 	 * - handle the new mechanism name in the require_auth portion of
 	 *   pqConnectOptions2(), below.
 	 */
-	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
-					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == SASL_MECHANISM_COUNT,
+					 "conn->allowed_sasl_mechs[] is not sufficiently large for holding all supported SASL mechanisms");
 
-	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
+	for (int i = 0; i < SASL_MECHANISM_COUNT; i++)
+		conn->allowed_sasl_mechs[i] = supported_sasl_mech[i].mech;
 }
 
 /*
@@ -1506,8 +1525,7 @@ pqConnectOptions2(PGconn *conn)
 			 * Next group: SASL mechanisms. All of these use the same request
 			 * codes, so the list of allowed mechanisms is tracked separately.
 			 *
-			 * fill_allowed_sasl_mechs() must be updated when adding a new
-			 * mechanism here!
+			 * supported_sasl_mech must contain all mechanisms handled here.
 			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2644a2e653..89d738efdc1 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2801,6 +2801,7 @@ Subscription
 SubscriptionInfo
 SubscriptionRelState
 SummarizerReadLocalXLogPrivate
+SupportedSASLMech
 SupportRequestCost
 SupportRequestIndexCondition
 SupportRequestOptimizeWindowClause
#196 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#195)
7 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jan 27, 2025 at 2:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:

+ conn->allowed_sasl_mechs[0] = &pg_scram_mech;
I'm not a huge fan of this hardcoding in fill_allowed_sasl_mechs(). It's true
that we only have one as of this patch, but we might as well plan a little for
future maintainability. I took a quick stab in the attached.

Okay. I've folded that in and simplified it some, to remove the unused
names and just store the mechanism pointers without a wrapper struct;
see what you think.

Unless there are objections, I aim to commit these patches reasonably soon
to lower the barrier for getting OAuth support committed.

Thanks!

--

v44 tackles threadsafety for older versions of Curl. If we can't prove
that the installed libcurl is threadsafe at configure time, we'll wrap
our one-time initialization in the pg_g_threadlock. Otherwise, we
won't bother with locking, but we will bail out loudly if our
threadsafety code has not been compiled in and libcurl has been
downgraded to a version/build that can't do that itself. Documentation
has been added for clients, to detail when they need to worry about
PQregisterThreadLock(), in the same way they already do with Kerberos.

While I was playing with that, I noticed that the Autoconf side of
things was not correctly picking up pkg-config variables, and my local
environment had masked the bug. I've added code to handle
CFLAGS/LDFLAGS in the same way that e.g. libxml is handled.

libpq no longer requires the authorization server to advertise support
for the device_code grant type. Entra ID doesn't appear to add that to
any of the openid-configurations it publishes, which was the primary
impetus for the change. Note that if a provider claims to support a
device_authorization_endpoint but then rejects a device_code grant,
we're not going to know what spec they're implementing anyway, so this
check likely doesn't give us any particular advantage. I've removed it
with an explanatory comment.

A description of the OAUTHBEARER handshake has been added to our
protocol docs, and I've added a comment to the new GUC in the sample
file. I've also added slightly nicer error messages in the case that
either OAuth endpoint isn't secured by HTTPS.
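
For reference, the OAUTHBEARER client messages described in the new protocol docs have a simple wire format (RFC 7628): a GS2 header, then 0x01-terminated key=value pairs, then a final 0x01. A sketch (helper names are illustrative) that builds both the token-bearing initial response and the empty-`auth` discovery form, plus the single-byte "empty set" message used to finish a failed discovery exchange:

```python
KVSEP = b"\x01"  # RFC 7628 key/value separator

def initial_response(token=None):
    # gs2 header, kvsep, kvpairs (each kvsep-terminated), final kvsep.
    # With no token, the auth value is empty, signaling a discovery
    # connection as described in the protocol docs.
    gs2 = b"n,,"
    if token:
        auth = b"auth=Bearer " + token.encode() + KVSEP
    else:
        auth = b"auth=" + KVSEP
    return gs2 + KVSEP + auth + KVSEP

# The client's final message in a discovery exchange is the "empty set":
# a single 0x01 byte.
DUMMY_RESPONSE = KVSEP
```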

Thanks,
--Jacob

Attachments:

since-v43.diff.txt (text/plain; charset=US-ASCII)
1:  5d474397364 = 1:  258b8dbb770 Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h
2:  20452d21e0b ! 2:  ec960cf363d require_auth: prepare for multiple SASL mechanisms
    @@ src/interfaces/libpq/fe-auth.c: pg_SASL_init(PGconn *conn, int payloadlen)
      	{
     
      ## src/interfaces/libpq/fe-connect.c ##
    +@@ src/interfaces/libpq/fe-connect.c: static const PQEnvironmentOption EnvironmentOptions[] =
    + 	}
    + };
    + 
    ++static const pg_fe_sasl_mech *supported_sasl_mechs[] =
    ++{
    ++	&pg_scram_mech,
    ++};
    ++#define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
    ++
    + /* The connection URI must start with either of the following designators: */
    + static const char uri_designator[] = "postgresql://";
    + static const char short_uri_designator[] = "postgres://";
     @@ src/interfaces/libpq/fe-connect.c: libpq_prng_init(PGconn *conn)
      	pg_prng_seed(&conn->prng_state, rseed);
      }
    @@ src/interfaces/libpq/fe-connect.c: libpq_prng_init(PGconn *conn)
     +	 * rely on the compile-time assertion here to keep us honest.
     +	 *
     +	 * To add a new mechanism to require_auth,
    ++	 * - add it to supported_sasl_mechs,
     +	 * - update the length of conn->allowed_sasl_mechs,
    -+	 * - add the new pg_fe_sasl_mech pointer to this function, and
     +	 * - handle the new mechanism name in the require_auth portion of
     +	 *   pqConnectOptions2(), below.
     +	 */
    -+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
    -+					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
    ++	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == SASL_MECHANISM_COUNT,
    ++					 "conn->allowed_sasl_mechs[] is not sufficiently large for holding all supported SASL mechanisms");
     +
    -+	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
    ++	for (int i = 0; i < SASL_MECHANISM_COUNT; i++)
    ++		conn->allowed_sasl_mechs[i] = supported_sasl_mechs[i];
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     +			 * Next group: SASL mechanisms. All of these use the same request
     +			 * codes, so the list of allowed mechanisms is tracked separately.
     +			 *
    -+			 * fill_allowed_sasl_mechs() must be updated when adding a new
    -+			 * mechanism here!
    ++			 * supported_sasl_mechs must contain all mechanisms handled here.
     +			 */
      			else if (strcmp(method, "scram-sha-256") == 0)
      			{
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     +					i = index_of_allowed_sasl_mech(conn, mech);
     +					if (i < 0)
     +						goto duplicate;
    - 
    --				conn->allowed_auth_methods &= ~bits;
    ++
     +					conn->allowed_sasl_mechs[i] = NULL;
     +				}
     +				else
    @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
     +					i = index_of_allowed_sasl_mech(conn, mech);
     +					if (i >= 0)
     +						goto duplicate;
    -+
    + 
    +-				conn->allowed_auth_methods &= ~bits;
     +					i = index_of_allowed_sasl_mech(conn, NULL);
     +					if (i < 0)
     +					{
3:  f0afefb80d6 ! 3:  9725788086c libpq: handle asynchronous actions during SASL
    @@ src/interfaces/libpq/fe-connect.c: pqDropConnection(PGconn *conn, bool flushInpu
     +		conn->cleanup_async_auth = NULL;
     +	}
     +	conn->async_auth = NULL;
    -+	conn->altsock = PGINVALID_SOCKET;	/* cleanup_async_auth() should have
    -+										 * done this, but make sure. */
    ++	/* cleanup_async_auth() should have done this, but make sure */
    ++	conn->altsock = PGINVALID_SOCKET;
      #ifdef ENABLE_GSS
      	{
      		OM_uint32	min_s;
4:  711ca3f1efc ! 4:  a260d9436f0 Add OAUTHBEARER SASL mechanism
    @@ .cirrus.tasks.yml: task:
        ###
        # Test that code can be built with gcc/clang without warnings
     
    + ## config/programs.m4 ##
    +@@ config/programs.m4: AC_DEFUN([PGAC_CHECK_STRIP],
    +   AC_SUBST(STRIP_STATIC_LIB)
    +   AC_SUBST(STRIP_SHARED_LIB)
    + ])# PGAC_CHECK_STRIP
    ++
    ++
    ++
    ++# PGAC_CHECK_LIBCURL
    ++# ------------------
    ++# Check for required libraries and headers, and test to see whether the current
    ++# installation of libcurl is threadsafe.
    ++
    ++AC_DEFUN([PGAC_CHECK_LIBCURL],
    ++[
    ++  AC_CHECK_HEADER(curl/curl.h, [],
    ++				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
    ++  AC_CHECK_LIB(curl, curl_multi_init, [],
    ++			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
    ++
    ++  # Check to see whether the current platform supports threadsafe Curl
    ++  # initialization.
    ++  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
    ++  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
    ++#include <curl/curl.h>
    ++],[
    ++    curl_version_info_data *info;
    ++
    ++    if (curl_global_init(CURL_GLOBAL_ALL))
    ++        return -1;
    ++
    ++    info = curl_version_info(CURLVERSION_NOW);
    ++#ifdef CURL_VERSION_THREADSAFE
    ++    if (info->features & CURL_VERSION_THREADSAFE)
    ++        return 0;
    ++#endif
    ++
    ++    return 1;
    ++])],
    ++  [pgac_cv__libcurl_threadsafe_init=yes],
    ++  [pgac_cv__libcurl_threadsafe_init=no],
    ++  [pgac_cv__libcurl_threadsafe_init=unknown])])
    ++  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
    ++    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
    ++              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
    ++  fi
    ++])# PGAC_CHECK_LIBCURL
    +
      ## configure ##
     @@ configure: XML2_LIBS
      XML2_CFLAGS
    @@ configure: fi
     +
     +fi
     +
    ++  # We only care about -I, -D, and -L switches;
    ++  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
    ++  for pgac_option in $LIBCURL_CFLAGS; do
    ++    case $pgac_option in
    ++      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
    ++    esac
    ++  done
    ++  for pgac_option in $LIBCURL_LIBS; do
    ++    case $pgac_option in
    ++      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
    ++    esac
    ++  done
    ++
     +  # OAuth requires python for testing
     +  if test "$with_python" != yes; then
     +    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
    @@ configure: fi
     +# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
     +# dependency on that platform?
     +if test "$with_libcurl" = yes ; then
    ++
    ++  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
    ++if test "x$ac_cv_header_curl_curl_h" = xyes; then :
    ++
    ++else
    ++  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
    ++fi
    ++
    ++
     +  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
     +$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
     +if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
    @@ configure: fi
     +  LIBS="-lcurl $LIBS"
     +
     +else
    -+  as_fn_error $? "library 'curl' is required for --with-libcurl" "$LINENO" 5
    ++  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
     +fi
     +
    ++
    ++  # Check to see whether the current platform supports threadsafe Curl
    ++  # initialization.
    ++  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
    ++$as_echo_n "checking for curl_global_init thread safety... " >&6; }
    ++if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
    ++  $as_echo_n "(cached) " >&6
    ++else
    ++  if test "$cross_compiling" = yes; then :
    ++  pgac_cv__libcurl_threadsafe_init=unknown
    ++else
    ++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
    ++/* end confdefs.h.  */
    ++
    ++#include <curl/curl.h>
    ++
    ++int
    ++main ()
    ++{
    ++
    ++    curl_version_info_data *info;
    ++
    ++    if (curl_global_init(CURL_GLOBAL_ALL))
    ++        return -1;
    ++
    ++    info = curl_version_info(CURLVERSION_NOW);
    ++#ifdef CURL_VERSION_THREADSAFE
    ++    if (info->features & CURL_VERSION_THREADSAFE)
    ++        return 0;
    ++#endif
    ++
    ++    return 1;
    ++
    ++  ;
    ++  return 0;
    ++}
    ++_ACEOF
    ++if ac_fn_c_try_run "$LINENO"; then :
    ++  pgac_cv__libcurl_threadsafe_init=yes
    ++else
    ++  pgac_cv__libcurl_threadsafe_init=no
    ++fi
    ++rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
    ++  conftest.$ac_objext conftest.beam conftest.$ac_ext
    ++fi
    ++
    ++fi
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
    ++$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
    ++  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
    ++
    ++$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
    ++
    ++  fi
    ++
     +fi
     +
      if test "$with_gssapi" = yes ; then
        if test "$PORTNAME" != "win32"; then
          { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
    -@@ configure: fi
    - 
    - done
    - 
    -+fi
    -+
    -+if test "$with_libcurl" = yes; then
    -+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
    -+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
    -+
    -+else
    -+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
    -+fi
    -+
    -+
    - fi
    - 
    - if test "$PORTNAME" = "win32" ; then
     
      ## configure.ac ##
     @@ configure.ac: fi
    @@ configure.ac: fi
     +  # to explicitly set TLS 1.3 ciphersuites).
     +  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
     +
    ++  # We only care about -I, -D, and -L switches;
    ++  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
    ++  for pgac_option in $LIBCURL_CFLAGS; do
    ++    case $pgac_option in
    ++      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
    ++    esac
    ++  done
    ++  for pgac_option in $LIBCURL_LIBS; do
    ++    case $pgac_option in
    ++      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
    ++    esac
    ++  done
    ++
     +  # OAuth requires python for testing
     +  if test "$with_python" != yes; then
     +    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
    @@ configure.ac: failure.  It is possible the compiler isn't looking in the proper
     +# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
     +# dependency on that platform?
     +if test "$with_libcurl" = yes ; then
    -+  AC_CHECK_LIB(curl, curl_multi_init, [], [AC_MSG_ERROR([library 'curl' is required for --with-libcurl])])
    ++  PGAC_CHECK_LIBCURL
     +fi
     +
      if test "$with_gssapi" = yes ; then
        if test "$PORTNAME" != "win32"; then
          AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
    -@@ configure.ac: elif test "$with_uuid" = ossp ; then
    -       [AC_MSG_ERROR([header file <ossp/uuid.h> or <uuid.h> is required for OSSP UUID])])])
    - fi
    - 
    -+if test "$with_libcurl" = yes; then
    -+  AC_CHECK_HEADER(curl/curl.h, [], [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
    -+fi
    -+
    - if test "$PORTNAME" = "win32" ; then
    -    AC_CHECK_HEADERS(crtdefs.h)
    - fi
     
      ## doc/src/sgml/client-auth.sgml ##
     @@ doc/src/sgml/client-auth.sgml: include_dir         <replaceable>directory</replaceable>
    @@ doc/src/sgml/libpq.sgml: void PQinitSSL(int do_ssl);
      
       <sect1 id="libpq-threading">
        <title>Behavior in Threaded Programs</title>
    +@@ doc/src/sgml/libpq.sgml: int PQisthreadsafe();
    +    <application>libpq</application> source code for a way to do cooperative
    +    locking between <application>libpq</application> and your application.
    +   </para>
    ++
    ++  <para>
    ++   Similarly, if you are using Curl inside your application,
    ++   <emphasis>and</emphasis> you do not already
    ++   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
    ++   libcurl globally</ulink> before starting new threads, you will need to
    ++   cooperatively lock (again via <function>PQregisterThreadLock</function>)
    ++   around any code that may initialize libcurl. This restriction is lifted for
    ++   more recent versions of Curl that are built to support threadsafe
    ++   initialization; those builds can be identified by the advertisement of a
    ++   <literal>threadsafe</literal> feature in their version metadata.
    ++  </para>
    +  </sect1>
    + 
    + 
     
      ## doc/src/sgml/oauth-validators.sgml (new) ##
     @@
    @@ doc/src/sgml/postgres.sgml: break is not needed in a wider output rendering.
       </part>
      
     
    + ## doc/src/sgml/protocol.sgml ##
    +@@ doc/src/sgml/protocol.sgml: SELCT 1/0;<!-- this typo is intentional -->
    + 
    +   <para>
    +    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
    +-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
    +-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
    +-   might be added in the future. The below steps illustrate how SASL
    +-   authentication is performed in general, while the next subsection gives
    +-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
    ++   protocols. At the moment, <productname>PostgreSQL</productname> implements three
    ++   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
    ++   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
    ++   authentication is performed in general, while the next subsections give
    ++   more details on particular mechanisms.
    +   </para>
    + 
    +   <procedure>
    +@@ doc/src/sgml/protocol.sgml: SELCT 1/0;<!-- this typo is intentional -->
    +    <step id="sasl-auth-end">
    +     <para>
    +      Finally, when the authentication exchange is completed successfully, the
    +-     server sends an AuthenticationSASLFinal message, followed
    ++     server sends an optional AuthenticationSASLFinal message, followed
    +      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
    +      contains additional server-to-client data, whose content is particular to the
    +      selected authentication mechanism. If the authentication mechanism doesn't
    +@@ doc/src/sgml/protocol.sgml: SELCT 1/0;<!-- this typo is intentional -->
    +    <title>SCRAM-SHA-256 Authentication</title>
    + 
    +    <para>
    +-    The implemented SASL mechanisms at the moment
    +-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
    +-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
    ++    <literal>SCRAM-SHA-256</literal>, and its variant with channel
    ++    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
    ++    authentication mechanisms. They are described in
    +     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
    +     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    +    </para>
    +@@ doc/src/sgml/protocol.sgml: SELCT 1/0;<!-- this typo is intentional -->
    +     </step>
    +    </procedure>
    +   </sect2>
    ++
    ++  <sect2 id="sasl-oauthbearer">
    ++   <title>OAUTHBEARER Authentication</title>
    ++
    ++   <para>
    ++    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
    ++    authentication. It is described in detail in
    ++    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
    ++   </para>
    ++
    ++   <para>
    ++    A typical exchange differs depending on whether or not the client already
    ++    has a bearer token cached for the current user. If it does not, the exchange
    ++    will take place over two connections: the first "discovery" connection to
    ++    obtain OAuth metadata from the server, and the second connection to send
    ++    the token after the client has obtained it. (libpq does not currently
    ++    implement a caching method as part of its builtin flow, so it uses the
    ++    two-connection exchange.)
    ++   </para>
    ++
    ++   <para>
    ++    This mechanism is client-initiated, like SCRAM. The client initial response
    ++    consists of the standard "GS2" header used by SCRAM, followed by a list of
    ++    <literal>key=value</literal> pairs. The only key currently supported by
    ++    the server is <literal>auth</literal>, which contains the bearer token.
    ++    <literal>OAUTHBEARER</literal> additionally specifies three optional
    ++    components of the client initial response (the <literal>authzid</literal> of
    ++    the GS2 header, and the <structfield>host</structfield> and
    ++    <structfield>port</structfield> keys) which are currently ignored by the
    ++    server.
    ++   </para>
    ++
    ++   <para>
    ++    <literal>OAUTHBEARER</literal> does not support channel binding, and there
    ++    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
    ++    server data during a successful authentication, so the
    ++    AuthenticationSASLFinal message is not used in the exchange.
    ++   </para>
    ++
    ++   <procedure>
    ++    <title>Example</title>
    ++    <step>
    ++     <para>
    ++      During the first exchange, the server sends an AuthenticationSASL message
    ++      with the <literal>OAUTHBEARER</literal> mechanism advertised.
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      The client responds by sending a SASLInitialResponse message which
    ++      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
    ++      client does not already have a valid bearer token for the current user,
    ++      the <structfield>auth</structfield> field is empty, indicating a discovery
    ++      connection.
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      Server sends an AuthenticationSASLContinue message containing an error
    ++      <literal>status</literal> alongside a well-known URI and scopes that the
    ++      client should use to conduct an OAuth flow.
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      Client sends a SASLResponse message containing the empty set (a single
    ++      <literal>0x01</literal> byte) to finish its half of the discovery
    ++      exchange.
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      Server sends an ErrorMessage to fail the first exchange.
    ++     </para>
    ++     <para>
    ++      At this point, the client conducts one of many possible OAuth flows to
    ++      obtain a bearer token, using any metadata that it has been configured with
    ++      in addition to that provided by the server. (This description is left
    ++      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
    ++      mandate any particular method for obtaining a token.)
    ++     </para>
    ++     <para>
    ++      Once it has a token, the client reconnects to the server for the final
    ++      exchange:
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      The server once again sends an AuthenticationSASL message with the
    ++      <literal>OAUTHBEARER</literal> mechanism advertised.
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      The client responds by sending a SASLInitialResponse message, but this
    ++      time the <structfield>auth</structfield> field in the message contains the
    ++      bearer token that was obtained during the client flow.
    ++     </para>
    ++    </step>
    ++
    ++    <step>
    ++     <para>
    ++      The server validates the token according to the instructions of the
    ++      token provider. If the client is authorized to connect, it sends an
    ++      AuthenticationOk message to end the SASL exchange.
    ++     </para>
    ++    </step>
    ++   </procedure>
    ++  </sect2>
    +  </sect1>
    + 
    +  <sect1 id="protocol-replication">
    +
      ## doc/src/sgml/regress.sgml ##
     @@ doc/src/sgml/regress.sgml: make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
            </para>
    @@ meson.build: endif
     +  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
     +  if libcurl.found()
     +    cdata.set('USE_LIBCURL', 1)
    ++
    ++    # Check to see whether the current platform supports threadsafe Curl
    ++    # initialization.
    ++    libcurl_threadsafe_init = false
    ++
    ++    if not meson.is_cross_build()
    ++      r = cc.run('''
    ++        #include <curl/curl.h>
    ++
    ++        int main(void)
    ++        {
    ++            curl_version_info_data *info;
    ++
    ++            if (curl_global_init(CURL_GLOBAL_ALL))
    ++                return -1;
    ++
    ++            info = curl_version_info(CURLVERSION_NOW);
    ++        #ifdef CURL_VERSION_THREADSAFE
    ++            if (info->features & CURL_VERSION_THREADSAFE)
    ++                return 0;
    ++        #endif
    ++
    ++            return 1;
    ++        }''',
    ++        name: 'test for curl_global_init thread safety',
    ++        dependencies: libcurl,
    ++      )
    ++
    ++      assert(r.compiled())
    ++      if r.returncode() == 0
    ++        libcurl_threadsafe_init = true
    ++        message('curl_global_init is threadsafe')
    ++      elif r.returncode() == 1
    ++        message('curl_global_init is not threadsafe')
    ++      else
    ++        message('curl_global_init failed; assuming not threadsafe')
    ++      endif
    ++    endif
    ++
    ++    if libcurl_threadsafe_init
    ++      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
    ++    endif
     +  endif
    ++
     +else
     +  libcurl = not_found_dep
     +endif
    @@ src/backend/utils/misc/postgresql.conf.sample
      #ssl_passphrase_command_supports_reload = off
      
     +# OAuth
    -+#oauth_validator_libraries = ''
    ++#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
     +
      
      #------------------------------------------------------------------------------
    @@ src/include/pg_config.h.in
      /* Define to 1 if you have the `ldap' library (-lldap). */
      #undef HAVE_LIBLDAP
      
    +@@
    + /* Define to 1 if you have the <termios.h> header file. */
    + #undef HAVE_TERMIOS_H
    + 
    ++/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
    ++#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
    ++
    + /* Define to 1 if your compiler understands `typeof' or something similar. */
    + #undef HAVE_TYPEOF
    + 
     @@
      /* Define to 1 to build with LDAP support. (--with-ldap) */
      #undef USE_LDAP
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +}
     +
    ++#define HTTPS_SCHEME "https://"
     +#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
     +
     +/*
     + * Ensure that the provider supports the Device Authorization flow (i.e. it
    -+ * accepts the device_code grant type and provides an authorization endpoint).
    ++ * provides an authorization endpoint, and both the token and authorization
    ++ * endpoint URLs seem reasonable).
     + */
     +static bool
     +check_for_device_flow(struct async_ctx *actx)
     +{
     +	const struct provider *provider = &actx->provider;
    -+	const struct curl_slist *grant;
    -+	bool		device_grant_found = false;
     +
     +	Assert(provider->issuer);	/* ensured by parse_provider() */
    -+
    -+	/*------
    -+	 * First, sanity checks for discovery contents that are OPTIONAL in the
    -+	 * spec but required for our flow:
    -+	 * - the issuer must support the device_code grant
    -+	 * - the issuer must have actually given us a
    -+	 *   device_authorization_endpoint
    -+	 */
    -+
    -+	grant = provider->grant_types_supported;
    -+	while (grant)
    -+	{
    -+		if (strcmp(grant->data, OAUTH_GRANT_TYPE_DEVICE_CODE) == 0)
    -+		{
    -+			device_grant_found = true;
    -+			break;
    -+		}
    -+
    -+		grant = grant->next;
    -+	}
    -+
    -+	if (!device_grant_found)
    -+	{
    -+		actx_error(actx, "issuer \"%s\" does not support device code grants",
    -+				   provider->issuer);
    -+		return false;
    -+	}
    ++	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
     +
     +	if (!provider->device_authorization_endpoint)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		return false;
     +	}
     +
    -+	/* TODO: check that the endpoint uses HTTPS */
    ++	/*
    ++	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
    ++	 * was present in the discovery document's grant_types_supported list. MS
    ++	 * Entra does not advertise this grant type, though, and since it doesn't
    ++	 * make sense to stand up a device_authorization_endpoint without also
    ++	 * accepting device codes at the token_endpoint, that's the only thing we
    ++	 * currently require.
    ++	 */
    ++
    ++	/*
    ++	 * Although libcurl will fail later if the URL contains an unsupported
    ++	 * scheme, that error message is going to be a bit opaque. This is a
    ++	 * decent time to bail out if we're not using HTTPS for the endpoints
    ++	 * we'll use for the flow.
    ++	 */
    ++	if (!actx->debugging)
    ++	{
    ++		if (pg_strncasecmp(provider->device_authorization_endpoint,
    ++						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
    ++		{
    ++			actx_error(actx,
    ++					   "device authorization endpoint \"%s\" must use HTTPS",
    ++					   provider->device_authorization_endpoint);
    ++			return false;
    ++		}
    ++
    ++		if (pg_strncasecmp(provider->token_endpoint,
    ++						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
    ++		{
    ++			actx_error(actx,
    ++					   "token endpoint \"%s\" must use HTTPS",
    ++					   provider->token_endpoint);
    ++			return false;
    ++		}
    ++	}
     +
     +	return true;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	return true;
     +}
     +
    ++/*
    ++ * Calls curl_global_init() in a thread-safe way.
    ++ *
    ++ * libcurl has stringent requirements for the thread context in which you call
    ++ * curl_global_init(), because it's going to try initializing a bunch of other
    ++ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
    ++ * the thread-safety situation, but there's a chicken-and-egg problem at
    ++ * runtime: you can't check the thread safety until you've initialized libcurl,
    ++ * which you can't do from within a thread unless you know it's thread-safe...
    ++ *
    ++ * Returns true if initialization was successful. Successful or not, this
    ++ * function will not try to reinitialize Curl on successive calls.
    ++ */
    ++static bool
    ++initialize_curl(PGconn *conn)
    ++{
    ++	/*
    ++	 * Don't let the compiler play tricks with this variable. In the
    ++	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
    ++	 * enter simultaneously, but we do care if this gets set transiently to
    ++	 * PG_BOOL_YES/NO in cases where that's not the final answer.
    ++	 */
    ++	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
    ++#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
    ++	curl_version_info_data *info;
    ++#endif
    ++
    ++#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
    ++
    ++	/*
    ++	 * Lock around the whole function. If a libpq client performs its own work
    ++	 * with libcurl, it must either ensure that Curl is initialized safely
    ++	 * before calling us (in which case our call will be a no-op), or else it
    ++	 * must guard its own calls to curl_global_init() with a registered
    ++	 * threadlock handler. See PQregisterThreadLock().
    ++	 */
    ++	pglock_thread();
    ++#endif
    ++
    ++	/*
    ++	 * Skip initialization if we've already done it. (Curl tracks the number
    ++	 * of calls; there's no point in incrementing the counter every time we
    ++	 * connect.)
    ++	 */
    ++	if (init_successful == PG_BOOL_YES)
    ++		goto done;
    ++	else if (init_successful == PG_BOOL_NO)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"curl_global_init previously failed during OAuth setup");
    ++		goto done;
    ++	}
    ++
    ++	/*
    ++	 * We know we've already initialized Winsock by this point (see
    ++	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
    ++	 * we have to tell libcurl to initialize everything else, because other
    ++	 * pieces of our client executable may already be using libcurl for their
    ++	 * own purposes. If we initialize libcurl with only a subset of its
    ++	 * features, we could break those other clients nondeterministically, and
    ++	 * that would probably be a nightmare to debug.
    ++	 *
    ++	 * If some other part of the program has already called this, it's a
    ++	 * no-op.
    ++	 */
    ++	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
    ++	{
    ++		libpq_append_conn_error(conn,
    ++								"curl_global_init failed during OAuth setup");
    ++		init_successful = PG_BOOL_NO;
    ++		goto done;
    ++	}
    ++
    ++#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
    ++
    ++	/*
    ++	 * If we determined at configure time that the Curl installation is
    ++	 * threadsafe, our job here is much easier. We simply initialize above
    ++	 * without any locking (concurrent or duplicated calls are fine in that
    ++	 * situation), then double-check to make sure the runtime setting agrees,
    ++	 * to try to catch silent downgrades.
    ++	 */
    ++	info = curl_version_info(CURLVERSION_NOW);
    ++	if (!(info->features & CURL_VERSION_THREADSAFE))
    ++	{
    ++		/*
    ++		 * In a downgrade situation, the damage is already done. Curl global
    ++		 * state may be corrupted. Be noisy.
    ++		 */
    ++		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
    ++								"\tCurl initialization was reported threadsafe when libpq\n"
    ++								"\twas compiled, but the currently installed version of\n"
    ++								"\tlibcurl reports that it is not. Recompile libpq against\n"
    ++								"\tthe installed version of libcurl.");
    ++		init_successful = PG_BOOL_NO;
    ++		goto done;
    ++	}
    ++#endif
    ++
    ++	init_successful = PG_BOOL_YES;
    ++
    ++done:
    ++#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
    ++	pgunlock_thread();
    ++#endif
    ++	return (init_successful == PG_BOOL_YES);
    ++}
     +
     +/*
     + * The core nonblocking libcurl implementation. This will be called several
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	fe_oauth_state *state = conn->sasl_state;
     +	struct async_ctx *actx;
     +
    -+	/*
    -+	 * XXX This is not safe. libcurl has stringent requirements for the thread
    -+	 * context in which you call curl_global_init(), because it's going to try
    -+	 * initializing a bunch of other libraries (OpenSSL, Winsock...). And we
    -+	 * probably need to consider both the TLS backend libcurl is compiled
    -+	 * against and what the user has asked us to do via PQinit[Open]SSL.
    -+	 *
    -+	 * Recent versions of libcurl have improved the thread-safety situation,
    -+	 * but you apparently can't check at compile time whether the
    -+	 * implementation is thread-safe, and there's a chicken-and-egg problem
    -+	 * where you can't check the thread safety until you've initialized
    -+	 * libcurl, which you can't do before you've made sure it's thread-safe...
    -+	 *
    -+	 * We know we've already initialized Winsock by this point, so we should
    -+	 * be able to safely skip that bit. But we have to tell libcurl to
    -+	 * initialize everything else, because other pieces of our client
    -+	 * executable may already be using libcurl for their own purposes. If we
    -+	 * initialize libcurl first, with only a subset of its features, we could
    -+	 * break those other clients nondeterministically, and that would probably
    -+	 * be a nightmare to debug.
    -+	 */
    -+	curl_global_init(CURL_GLOBAL_ALL
    -+					 & ~CURL_GLOBAL_WIN32); /* we already initialized Winsock */
    ++	if (!initialize_curl(conn))
    ++		return PGRES_POLLING_FAILED;
     +
     +	if (!state->async_ctx)
     +	{
    @@ src/interfaces/libpq/fe-connect.c: static const internalPQconninfoOption PQconni
      	/* Terminating entry --- MUST BE LAST */
      	{NULL, NULL, NULL, NULL,
      	NULL, NULL, 0}
    +@@ src/interfaces/libpq/fe-connect.c: static const PQEnvironmentOption EnvironmentOptions[] =
    + static const pg_fe_sasl_mech *supported_sasl_mechs[] =
    + {
    + 	&pg_scram_mech,
    ++	&pg_oauth_mech,
    + };
    + #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
    + 
     @@ src/interfaces/libpq/fe-connect.c: pqDropServerData(PGconn *conn)
      	conn->write_failed = false;
      	free(conn->write_err_msg);
    @@ src/interfaces/libpq/fe-connect.c: static inline void
      	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
      	 * rely on the compile-time assertion here to keep us honest.
      	 *
    -@@ src/interfaces/libpq/fe-connect.c: fill_allowed_sasl_mechs(PGconn *conn)
    - 	 * - handle the new mechanism name in the require_auth portion of
    - 	 *   pqConnectOptions2(), below.
    - 	 */
    --	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 1,
    -+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == 2,
    - 					 "fill_allowed_sasl_mechs() must be updated when resizing conn->allowed_sasl_mechs[]");
    - 
    - 	conn->allowed_sasl_mechs[0] = &pg_scram_mech;
    -+	conn->allowed_sasl_mechs[1] = &pg_oauth_mech;
    - }
    - 
    - /*
     @@ src/interfaces/libpq/fe-connect.c: pqConnectOptions2(PGconn *conn)
      			{
      				mech = &pg_scram_mech;
5:  66ef3b4b687 = 5:  035a3832b40 XXX fix libcurl link error
6:  4df1bc59638 ! 6:  5e360725bf9 DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +                {
     +                    "issuer": "{issuer}",
     +                    "token_endpoint": "https://256.256.256.256/token",
    -+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
    -+                },
    -+            ),
    -+            r'cannot run OAuth device authorization: issuer "https://.*" does not support device code grants',
    -+            id="missing device code grants",
    -+        ),
    -+        pytest.param(
    -+            (
    -+                200,
    -+                {
    -+                    "issuer": "{issuer}",
    -+                    "token_endpoint": "https://256.256.256.256/token",
     +                    "grant_types_supported": [
     +                        "urn:ietf:params:oauth:grant-type:device_code"
     +                    ],
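
(Reviewer aside: the HTTPS checks added to check_for_device_flow() above reduce to a case-insensitive prefix comparison on each endpoint URL. A standalone sketch follows; strncasecmp_ascii() here is a hypothetical stand-in for the Postgres-internal pg_strncasecmp(), which is not available outside the tree.)

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

#define HTTPS_SCHEME "https://"

/* Hypothetical stand-in for pg_strncasecmp(): ASCII-only, length-bounded. */
static int
strncasecmp_ascii(const char *a, const char *b, size_t n)
{
	for (size_t i = 0; i < n; i++)
	{
		int			ca = tolower((unsigned char) a[i]);
		int			cb = tolower((unsigned char) b[i]);

		if (ca != cb)
			return ca - cb;
		if (ca == '\0')
			break;
	}
	return 0;
}

/* Returns nonzero if the endpoint URL uses the https scheme. */
static int
endpoint_uses_https(const char *url)
{
	return strncasecmp_ascii(url, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0;
}
```

Checking both the device authorization endpoint and the token endpoint up front means the user sees a clear "must use HTTPS" error instead of libcurl's more opaque unsupported-scheme failure later in the flow.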
v44-0001-Move-PG_MAX_AUTH_TOKEN_LENGTH-to-libpq-auth.h.patch
From 258b8dbb77021a3943aec88612bdc9796202a308 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v44 1/6] Move PG_MAX_AUTH_TOKEN_LENGTH to libpq/auth.h

OAUTHBEARER would like to use this as a limit on Bearer token messages
coming from the client, so promote it to the header file.
---
 src/backend/libpq/auth.c | 16 ----------------
 src/include/libpq/auth.h | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 46facc275ef..d6ef32cc823 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -201,22 +201,6 @@ static int	CheckRADIUSAuth(Port *port);
 static int	PerformRadiusTransaction(const char *server, const char *secret, const char *portstr, const char *identifier, const char *user_name, const char *passwd);
 
 
-/*
- * Maximum accepted size of GSS and SSPI authentication tokens.
- * We also use this as a limit on ordinary password packet lengths.
- *
- * Kerberos tickets are usually quite small, but the TGTs issued by Windows
- * domain controllers include an authorization field known as the Privilege
- * Attribute Certificate (PAC), which contains the user's Windows permissions
- * (group memberships etc.). The PAC is copied into all tickets obtained on
- * the basis of this TGT (even those issued by Unix realms which the Windows
- * realm trusts), and can be several kB in size. The maximum token size
- * accepted by Windows systems is determined by the MaxAuthToken Windows
- * registry setting. Microsoft recommends that it is not set higher than
- * 65535 bytes, so that seems like a reasonable limit for us as well.
- */
-#define PG_MAX_AUTH_TOKEN_LENGTH	65535
-
 /*----------------------------------------------------------------
  * Global authentication functions
  *----------------------------------------------------------------
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 9157dbe6092..902c5f6de32 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -16,6 +16,22 @@
 
 #include "libpq/libpq-be.h"
 
+/*
+ * Maximum accepted size of GSS and SSPI authentication tokens.
+ * We also use this as a limit on ordinary password packet lengths.
+ *
+ * Kerberos tickets are usually quite small, but the TGTs issued by Windows
+ * domain controllers include an authorization field known as the Privilege
+ * Attribute Certificate (PAC), which contains the user's Windows permissions
+ * (group memberships etc.). The PAC is copied into all tickets obtained on
+ * the basis of this TGT (even those issued by Unix realms which the Windows
+ * realm trusts), and can be several kB in size. The maximum token size
+ * accepted by Windows systems is determined by the MaxAuthToken Windows
+ * registry setting. Microsoft recommends that it is not set higher than
+ * 65535 bytes, so that seems like a reasonable limit for us as well.
+ */
+#define PG_MAX_AUTH_TOKEN_LENGTH	65535
+
 extern PGDLLIMPORT char *pg_krb_server_keyfile;
 extern PGDLLIMPORT bool pg_krb_caseins_users;
 extern PGDLLIMPORT bool pg_gss_accept_delegation;
-- 
2.34.1
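
(A minimal sketch of how a consumer of the newly-promoted constant might guard an incoming message length before reading it; auth_token_length_ok() is a hypothetical helper for illustration, not part of the patch.)

```c
#include <assert.h>
#include <stddef.h>

/* Value promoted to libpq/auth.h by the patch above. */
#define PG_MAX_AUTH_TOKEN_LENGTH	65535

/*
 * Hypothetical length guard: reject oversized authentication messages
 * (GSS/SSPI tokens, password packets, or OAUTHBEARER bearer tokens)
 * before allocating a buffer for them.
 */
static int
auth_token_length_ok(size_t msglen)
{
	return msglen <= PG_MAX_AUTH_TOKEN_LENGTH;
}
```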

v44-0002-require_auth-prepare-for-multiple-SASL-mechanism.patch
From ec960cf363d908372b364e91b3c36c98bf52cdef Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 16 Dec 2024 13:57:14 -0800
Subject: [PATCH v44 2/6] require_auth: prepare for multiple SASL mechanisms

Prior to this patch, the require_auth implementation assumed that the
AuthenticationSASL protocol message was synonymous with SCRAM-SHA-256.
In preparation for the OAUTHBEARER SASL mechanism, split the
implementation into two tiers: the first checks the acceptable
AUTH_REQ_* codes, and the second checks acceptable mechanisms if
AUTH_REQ_SASL et al are permitted.

conn->allowed_sasl_mechs is the list of pointers to acceptable
mechanisms. (Since we'll support only a small number of mechanisms, this
is an array of static length to minimize bookkeeping.) pg_SASL_init()
will bail if the selected mechanism isn't contained in this array.

Since there's only one mechanism supported right now, one branch of the
second tier cannot be exercised yet (it's marked with Assert(false)).
This assertion will need to be removed when the next mechanism is added.
---
 src/interfaces/libpq/fe-auth.c            |  29 ++++
 src/interfaces/libpq/fe-connect.c         | 184 ++++++++++++++++++++--
 src/interfaces/libpq/libpq-int.h          |   2 +
 src/test/authentication/t/001_password.pl |  10 ++
 4 files changed, 208 insertions(+), 17 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 7e478489b71..70753d8ec29 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -543,6 +543,35 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
+	/* Make sure require_auth is satisfied. */
+	if (conn->require_auth)
+	{
+		bool		allowed = false;
+
+		for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		{
+			if (conn->sasl == conn->allowed_sasl_mechs[i])
+			{
+				allowed = true;
+				break;
+			}
+		}
+
+		if (!allowed)
+		{
+			/*
+			 * TODO: this is dead code until a second SASL mechanism is added;
+			 * the connection can't have proceeded past check_expected_areq()
+			 * if no SASL methods are allowed.
+			 */
+			Assert(false);
+
+			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
+									conn->require_auth, selected_mechanism);
+			goto error;
+		}
+	}
+
 	if (conn->channel_binding[0] == 'r' &&	/* require */
 		strcmp(selected_mechanism, SCRAM_SHA_256_PLUS_NAME) != 0)
 	{
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 7878e2e33af..e1cea790f9e 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -396,6 +396,12 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 	}
 };
 
+static const pg_fe_sasl_mech *supported_sasl_mechs[] =
+{
+	&pg_scram_mech,
+};
+#define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
+
 /* The connection URI must start with either of the following designators: */
 static const char uri_designator[] = "postgresql://";
 static const char short_uri_designator[] = "postgres://";
@@ -1117,6 +1123,57 @@ libpq_prng_init(PGconn *conn)
 	pg_prng_seed(&conn->prng_state, rseed);
 }
 
+/*
+ * Fills the connection's allowed_sasl_mechs list with all supported SASL
+ * mechanisms.
+ */
+static inline void
+fill_allowed_sasl_mechs(PGconn *conn)
+{
+	/*---
+	 * We only support one mechanism at the moment, so rather than deal with a
+	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
+	 * rely on the compile-time assertion here to keep us honest.
+	 *
+	 * To add a new mechanism to require_auth,
+	 * - add it to supported_sasl_mechs,
+	 * - update the length of conn->allowed_sasl_mechs,
+	 * - handle the new mechanism name in the require_auth portion of
+	 *   pqConnectOptions2(), below.
+	 */
+	StaticAssertDecl(lengthof(conn->allowed_sasl_mechs) == SASL_MECHANISM_COUNT,
+					 "conn->allowed_sasl_mechs[] is not sufficiently large for holding all supported SASL mechanisms");
+
+	for (int i = 0; i < SASL_MECHANISM_COUNT; i++)
+		conn->allowed_sasl_mechs[i] = supported_sasl_mechs[i];
+}
+
+/*
+ * Clears the connection's allowed_sasl_mechs list.
+ */
+static inline void
+clear_allowed_sasl_mechs(PGconn *conn)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+		conn->allowed_sasl_mechs[i] = NULL;
+}
+
+/*
+ * Helper routine that searches the static allowed_sasl_mechs list for a
+ * specific mechanism.
+ */
+static inline int
+index_of_allowed_sasl_mech(PGconn *conn, const pg_fe_sasl_mech *mech)
+{
+	for (int i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+	{
+		if (conn->allowed_sasl_mechs[i] == mech)
+			return i;
+	}
+
+	return -1;
+}
+
 /*
  *		pqConnectOptions2
  *
@@ -1358,17 +1415,19 @@ pqConnectOptions2(PGconn *conn)
 		bool		negated = false;
 
 		/*
-		 * By default, start from an empty set of allowed options and add to
-		 * it.
+		 * By default, start from an empty set of allowed methods and
+		 * mechanisms, and add to it.
 		 */
 		conn->auth_required = true;
 		conn->allowed_auth_methods = 0;
+		clear_allowed_sasl_mechs(conn);
 
 		for (first = true, more = true; more; first = false)
 		{
 			char	   *method,
 					   *part;
-			uint32		bits;
+			uint32		bits = 0;
+			const pg_fe_sasl_mech *mech = NULL;
 
 			part = parse_comma_separated_list(&s, &more);
 			if (part == NULL)
@@ -1384,11 +1443,12 @@ pqConnectOptions2(PGconn *conn)
 				if (first)
 				{
 					/*
-					 * Switch to a permissive set of allowed options, and
-					 * subtract from it.
+					 * Switch to a permissive set of allowed methods and
+					 * mechanisms, and subtract from it.
 					 */
 					conn->auth_required = false;
 					conn->allowed_auth_methods = -1;
+					fill_allowed_sasl_mechs(conn);
 				}
 				else if (!negated)
 				{
@@ -1413,6 +1473,10 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
+			/*
+			 * First group: methods that can be handled solely with the
+			 * authentication request codes.
+			 */
 			if (strcmp(method, "password") == 0)
 			{
 				bits = (1 << AUTH_REQ_PASSWORD);
@@ -1431,13 +1495,21 @@ pqConnectOptions2(PGconn *conn)
 				bits = (1 << AUTH_REQ_SSPI);
 				bits |= (1 << AUTH_REQ_GSS_CONT);
 			}
+
+			/*
+			 * Next group: SASL mechanisms. All of these use the same request
+			 * codes, so the list of allowed mechanisms is tracked separately.
+			 *
+			 * supported_sasl_mechs must contain all mechanisms handled here.
+			 */
 			else if (strcmp(method, "scram-sha-256") == 0)
 			{
-				/* This currently assumes that SCRAM is the only SASL method. */
-				bits = (1 << AUTH_REQ_SASL);
-				bits |= (1 << AUTH_REQ_SASL_CONT);
-				bits |= (1 << AUTH_REQ_SASL_FIN);
+				mech = &pg_scram_mech;
 			}
+
+			/*
+			 * Final group: meta-options.
+			 */
 			else if (strcmp(method, "none") == 0)
 			{
 				/*
@@ -1473,20 +1545,68 @@ pqConnectOptions2(PGconn *conn)
 				return false;
 			}
 
-			/* Update the bitmask. */
-			if (negated)
+			if (mech)
 			{
-				if ((conn->allowed_auth_methods & bits) == 0)
-					goto duplicate;
+				/*
+				 * Update the mechanism set only. The method bitmask will be
+				 * updated for SASL further down.
+				 */
+				Assert(!bits);
+
+				if (negated)
+				{
+					/* Remove the existing mechanism from the list. */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i < 0)
+						goto duplicate;
+
+					conn->allowed_sasl_mechs[i] = NULL;
+				}
+				else
+				{
+					/*
+					 * Find a space to put the new mechanism (after making
+					 * sure it's not already there).
+					 */
+					i = index_of_allowed_sasl_mech(conn, mech);
+					if (i >= 0)
+						goto duplicate;
 
-				conn->allowed_auth_methods &= ~bits;
+					i = index_of_allowed_sasl_mech(conn, NULL);
+					if (i < 0)
+					{
+						/* Should not happen; the pointer list is corrupted. */
+						Assert(false);
+
+						conn->status = CONNECTION_BAD;
+						libpq_append_conn_error(conn,
+												"internal error: no space in allowed_sasl_mechs");
+						free(part);
+						return false;
+					}
+
+					conn->allowed_sasl_mechs[i] = mech;
+				}
 			}
 			else
 			{
-				if ((conn->allowed_auth_methods & bits) == bits)
-					goto duplicate;
+				/* Update the method bitmask. */
+				Assert(bits);
+
+				if (negated)
+				{
+					if ((conn->allowed_auth_methods & bits) == 0)
+						goto duplicate;
+
+					conn->allowed_auth_methods &= ~bits;
+				}
+				else
+				{
+					if ((conn->allowed_auth_methods & bits) == bits)
+						goto duplicate;
 
-				conn->allowed_auth_methods |= bits;
+					conn->allowed_auth_methods |= bits;
+				}
 			}
 
 			free(part);
@@ -1505,6 +1625,36 @@ pqConnectOptions2(PGconn *conn)
 			free(part);
 			return false;
 		}
+
+		/*
+		 * Finally, allow SASL authentication requests if (and only if) we've
+		 * allowed any mechanisms.
+		 */
+		{
+			bool		allowed = false;
+			const uint32 sasl_bits =
+				(1 << AUTH_REQ_SASL)
+				| (1 << AUTH_REQ_SASL_CONT)
+				| (1 << AUTH_REQ_SASL_FIN);
+
+			for (i = 0; i < lengthof(conn->allowed_sasl_mechs); i++)
+			{
+				if (conn->allowed_sasl_mechs[i])
+				{
+					allowed = true;
+					break;
+				}
+			}
+
+			/*
+			 * For the standard case, add the SASL bits to the (default-empty)
+			 * set if needed. For the negated case, remove them.
+			 */
+			if (!negated && allowed)
+				conn->allowed_auth_methods |= sasl_bits;
+			else if (negated && !allowed)
+				conn->allowed_auth_methods &= ~sasl_bits;
+		}
 	}
 
 	/*
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 4be5fd7ae4f..e0d5b5fe0be 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -505,6 +505,8 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
+	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
 	char		current_auth_response;	/* used by pqTraceOutputMessage to
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 773238b76fd..1357f806b6f 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -277,6 +277,16 @@ $node->connect_fails(
 	"require_auth methods cannot be duplicated, !none case",
 	expected_stderr =>
 	  qr/require_auth method "!none" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=scram-sha-256,scram-sha-256",
+	"require_auth methods cannot be duplicated, scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "scram-sha-256" is specified more than once/);
+$node->connect_fails(
+	"user=scram_role require_auth=!scram-sha-256,!scram-sha-256",
+	"require_auth methods cannot be duplicated, !scram-sha-256 case",
+	expected_stderr =>
+	  qr/require_auth method "!scram-sha-256" is specified more than once/);
 
 # Unknown value defined in require_auth.
 $node->connect_fails(
-- 
2.34.1
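
(The fixed-array bookkeeping described in the commit message above can be sketched in isolation. Identifiers are simplified and hypothetical; the real code stores pg_fe_sasl_mech pointers and reports errors via libpq_append_conn_error().)

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SASL_MECHS 2

/* Opaque stand-ins for the static mechanism descriptors. */
static int	scram_mech;
static int	oauth_mech;

/* Fixed-length slot array, NULL-initialized; no linked-list bookkeeping. */
static const int *allowed_sasl_mechs[MAX_SASL_MECHS];

/* Mirrors index_of_allowed_sasl_mech(): linear search over the slots. */
static int
index_of_mech(const int *mech)
{
	for (int i = 0; i < MAX_SASL_MECHS; i++)
	{
		if (allowed_sasl_mechs[i] == mech)
			return i;
	}
	return -1;
}

/*
 * Adding a mechanism: fail on duplicates (as require_auth does for
 * "scram-sha-256,scram-sha-256"), otherwise claim the first NULL slot.
 * Returns the slot index, or -1 on failure.
 */
static int
allow_mech(const int *mech)
{
	int			i;

	if (index_of_mech(mech) >= 0)
		return -1;				/* duplicate entry */

	i = index_of_mech(NULL);
	if (i < 0)
		return -1;				/* no free slot */

	allowed_sasl_mechs[i] = mech;
	return i;
}
```

Negation works the same way in reverse: look the mechanism up and NULL out its slot, treating a miss as the duplicate-negation error.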

v44-0003-libpq-handle-asynchronous-actions-during-SASL.patch
From 9725788086c74dbec0a4feb895ff2f401dff83ea Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v44 3/6] libpq: handle asynchronous actions during SASL

This adds the ability for a SASL mechanism to signal to PQconnectPoll()
that some arbitrary work must be done, external to the Postgres
connection, before authentication can continue. The intent is for the
upcoming OAUTHBEARER mechanism to make use of this functionality.

To ensure that threads are not blocked waiting for the SASL mechanism to
make long-running calls, the mechanism communicates with the top-level
client via the "altsock": a file or socket descriptor, opaque to this
layer of libpq, which is signaled when work is ready to be done again.
This socket temporarily takes the place of the standard connection
descriptor, so PQsocket() clients should continue to operate correctly
using their existing polling implementations.

A mechanism should set an authentication callback (conn->async_auth())
and a cleanup callback (conn->cleanup_async_auth()), return SASL_ASYNC
during the exchange, and assign conn->altsock during the first call to
async_auth(). When the cleanup callback is called, either because
authentication has succeeded or because the connection is being
dropped, the altsock must be released and disconnected from the PGconn.
---
 src/interfaces/libpq/fe-auth-sasl.h  |  11 ++-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       | 120 ++++++++++++++++++++-------
 src/interfaces/libpq/fe-auth.h       |   3 +-
 src/interfaces/libpq/fe-connect.c    |  93 ++++++++++++++++++++-
 src/interfaces/libpq/fe-misc.c       |  35 +++++---
 src/interfaces/libpq/libpq-fe.h      |   2 +
 src/interfaces/libpq/libpq-int.h     |   6 ++
 8 files changed, 227 insertions(+), 49 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index f0c62139092..f06f547c07d 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,18 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth/cleanup_async_auth appropriately
+	 *					before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 557e9c568b6..fe18615197f 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -205,7 +206,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 70753d8ec29..761ee8f88f7 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -430,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -607,26 +607,54 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 *
+		 * In non-assertion builds, this postcondition is enforced at time of
+		 * use in PQconnectPoll().
+		 */
+		Assert(conn->async_auth);
+		Assert(conn->cleanup_async_auth);
+
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -671,7 +699,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -701,11 +729,25 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+
+		/*
+		 * The mechanism may optionally generate some output to send before
+		 * switching over to async auth, so continue onwards.
+		 */
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -1013,12 +1055,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1176,7 +1224,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1185,23 +1233,33 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 
 		case AUTH_REQ_SASL_CONT:
 		case AUTH_REQ_SASL_FIN:
-			if (conn->sasl_state == NULL)
 			{
-				appendPQExpBufferStr(&conn->errorMessage,
-									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
-				return STATUS_ERROR;
-			}
-			oldmsglen = conn->errorMessage.len;
-			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
-			{
-				/* Use this message if pg_SASL_continue didn't supply one */
-				if (conn->errorMessage.len == oldmsglen)
+				bool		final = false;
+
+				if (conn->sasl_state == NULL)
+				{
 					appendPQExpBufferStr(&conn->errorMessage,
-										 "fe_sendauth: error in SASL authentication\n");
-				return STATUS_ERROR;
+										 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
+					return STATUS_ERROR;
+				}
+				oldmsglen = conn->errorMessage.len;
+
+				if (areq == AUTH_REQ_SASL_FIN)
+					final = true;
+
+				if (pg_SASL_continue(conn, payloadlen, final, async) != STATUS_OK)
+				{
+					/*
+					 * Append a generic error message unless pg_SASL_continue
+					 * did set a more specific one already.
+					 */
+					if (conn->errorMessage.len == oldmsglen)
+						appendPQExpBufferStr(&conn->errorMessage,
+											 "fe_sendauth: error in SASL authentication\n");
+					return STATUS_ERROR;
+				}
+				break;
 			}
-			break;
 
 		default:
 			libpq_append_conn_error(conn, "authentication method %u not supported", areq);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index df0a68b0b21..1d4991f8996 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -19,7 +19,8 @@
 
 
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index e1cea790f9e..85d1ca2864f 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -507,6 +507,19 @@ pqDropConnection(PGconn *conn, bool flushInput)
 	conn->cmd_queue_recycle = NULL;
 
 	/* Free authentication/encryption state */
+	if (conn->cleanup_async_auth)
+	{
+		/*
+		 * Any in-progress async authentication should be torn down first so
+		 * that cleanup_async_auth() can depend on the other authentication
+		 * state if necessary.
+		 */
+		conn->cleanup_async_auth(conn);
+		conn->cleanup_async_auth = NULL;
+	}
+	conn->async_auth = NULL;
+	/* cleanup_async_auth() should have done this, but make sure */
+	conn->altsock = PGINVALID_SOCKET;
 #ifdef ENABLE_GSS
 	{
 		OM_uint32	min_s;
@@ -2853,6 +2866,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3888,6 +3902,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -4076,7 +4091,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -4113,6 +4138,69 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+
+				if (!conn->async_auth || !conn->cleanup_async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn,
+											"internal error: async authentication has no handler");
+					goto error_return;
+				}
+
+				/* Drive some external authentication work. */
+				status = conn->async_auth(conn);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/* Done. Tear down the async implementation. */
+					conn->cleanup_async_auth(conn);
+					conn->cleanup_async_auth = NULL;
+
+					/*
+					 * Cleanup must unset altsock, both as an indication that
+					 * it's been released, and to stop pqSocketCheck from
+					 * looking at the wrong socket after async auth is done.
+					 */
+					if (conn->altsock != PGINVALID_SOCKET)
+					{
+						Assert(false);
+						libpq_append_conn_error(conn,
+												"internal error: async cleanup did not release polling socket");
+						goto error_return;
+					}
+
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+
+					goto keep_going;
+				}
+
+				/*
+				 * Caller needs to poll some more. conn->async_auth() should
+				 * have assigned an altsock to poll on.
+				 */
+				if (conn->altsock == PGINVALID_SOCKET)
+				{
+					Assert(false);
+					libpq_append_conn_error(conn,
+											"internal error: async authentication did not set a socket for polling");
+					goto error_return;
+				}
+
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4794,6 +4882,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -7445,6 +7534,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 2c60eb5b569..d78445c70af 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1049,34 +1049,43 @@ pqWriteReady(PGconn *conn)
  * or both.  Returns >0 if one or more conditions are met, 0 if it timed
  * out, -1 if an error occurred.
  *
- * If SSL is in use, the SSL buffer is checked prior to checking the socket
- * for read data directly.
+ * If an altsock is set for asynchronous authentication, that will be used in
+ * preference to the "server" socket. Otherwise, if SSL is in use, the SSL
+ * buffer is checked prior to checking the socket for read data directly.
  */
 static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	if (conn->altsock != PGINVALID_SOCKET)
+		sock = conn->altsock;
+	else
 	{
-		libpq_append_conn_error(conn, "invalid socket");
-		return -1;
-	}
+		sock = conn->sock;
+		if (sock == PGINVALID_SOCKET)
+		{
+			libpq_append_conn_error(conn, "invalid socket");
+			return -1;
+		}
 
 #ifdef USE_SSL
-	/* Check for SSL library buffering read bytes */
-	if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
-	{
-		/* short-circuit the select */
-		return 1;
-	}
+		/* Check for SSL library buffering read bytes */
+		if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
+		{
+			/* short-circuit the select */
+			return 1;
+		}
 #endif
+	}
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index cce9ce60c55..a3491faf0c3 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -103,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0d5b5fe0be..2546f9f8a50 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -513,6 +513,12 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callbacks for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn);
+	void		(*cleanup_async_auth) (PGconn *conn);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
-- 
2.34.1

Attachment: v44-0004-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From a260d9436f08834cf26e6666a680203815ee6175 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v44 4/6] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

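For context, the server side of this (the new oauth HBA method) might be configured with an entry like the following. The parameter names shown are illustrative for this proof of concept; the authoritative list is in the patched client-auth.sgml:

    # illustrative pg_hba.conf entry for the new auth method
    host  all  all  samehost  oauth  issuer="https://oauth.example.org"  scope="openid"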
The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   42 +
 configure                                     |  279 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  393 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   66 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2635 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1141 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  264 ++
 .../modules/oauth_validator/t/001_server.pl   |  551 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 59 files changed, 8598 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89d..8c518c317e7 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -219,6 +219,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -312,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..86a3750f9e5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,45 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is threadsafe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index ceeef9b0915..115a91f8f4a 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,123 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index d713360f340..e8f1a7db9de 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources accessed by the
+        client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; one must be obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, expressed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
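To make the options above concrete, an HBA entry using the oauth method might look like the following sketch. The issuer URL, scope list, and validator name are placeholders for illustration; appropriate values depend on your provider and validator module:

```
# pg_hba.conf (hypothetical values)
# TYPE  DATABASE  USER  ADDRESS   METHOD  OPTIONS
host    all       all   samehost  oauth   issuer="https://oauth.example.com" scope="openid postgres" validator="my_validator"
```

The validator option could be omitted here if oauth_validator_libraries names exactly one library.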
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a782f109982..d7bac61a7fe 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
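As a concrete illustration of the GUC described above, a single validator module could be configured like this (the library name is hypothetical); with exactly one entry, oauth HBA lines need not set a validator explicitly:

```
# postgresql.conf (hypothetical library name)
oauth_validator_libraries = 'my_validator'
```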
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..96e433179b9 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c208..9a69ffbc5b3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
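Put together, a client connection using these options might look like the following sketch; the host, issuer, and client ID are placeholders, and the actual values must match the server's HBA settings and your authorization server's registration:

```
psql 'host=example.org dbname=postgres oauth_issuer=https://oauth.example.com oauth_client_id=my-client-id'
```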
@@ -10020,6 +10129,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A <quote>dangerous debugging mode</quote> may be enabled by setting the
+    environment variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality
+    is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10473,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using libcurl inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of libcurl that are built to support threadsafe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
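Assuming the shape of libpq's thread-lock handler (a function called with a nonzero argument to acquire the lock and zero to release it), the cooperative-locking pattern might be sketched as follows; the registration and libcurl calls are shown only in the comment, since they require the real libraries:

```c
#include <pthread.h>

/* A handler matching the shape expected by PQregisterThreadLock:
 * nonzero acquires the lock, zero releases it. */
static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

static void
thread_lock_handler(int acquire)
{
    if (acquire)
        pthread_mutex_lock(&init_lock);
    else
        pthread_mutex_unlock(&init_lock);
}

/*
 * At startup, before spawning threads, the application would register the
 * handler and then hold the lock around its own libcurl initialization:
 *
 *     PQregisterThreadLock(thread_lock_handler);
 *     ...
 *     thread_lock_handler(1);
 *     curl_global_init(CURL_GLOBAL_ALL);
 *     thread_lock_handler(0);
 */
```

With the same handler registered in libpq, the application's initialization and libpq's cannot race each other.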
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    A validator module has three separate responsibilities, described in
+    detail below:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
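As one small illustration of the moving parts, here is a sketch of the form-encoded body of an RFC 7662 introspection request; the surrounding HTTP request and client authentication are omitted:

```c
#include <stdio.h>
#include <string.h>

/*
 * Builds the form-encoded body of an RFC 7662 introspection request.
 * Real code must also percent-encode the token and send the request over
 * authenticated TLS, since introspection endpoints normally require the
 * resource server to present its own credentials.
 */
static int
build_introspection_body(const char *token, char *buf, size_t len)
{
    int n = snprintf(buf, len,
                     "token=%s&token_type_hint=access_token", token);

    return (n > 0 && (size_t) n < len) ? n : -1;
}
```

The provider's response indicates (via its <literal>active</literal> field) whether the token is currently valid, along with any claims the provider chooses to share.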
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
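The post-signature checks can be sketched as below, assuming the claims have already been parsed out of a cryptographically verified token; the field names follow the standard JWT claims ("iss", "aud", "nbf", "exp"), but the exact set of required checks is dictated by the provider:

```c
#include <stdbool.h>
#include <string.h>
#include <time.h>

/* Claims assumed extracted from an already signature-checked token. */
typedef struct
{
    const char *issuer;         /* "iss" */
    const char *audience;       /* "aud" */
    time_t      not_before;     /* "nbf" */
    time_t      expires_at;     /* "exp" */
} TokenClaims;

/*
 * The checks that must pass after signature verification: the token is
 * from the expected issuer, meant for this server, and currently valid.
 */
static bool
claims_acceptable(const TokenClaims *c, const char *trusted_issuer,
                  const char *my_audience, time_t now)
{
    if (strcmp(c->issuer, trusted_issuer) != 0)
        return false;           /* wrong "where is this token from?" */
    if (strcmp(c->audience, my_audience) != 0)
        return false;           /* wrong "who is this token for?" */
    if (now < c->not_before || now >= c->expires_at)
        return false;           /* outside "when can this token be used?" */
    return true;
}
```

Rejecting on the first failed check keeps the logic fail-closed; a token must pass every test to be considered valid.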
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
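A minimal sketch of such a scope check, over OAuth's space-separated scope-list format (the scope name in the test is invented; the required scope would come from the validator's configuration):

```c
#include <stdbool.h>
#include <string.h>

/* Returns true if the space-separated scope list contains "required"
 * as an exact token (no partial matches). */
static bool
has_scope(const char *scopes, const char *required)
{
    size_t rlen = strlen(required);

    while (*scopes)
    {
        const char *end = strchr(scopes, ' ');
        size_t len = end ? (size_t) (end - scopes) : strlen(scopes);

        if (len == rlen && strncmp(scopes, required, rlen) == 0)
            return true;
        scopes = end ? end + 1 : scopes + len;
    }
    return false;
}
```

Note the exact-token comparison: matching on prefixes would let a scope like <literal>pg:connectx</literal> satisfy a check for <literal>pg:connect</literal>.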
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
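The retry pattern might be sketched as follows; <function>CHECK_FOR_INTERRUPTS()</function> is stubbed out so the fragment stands alone, whereas a real module would include the server's <filename>miscadmin.h</filename> and use the genuine macro:

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Stand-in for the server's macro, so this sketch compiles standalone. */
static void handle_pending_interrupts(void) { /* stub for this sketch */ }
#define CHECK_FOR_INTERRUPTS() handle_pending_interrupts()

/*
 * Reads from a descriptor while staying responsive to interrupts: on
 * EINTR/EAGAIN, pending interrupts are serviced before retrying.
 */
static ssize_t
read_interruptible(int fd, void *buf, size_t len)
{
    for (;;)
    {
        ssize_t n = read(fd, buf, len);

        if (n >= 0)
            return n;
        if (errno != EINTR && errno != EAGAIN)
            return -1;          /* a real error; report it to the caller */
        CHECK_FOR_INTERRUPTS(); /* lets timeouts and shutdown take effect */
    }
}
```

The same shape applies to any blocking call or long-running loop in a validator.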
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For instance, is it an email
+       address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To provide
+   the validator callbacks, and to indicate that the library is an OAuth
+   validator module, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    needs to keep state, it can use <structfield>state->private_data</structfield>
+    to store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its built-in flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
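For the second (token-bearing) connection, the client initial response can be sketched as a byte string, per the RFC 7628 grammar: the GS2 header, a <literal>0x01</literal> separator, the single <literal>auth</literal> pair, and a final <literal>0x01</literal> terminator. The discovery variant is deliberately not sketched here:

```c
#include <stdio.h>
#include <string.h>

/*
 * Builds the OAUTHBEARER client initial response for a connection that
 * already holds a token: "n,," GS2 header, \x01 separator, the auth
 * key/value pair (itself \x01-terminated), then the final \x01.
 */
static int
build_initial_response(const char *bearer_token, char *buf, size_t len)
{
    int n;

    if (bearer_token == NULL || *bearer_token == '\0')
        return -1;          /* discovery connections are not sketched here */

    n = snprintf(buf, len, "n,,\x01" "auth=Bearer %s\x01\x01", bearer_token);
    return (n > 0 && (size_t) n < len) ? n : -1;
}
```

(The string literals are split after each <literal>\x01</literal> escape so that a following hex digit is not absorbed into the escape sequence.)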
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an
+      error <literal>status</literal>, alongside a well-known URI and the
+      scopes that the client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing the empty set (a single
+      <literal>0x01</literal> byte) to finish its half of the discovery
+      exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorResponse message to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, it sends an
+      AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
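With this addition, the new suite can be opted into alongside the other extra tests shown in the hunk header; for example (command shown for illustration):

```
make check-world PG_TEST_EXTRA='oauth'
```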
diff --git a/meson.build b/meson.build
index 8e128f4982a..3b35f1f0c9e 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,67 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports threadsafe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is threadsafe')
+      elif r.returncode() == 1
+        message('curl_global_init is not threadsafe')
+      else
+        message('curl_global_init failed; assuming not threadsafe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3034,6 +3095,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3704,6 +3769,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..6155d63a116
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
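For reference, the failure message built by generate_error_response() comes out roughly as follows (issuer and scope values are placeholders, and the well-known suffix is only appended when the issuer does not already contain one):

```json
{ "status": "invalid_token", "openid-configuration": "https://example.org/.well-known/openid-configuration", "scope": "openid" }
```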
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always
+	 * be the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Free the validation result from the validator module once we're done
+	 * with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
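Putting it together, a hypothetical pg_hba.conf entry for this method might look as follows (all values are placeholders). Per check_oauth_validator() above, the validator option may be omitted only when oauth_validator_libraries names exactly one library:

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth  issuer="https://example.org" scope="openid" validator="my_validator"
```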
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum("delegate_ident_mapping=true");
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 38cb9e970d5..db582d2d62c 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4823,6 +4824,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 079efa1baa7..678de38a1c0 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..c04ee38d086 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 6a0def7273c..e9422888e3e 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..2407200ea97
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2635 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison rather than comparing the whole
+	 * string, since media type parameters may follow the type itself.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
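The acceptance rules above can be exercised in isolation. The following is a simplified sketch, not the patch's code: `content_type_matches` is a hypothetical stand-in for check_content_type(), using POSIX strncasecmp in place of pg_strncasecmp and returning a bool instead of reporting through actx.

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/*
 * Sketch of the Content-Type check: the expected type must match as a
 * prefix, followed either by end-of-string or by HTTP optional whitespace
 * and a semicolon introducing media type parameters.
 */
static bool
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	if (content_type[type_len] == '\0')
		return true;			/* exact match */

	for (size_t i = type_len; content_type[i]; ++i)
	{
		switch (content_type[i])
		{
			case ';':
				return true;	/* start of media type parameters */

			case ' ':
			case '\t':
				break;			/* HTTP optional whitespace */

			default:
				return false;
		}
	}

	return false;				/* trailing whitespace with no parameters */
}
```

Note that trailing whitespace without a following semicolon is rejected, matching the fall-through-to-fail behavior in the loop above.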
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
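The millisecond-to-itimerspec conversion used on the epoll/timerfd path can be checked in isolation. This is a sketch with a hypothetical `timeout_to_timespec` helper, including the 1 ns substitution for zero timeouts:

```c
#include <time.h>

/*
 * Sketch of the timerfd timeout conversion: zero timeouts become a 1 ns
 * timer (timerfd has no way to fire "immediately"), and negative timeouts
 * produce the zeroed timespec that disarms the timer.
 */
static struct timespec
timeout_to_timespec(long timeout_ms)
{
	struct timespec ts = {0};

	if (timeout_ms == 0)
		ts.tv_nsec = 1;
	else if (timeout_ms > 0)
	{
		ts.tv_sec = timeout_ms / 1000;
		ts.tv_nsec = (timeout_ms % 1000) * 1000000;
	}

	return ts;
}
```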
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled on the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect while CURLOPT_VERBOSE is enabled, so
+		 * set the two options in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each chunk is
+ * defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * If we ran out of memory while accepting the data, signal an error so
+	 * that the transfer is aborted.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
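The size accounting here follows the standard libcurl write-callback contract: libcurl hands the callback size * nmemb bytes, and returning any other count aborts the transfer. A minimal sketch of just the guard, with a hypothetical `accept_chunk` helper and `RESPONSE_CAP` standing in for MAX_OAUTH_RESPONSE_SIZE:

```c
#include <stddef.h>

#define RESPONSE_CAP (256 * 1024)	/* stand-in for MAX_OAUTH_RESPONSE_SIZE */

/*
 * Sketch of the write-callback size check: given the bytes accumulated so
 * far and a new chunk of size * nmemb bytes, return the number of bytes
 * consumed. Returning 0 (anything other than the full chunk) tells libcurl
 * to abort the transfer.
 */
static size_t
accept_chunk(size_t current_len, size_t size, size_t nmemb)
{
	size_t		len = size * nmemb;

	if (current_len + len > RESPONSE_CAP)
		return 0;				/* abort: response too large */

	return len;					/* full chunk consumed */
}
```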
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
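The search-and-replace described above can be sketched without the PQExpBuffer machinery. This is an in-place variant with a hypothetical `plus_for_space` helper; since "+" is shorter than "%20", the output never outgrows the input buffer:

```c
#include <string.h>

/*
 * Sketch of the query-flavored encoding fixup: rewrite every "%20" produced
 * by the generic percent-encoder into "+", in place.
 */
static void
plus_for_space(char *s)
{
	char	   *src = s;
	char	   *dst = s;
	char	   *match;

	while ((match = strstr(src, "%20")) != NULL)
	{
		size_t		len = match - src;

		/* Copy the unmatched portion, followed by the plus sign. */
		memmove(dst, src, len);
		dst += len;
		*dst++ = '+';

		/* Keep searching after the match. */
		src = match + 3;		/* strlen("%20") */
	}

	/* Copy the remainder of the string, including the terminating NUL. */
	memmove(dst, src, strlen(src) + 1);
}
```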
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides a device authorization endpoint, and both the token and device
+ * authorization endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, the endpoint's presence
+	 * is the only thing we currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3.2, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5.1, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the token output parameter.
+ * Otherwise, the token parameter remains unchanged, and the caller needs to
+ * wait for another interval (which may have been increased in response to a
+ * slow_down message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * threadsafe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
+								"\tCurl initialization was reported threadsafe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..cc53e2bdd1a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1141 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
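For reference, the error challenge parsed here is a JSON object along these lines (illustrative values; the `status`, `scope`, and `openid-configuration` members are the fields this parser consumes, per RFC 7628 Sec. 3.2.2 and the callbacks above):

```json
{
  "status": "invalid_token",
  "scope": "openid profile",
  "openid-configuration": "https://example.com/.well-known/openid-configuration"
}
```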
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..5f8d608261e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth 2.0 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 1a5a223e1af..4180e35f8cf 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d4..0c2ccc75a63 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations; most run end-to-end, exercising both sides at once. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
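For orientation, the mock authorization server mentioned in the README speaks the RFC 8628 Device Authorization flow. A minimal Python sketch of the kind of device-authorization payload it serves follows; the field names come from RFC 8628, and the `user_code`/`verification_uri` values match what these tests expect, but the `device_code`/`expires_in` values are hypothetical and the real t/oauth_server.py is not shown in this excerpt.

```python
import json

# Illustrative RFC 8628 device authorization response, not the actual
# t/oauth_server.py implementation.
device_authz = {
    "device_code": "postgres-device-code",  # hypothetical value
    "user_code": "postgresuser",            # value the tests look for
    "verification_uri": "https://example.com/",
    "expires_in": 300,                      # hypothetical value
    "interval": 5,                          # default polling interval
}

body = json.dumps(device_authz)
```

libpq polls the token endpoint at the advertised `interval` until the user finishes the browser step.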
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module providing a server-side OAuth token validation callback
+ *	  that always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..12fe70c990b
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,264 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..80f52585896
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,551 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
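The connstr() helper above smuggles test parameters to the mock server by JSON-encoding them and then base64-encoding the result (encode_base64 with an empty line separator produces unwrapped output). The equivalent encoding step in Python, for reference:

```python
import base64
import json

def encode_magic_client_id(params):
    # JSON-encode the magic parameters, then base64 without line breaks,
    # mirroring the Perl helper. The "oauth_client_id=" connection-string
    # prefix is elided here.
    payload = json.dumps(params)
    return base64.b64encode(payload.encode("ascii")).decode("ascii")

encoded = encode_magic_client_id({"stage": "token", "retries": 1})
decoded = json.loads(base64.b64decode(encoded))
```

The mock server decodes the client_id the same way to recover its instructions.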
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
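The overflow test above exercises RFC 8628 section 3.5 slow_down handling: each slow_down error tells the client to add five seconds to its polling interval, so a server-supplied interval near the integer maximum must be rejected rather than allowed to wrap. A hypothetical sketch of such a guard (names are illustrative, not libpq's):

```python
UINT32_MAX = 2**32 - 1

def next_poll_interval(interval, error=None):
    # RFC 8628 sec. 3.5: a slow_down error means "increase the polling
    # interval by 5 seconds". Refuse to wrap around instead of polling
    # with a nonsense interval.
    if error == "slow_down":
        if interval > UINT32_MAX - 5:
            raise OverflowError("slow_down interval overflow")
        interval += 5
    return interval
```

With interval => ~0 in the connection string, the very first slow_down pushes the client past the maximum, matching the expected error.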
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..95cccf90dd8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is a glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth::Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..8ec09102027
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
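The two discovery styles distinguished above differ in where the well-known segment goes: OIDC Discovery appends it after the issuer, while RFC 8414 inserts it before the issuer's path component. A minimal sketch of the two constructions (the helper name is illustrative, not part of the patch):

```python
def discovery_paths(issuer_path: str) -> dict[str, str]:
    """Given the path component of an issuer URI, build both styles of
    discovery document path. Helper name and shape are illustrative."""
    base = "/.well-known/oauth-authorization-server"
    return {
        # OIDC Discovery: suffix appended to the issuer
        "oidc": issuer_path.rstrip("/") + "/.well-known/openid-configuration",
        # RFC 8414: well-known segment inserted before the issuer's path
        "rfc8414": base if issuer_path in ("", "/") else base + issuer_path,
    }
```

For the mock server's `/alternate` issuer, this yields exactly the path that `_check_issuer()` matches before stripping the suffix.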
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
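The credentials checked above follow RFC 6749 Section 2.3.1: each of client_id and client_secret is form-urlencoded before the pair is joined with a colon and Base64-encoded. A client-side sketch of the encoding the server expects (the function name is illustrative, not part of the patch):

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, client_secret: str) -> str:
    # RFC 6749 section 2.3.1: form-urlencode each credential before
    # joining them with a colon and Base64-encoding the result.
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(client_secret)
    creds = f"{user}:{password}".encode("ascii")
    return "Basic " + base64.b64encode(creds).decode("ascii")
```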
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
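As the comment in do_POST() notes, the parameterized tests smuggle their settings to the server inside the client_id field, as Base64-encoded JSON. A sketch of the encoding side, mirroring the b64decode()/json.loads() calls above (the helper name is illustrative; the keys match those consumed by `_get_param()`):

```python
import base64
import json

def encode_test_client_id(**params: object) -> str:
    # The mock server decodes client_id with b64decode() and json.loads(),
    # so a test encodes its desired behavior the same way in reverse.
    js = json.dumps(params).encode("utf-8")
    return base64.b64encode(js).decode("ascii")

client_id = encode_test_client_id(stage="token", error_code="invalid_grant")
```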
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
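The retry bookkeeping above enforces the client side of RFC 8628 Section 3.5: a device-flow client must wait at least `interval` seconds between token requests, keep polling on `authorization_pending`, and back off on `slow_down`. A minimal sketch of the loop being exercised (names are illustrative; `request_token` stands in for an HTTP POST to the token endpoint):

```python
import time

def poll_for_token(request_token, interval=5, max_tries=30):
    # Minimal RFC 8628 section 3.5 polling loop. request_token() is a
    # stand-in for a POST to the token endpoint, returning parsed JSON.
    for _ in range(max_tries):
        resp = request_token()
        if "access_token" in resp:
            return resp["access_token"]
        error = resp.get("error")
        if error == "slow_down":
            interval += 5  # mandated backoff
        elif error != "authorization_pending":
            raise RuntimeError(f"token request failed: {error}")
        time.sleep(interval)
    raise TimeoutError("authorization timed out")
```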
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..bf94f091def
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2644a2e653..f5e29b2cc90 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3089,6 +3097,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3483,6 +3493,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

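For anyone reviewing the SASL traffic without running the suite: the OAUTHBEARER initial client response that these patches exercise (RFC 7628) is just a GS2 header followed by \x01-separated key/value pairs, terminated by an empty pair. A minimal, illustrative Python sketch (the function names here are mine, not from the patch):

```python
# Sketch of the RFC 7628 OAUTHBEARER initial client response, as parsed
# by get_auth_value() in the pytest suite: GS2 header, then
# \x01-separated key=value pairs, ending with an empty pair.
def build_initial_response(token):
    """Build the client's initial SASL response for OAUTHBEARER."""
    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"

def parse_auth_value(initial):
    """Extract the auth value ("Bearer ...") from an initial response."""
    kvpairs = initial.split(b"\x01")
    assert kvpairs[0] == b"n,,"        # no channel binding or authzid
    assert kvpairs[-2:] == [b"", b""]  # terminated by an empty kvpair
    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value

print(parse_auth_value(build_initial_response(b"abc123")))  # b'Bearer abc123'
```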
Attachment: v44-0005-XXX-fix-libcurl-link-error.patch (application/octet-stream)
From 035a3832b4035811c80533a0b6d3011b56aa48a4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v44 5/6] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But FreeBSD 13.3 is EOL, so it's not clear whether anyone would be
interested in a bug report, and a FreeBSD 14 Cirrus image is in progress.
Hack past it for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 8c518c317e7..97bb38c72c6 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -165,6 +165,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

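A note on the SCRAM helpers added by the pytest suite in the next patch: the `h_i()` function there implements Hi(str, salt, i) from RFC 5802 Section 2.2, which is exactly PBKDF2-HMAC-SHA256 with a single 32-byte output block. A standalone cross-check against the stdlib (illustrative only):

```python
import hashlib
import hmac

def h_i(data, salt, i):
    """Hi(str, salt, i) from RFC 5802 Section 2.2 (SCRAM key derivation)."""
    # U1 = HMAC(str, salt + INT(1)); Ui = HMAC(str, U(i-1)); Hi = U1 ^ ... ^ Ui
    acc = last = hmac.new(data, salt + b"\x00\x00\x00\x01", hashlib.sha256).digest()
    for _ in range(i - 1):
        last = hmac.new(data, last, hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, last))
    return acc

# Hi() coincides with PBKDF2-HMAC-SHA256 at the default 32-byte output:
assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac("sha256", b"secret", b"12345", 2)
```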
Attachment: v44-0006-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 5e360725bf97f53d70ab83d4d9abf07d3f0d74da Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v44 6/6] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2659 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6440 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 97bb38c72c6..a6fab60bfd8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -318,6 +318,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -402,8 +403,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 3b35f1f0c9e..3ee040a90ff 100644
--- a/meson.build
+++ b/meson.build
@@ -3408,6 +3408,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3576,6 +3579,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..236057cd99e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..20e72a404aa
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..ea1aeaed487
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2659 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
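+    # For reference, the initial client response being parsed here looks like
+    # this on the wire (RFC 7628, Sec. 4.1), with ^A standing for the \x01
+    # separator:
+    #
+    #     n,,^Aauth=Bearer <token>^A^A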
+    kvpairs = initial.split(b"\x01")
+    assert len(kvpairs) == 4  # check the length first for a clearer failure
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+
+    key, value = kvpairs[1].split(b"=", 1)  # maxsplit=1: values may contain '='
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    The client is expected to complete the entire handshake, including the
+    dummy response required by RFC 7628, before the server fails the exchange
+    with a FATAL error.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not stop within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket.close()  # discard the unused socket TCPServer created
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # default return value if no test impl is set
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
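For readers unfamiliar with the wire format being driven here: the AuthnRequest sent above corresponds to the protocol's AuthenticationSASL message ('R', code 10, followed by a NUL-terminated list of mechanism names). A rough standalone sketch of that framing, independent of the pq3 helpers (the function name is ours):

```python
import struct

def authentication_sasl(*mechanisms):
    # AuthenticationSASL: Byte1('R'), Int32 length (including itself),
    # Int32(10), then each mechanism name NUL-terminated, with a final
    # NUL terminating the list -- the [b"OAUTHBEARER", b""] body above.
    body = struct.pack("!i", 10)
    for mech in mechanisms:
        body += mech.encode("ascii") + b"\0"
    body += b"\0"
    return b"R" + struct.pack("!i", len(body) + 4) + body
```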
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error once the
+    # server asks for OAUTHBEARER and the client realizes it doesn't have
+    # enough information to start a flow.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
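For reference, the Basic credentials that check_client_authn() expects are built by form-urlencoding each credential before joining with ":" and base64-encoding the pair, per RFC 6749 Sec. 2.3.1. A minimal sketch of the client side of that encoding (the helper name is ours):

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, secret: str) -> str:
    # RFC 6749 Sec. 2.3.1: client_id and secret are each
    # application/x-www-form-urlencoded before being joined with ":"
    # and base64-encoded into the Authorization header value.
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(secret)
    creds = base64.b64encode(f"{user}:{password}".encode("utf-8"))
    return "Basic " + creds.decode("ascii")
```

Because every reserved character is percent-encoded first, the server must urldecode after splitting on the first colon, which is exactly what the test's quote_plus() comparison accounts for.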
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
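The interval bookkeeping above follows RFC 8628 Sec. 3.5: when the device authorization response omits "interval" the client polls every 5 seconds, and each slow_down error adds 5 seconds to the current interval. A sketch of that policy (the helper name is ours, not libpq's):

```python
def next_poll_interval(interval, error=None):
    # RFC 8628 Sec. 3.5: default to 5 seconds when the device
    # authorization response omitted "interval"...
    if interval is None:
        interval = 5
    # ...and add 5 seconds on every slow_down error response.
    if error == "slow_down":
        interval += 5
    return interval
```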
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
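The fixture above supplies the classic self-pipe pattern: a worker writes one byte to wake a poller blocked on the read end. A self-contained illustration of the mechanism (names are ours):

```python
import os
import select
import threading

def self_pipe_wakeup_demo(delay=0.05, timeout=5.0):
    # A timer thread writes a single byte; the main thread blocks in
    # select() on the read end until that byte arrives.
    readfd, writefd = os.pipe()
    try:
        threading.Timer(delay, os.write, args=(writefd, b"\0")).start()
        readable, _, _ = select.select([readfd], [], [], timeout)
        if readfd not in readable:
            return False
        os.read(readfd, 1)  # drain the wakeup byte
        return True
    finally:
        os.close(readfd)
        os.close(writefd)
```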
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into a single pattern. Not the most
+    efficient representation, but easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
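As a quick illustration of the helper above (the function is copied here so the snippet runs standalone):

```python
import re

def alt_patterns(*patterns):
    # Copy of the helper above, for a self-contained example.
    pat = ""
    for p in patterns:
        if pat:
            pat += "|"
        pat += f"({p})"
    return pat

combined = alt_patterns(r"cat\d+", "dog")  # (cat\d+)|(dog)
```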
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema() tests below
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
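The assertion that the failure_mode response is sent exactly once reflects RFC 8628 Sec. 3.5: only two token-endpoint error codes tell the client to keep polling; every other code must end the flow. A sketch of that distinction (the helper name is ours):

```python
def token_error_is_retryable(error: str) -> bool:
    # RFC 8628 Sec. 3.5: "authorization_pending" and "slow_down" mean the
    # user hasn't finished authorizing yet -- keep polling. Any other error
    # code (access_denied, expired_token, ...) is fatal to the flow.
    return error in ("authorization_pending", "slow_down")
```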
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
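
[Editor's note, not part of the patch: the dummy-response assertion above comes
from RFC 7628's failure flow. A minimal sketch of the pieces involved — the
server's JSON "error challenge" and the client's mandatory one-byte reply — might
look like this; `make_error_challenge` is a hypothetical helper for illustration.]

```python
import json

# On authentication failure, the OAUTHBEARER server sends a JSON error
# "challenge". The client must then answer with a single 0x01 (Ctrl-A) byte
# before the server terminates the exchange.
def make_error_challenge(status, discovery_uri=None):
    body = {"status": status}
    if discovery_uri is not None:
        body["openid-configuration"] = discovery_uri
    return json.dumps(body).encode("utf-8")

DUMMY_KVSEP_RESPONSE = b"\x01"  # the client's required dummy reply

challenge = make_error_challenge("invalid_token")
print(json.loads(challenge)["status"])  # -> invalid_token
```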
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
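
[Editor's note: the `ctypes` expression above is a portable way to recover C's
INT_MAX from Python without hard-coding a width; a standalone sketch of why it
works, assuming the usual platform where int and unsigned int share a width:]

```python
import ctypes

# ctypes.c_uint(-1) wraps around to UINT_MAX; integer-dividing by two then
# yields INT_MAX wherever int and unsigned int have the same width.
int_max = ctypes.c_uint(-1).value // 2
print(int_max)  # 2147483647 on platforms with a 32-bit int
```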
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
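
[Editor's note: the nested `to_http` helper above relies only on the standard
library's URL handling; extracted as a self-contained sketch, it behaves like so:]

```python
import urllib.parse

def to_http(uri):
    """Swaps out a URI's scheme for http, leaving everything else intact."""
    parts = urllib.parse.urlparse(uri)
    return urllib.parse.urlunparse(parts._replace(scheme="http"))

print(to_http("https://example.org/.well-known/openid-configuration"))
# -> http://example.org/.well-known/openid-configuration
```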
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
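
[Editor's note: for readers unfamiliar with the wire format, the packing above
reproduces how the startup packet encodes the protocol version in one 32-bit
field — major version in the high 16 bits, minor in the low 16. A quick check:]

```python
def protocol(major, minor):
    # Pack major/minor into a single 32-bit version field: major in the high
    # 16 bits, minor in the low 16 bits.
    return (major << 16) | minor

print(protocol(3, 0))  # -> 196608, i.e. 0x00030000
```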
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
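For readers unfamiliar with the framing that `SASLInitialResponse` parses: the mechanism name is NUL-terminated, followed by a signed int32 data length and the raw initial-response data (for OAUTHBEARER, the RFC 7628 gs2 header plus ^A-delimited key/value pairs). A hedged, stdlib-only sketch of the same shape:

```python
import struct

def build_sasl_initial(mech, data):
    """Mechanism name NUL-terminated, big-endian int32 data length, raw data
    (mirrors the SASLInitialResponse layout; -1 would mean "no data")."""
    return mech + b"\x00" + struct.pack("!i", len(data)) + data

# OAUTHBEARER initial response per RFC 7628: gs2 header "n,,", then
# \x01auth=<scheme> <token>\x01\x01.
auth = b"n,,\x01auth=Bearer some-token\x01\x01"
msg = build_sasl_initial(b"OAUTHBEARER", auth)

name, rest = msg.split(b"\x00", 1)
(length,) = struct.unpack("!i", rest[:4])
assert name == b"OAUTHBEARER"
assert length == len(auth) and rest[4:] == auth
```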
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
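The `-1` sentinel returned by `_data_len` and checked by `_column` follows the DataRow convention for NULL columns: a length of -1 with no column bytes at all. A minimal hand parser (illustrative names) makes the rule concrete:

```python
import struct

def parse_column(buf, off=0):
    """Parse one DataRow column: int32 length (-1 means NULL), then bytes.
    Returns (value, next_offset); value is None for a NULL column."""
    (n,) = struct.unpack_from("!i", buf, off)
    if n == -1:
        return None, off + 4
    return buf[off + 4 : off + 4 + n], off + 4 + n

buf = struct.pack("!i", 5) + b"hello" + struct.pack("!i", -1)
col1, off = parse_column(buf)
col2, off = parse_column(buf, off)
assert col1 == b"hello" and col2 is None and off == len(buf)
```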
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    Builds a translation map for hexdumps, converting any unprintable or
+    non-ASCII byte into '.'.
+    """
+    src = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            src += bytes([i])
+
+    src += bytes(range(128, 256))
+
+    return bytes.maketrans(src, b"." * len(src))
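The `bytes.maketrans`/`translate` trick used above can be exercised in isolation; this standalone sketch builds the same kind of map and shows its effect on mixed printable/unprintable input:

```python
def hexdump_map():
    """Map every unprintable or non-ASCII byte to b'.', leaving the
    printable ASCII range untouched (same idea as the fixture above)."""
    src = bytes(i for i in range(256) if i >= 128 or not chr(i).isprintable())
    return bytes.maketrans(src, b"." * len(src))

table = hexdump_map()
# NUL and 0xFF become dots; letters and punctuation pass through.
assert b"Hi\x00\xff!".translate(table) == b"Hi..!"
```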
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
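The `recv.__get__(sfile)` line in `wrap()` uses the descriptor protocol to bind a plain function to a single instance, so it can be called like a normal method. A self-contained illustration of the same pattern (names here are invented for the demo):

```python
class Box:
    pass

def shout(self):
    # `self` is supplied by the bound method, just like a normal method.
    return "hi from " + type(self).__name__

b = Box()
# Bind the free function to this one instance, as wrap() does with recv():
b.shout = shout.__get__(b)
assert b.shout() == "hi from Box"
```

This attaches the method to the instance only; the class itself is untouched, which is exactly what `wrap()` wants for its per-connection `recv`.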
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSL")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
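The `protocol(1234, 5679)` code sent by `tls_handshake()` relies on the `protocol()` helper defined earlier in pq3.py (outside this hunk); assuming it packs `(major << 16) | minor`, as the v3 default `3 << 16` suggests, the special negotiation codes work out as follows:

```python
def protocol(major, minor):
    # Assumed shape of the pq3.protocol() helper: pack the major/minor pair
    # into the single int32 protocol field of a startup packet.
    return (major << 16) | minor

# The SSLRequest code used by tls_handshake() above:
assert protocol(1234, 5679) == 80877103
# A regular protocol 3.0 startup:
assert protocol(3, 0) == 196608
```

The 1234 "major version" deliberately collides with no real protocol version, so a server can distinguish these special requests from an ordinary startup.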
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
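The backup-and-restore technique in `prepend_file()` is easy to verify on its own. This standalone sketch restates the context manager (copied here so the demo is self-contained) and exercises it against a temporary file:

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def prepend_file(path, lines, *, suffix=".bak"):
    # Same technique as the fixture above: back up the file, rewrite it with
    # the new lines first, and restore the backup on exit.
    bak = path + suffix
    shutil.copy2(path, bak)
    try:
        with open(path, "w") as new, open(bak, "r") as orig:
            new.writelines(lines)
            shutil.copyfileobj(orig, new)
        yield
    finally:
        os.replace(bak, path)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "conf")
    with open(p, "w") as f:
        f.write("original\n")
    with prepend_file(p, ["added\n"]):
        with open(p) as f:
            assert f.read() == "added\noriginal\n"
    with open(p) as f:
        assert f.read() == "original\n"  # restored on exit
```

Note that `writelines()` does not add newlines, so every prepended line must carry its own `\n` or it will run into the first line of the original file.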
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1" + "\n"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
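The size arithmetic in `bearer_token()` works because base64url encodes every 3 input bytes as exactly 4 output characters, so `size // 4 * 3` random bytes yield exactly `size` characters, provided `size` is a multiple of 4. Restated as a standalone sketch:

```python
import secrets

def bearer_token(*, size=16):
    # base64url maps 3 bytes -> 4 chars, so size // 4 * 3 random bytes
    # produce exactly `size` characters of token.
    if size % 4:
        raise ValueError(f"requested token size {size} is not a multiple of 4")
    token = secrets.token_urlsafe(size // 4 * 3)
    assert len(token) == size
    return token

for n in (4, 16, 64):
    assert len(bearer_token(size=n)) == n
```

(If the byte count were not a multiple of 3, `token_urlsafe()` would emit a padded-length string and the equality would no longer hold, hence the multiple-of-4 restriction.)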
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if (bearer is None) == (auth is None):
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
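(For reference, the kvsep-framed message built above follows the RFC 7628 client initial response layout: a GS2 header, then \x01-separated key=value pairs, double-\x01 terminated. A standalone sketch; the helper name is ours, not part of the patch:)

```python
KVSEP = b"\x01"

def build_initial_response(token: bytes) -> bytes:
    # GS2 header "n,," means no channel binding and an empty authzid;
    # the auth key carries the HTTP-style Authorization value.
    return b"n,," + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP

assert build_initial_response(b"abcd") == b"n,,\x01auth=Bearer abcd\x01\x01"
```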
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
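(For reference, the JSON body validated above is the RFC 7628 error "challenge" sent before a SASL failure. A standalone sketch of its shape, with made-up issuer and scope values:)

```python
import json

# The server reports the error status plus the scope it requires and
# a pointer to the issuer's OpenID discovery document. The URL and
# scope below are illustrative only.
challenge = json.dumps(
    {
        "status": "invalid_token",
        "scope": "openid email",
        "openid-configuration": "https://issuer.example.org"
        "/.well-known/openid-configuration",
    }
)

body = json.loads(challenge)
assert body["status"] == "invalid_token"
assert body["openid-configuration"].endswith("/.well-known/openid-configuration")
```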
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
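(For reference, _getfield() above leans on the v3-protocol ErrorResponse field layout: each field is a one-byte type code, "C" for the SQLSTATE, "M" for the message, "D" for the detail, followed by its value. A standalone sketch with made-up field values:)

```python
# Two fields as they might appear in a failed OAUTHBEARER handshake;
# 28000 is the invalid_authorization_specification SQLSTATE class.
fields = [b"C28000", b"Mbearer authentication failed"]

def getfield(fields, code):
    prefix = code.encode("ascii")
    matches = [f for f in fields if f.startswith(prefix)]
    assert len(matches) == 1
    return matches[0][1:]  # strip off the type byte

assert getfield(fields, "C") == b"28000"
assert getfield(fields, "M") == b"bearer authentication failed"
```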
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
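The expected strings above encode `_DebugStream`'s dump layout: a four-hex-digit offset, a hex column padded to the full 16-byte width, and a printable-ASCII column, all tab-separated. As a rough sketch of that layout only (not the actual `_DebugStream` code), one dump line can be produced like this:

```python
def hexdump_line(offset, data, width=16):
    """Format one dump line: hex offset, padded hex column, ASCII gloss."""
    hex_part = " ".join(f"{b:02x}" for b in data).ljust(width * 3 - 1)
    text = "".join(chr(b) if 32 <= b < 127 else "." for b in data)
    return f"{offset:04x}:\t{hex_part}\t{text}"
```

For example, `hexdump_line(0x10, b"qrstu")` reproduces the second expected line in `test_DebugStream_read`, minus the `< ` direction marker.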
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
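The startup packets built above all share one simple frame: a big-endian int32 total length (which counts itself), a four-byte protocol word, then the payload. A minimal hand-rolled builder, shown only to illustrate the framing the test vectors encode (pq3's real builder also handles parameter lists and implied fields):

```python
import struct

def build_startup(proto, payload=b""):
    # The total length counts the length word (4) and protocol word (4) too.
    return struct.pack("!ii", 8 + len(payload), proto) + payload
```

For instance, `build_startup(0x12345678, b"abcd")` reproduces the "implied len with payload" vector above.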
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
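For reference, the SASLInitialResponse frame being round-tripped here is: the mechanism name NUL-terminated, a big-endian int32 data length (-1 meaning "no initial response"), then the data. A hand-rolled sketch of the well-formed cases only (ignoring the deliberate overflow/underflow overrides exercised above):

```python
import struct

def build_sasl_initial_response(name, data=None):
    # A length of -1 signals the absence of an initial response.
    if data is None:
        return name + b"\x00" + struct.pack("!i", -1)
    return name + b"\x00" + struct.pack("!i", len(data)) + data
```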
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
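The packing rule these vectors imply is the usual PostgreSQL one: the major version in the high 16 bits and the minor in the low 16 (so the SSLRequest magic `(1234, 5679)` becomes `0x04D2162F`). A one-line reimplementation, assuming that is what `pq3.protocol` does:

```python
import struct

def protocol(major, minor):
    # Major version in the high 16 bits, minor version in the low 16.
    return (major << 16) | minor

assert struct.pack("!i", protocol(3, 0)) == b"\x00\x03\x00\x00"
```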
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
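The `Plaintext` struct above mirrors the fixed TLS record header: one content-type byte, a two-byte legacy version, a two-byte length, then `length` bytes of fragment. For readers without the construct library at hand, the equivalent stdlib-only parse looks roughly like this:

```python
import struct

def parse_plaintext(buf):
    # TLSPlaintext: type (1) | legacy_record_version (2) | length (2) | fragment
    ctype, version, length = struct.unpack_from("!BHH", buf)
    return ctype, version, buf[5:5 + length]
```

Parsing `b"\x16\x03\x01\x00\x02\x01\x00"`, for example, yields content type 22 (handshake) and a two-byte fragment.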
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
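The skip logic added to testwrap above boils down to a small predicate: skip unless PG_TEST_EXTRA (falling back to the build-time `--pg-test-extra` value) lists the required keyword among its space-separated entries. As an illustration of that behavior only (the `should_skip` name is hypothetical, not part of testwrap):

```python
import os

def should_skip(required, pg_test_extra_default=None):
    # Skip unless PG_TEST_EXTRA (or the build-time default) contains the
    # required keyword as one of its space-separated entries.
    extras = os.environ.get("PG_TEST_EXTRA", pg_test_extra_default)
    return extras is None or required not in extras.split()
```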
-- 
2.34.1

#197 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#196)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

> On 28 Jan 2025, at 01:59, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
>> On Mon, Jan 27, 2025 at 2:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:
>>
>> Unless there are objections I aim at committing these patches reasonably soon
>> to lower the barrier for getting OAuth support committed.

After staring at the patchset even more, I committed patches 0001 and 0002 today
as a preparatory step for getting OAuth in. I will work on 0003 (which is
now 0001) next.

Attached is v45, which is v44 minus the now-committed patches, to keep the
CFBot happy.

--
Daniel Gustafsson

Attachments:

v45-0004-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream, mode 0644)
From e587019c58f2f571ccbb69bcfbb9ead13ecf4ca3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v45 4/4] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2659 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6440 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 97bb38c72c..a6fab60bfd 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -318,6 +318,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -402,8 +403,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against the 32-bit libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 3b35f1f0c9..3ee040a90f 100644
--- a/meson.build
+++ b/meson.build
@@ -3408,6 +3408,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3576,6 +3579,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86..236057cd99 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..20e72a404a
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..ea1aeaed48
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2659 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)  # maxsplit=1: the value may contain "="
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    The client is expected to complete the entire failure handshake, including
+    the dummy response required by RFC 7628, rather than disconnecting as soon
+    as it receives the error response.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
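The strict `parse_qs()` configuration in `do_POST` above pins down exactly which form encodings the stub server accepts. A standalone sketch of the same call, with an illustrative body (not taken from the tests):

```python
import urllib.parse

# strict_parsing raises ValueError on malformed pairs instead of
# silently dropping them; keep_blank_values preserves "empty=".
# '+' decodes to a space, matching the encoding the tests require.
params = urllib.parse.parse_qs(
    "scope=openid+email&client_id=abc&empty=",
    keep_blank_values=True,
    strict_parsing=True,
)
```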
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # Disable Nagle's algorithm: the small TLS records sent here interact
+    # badly with the client's delayed ACKs. (Without TCP_NODELAY, test
+    # performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
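The two-step `_fields_` assignment above is the standard ctypes idiom for self-referential structures: the `CFUNCTYPE` signatures need `ctypes.POINTER(PGOAuthBearerRequest)`, which only exists once the class statement has finished executing. A minimal illustration of the same idiom (the `Node` type is ours, not part of the suite):

```python
import ctypes

class Node(ctypes.Structure):
    pass

# _fields_ must be assigned after the class exists, so that
# POINTER(Node) can refer back to the type being defined.
Node._fields_ = [
    ("value", ctypes.c_int),
    ("next", ctypes.POINTER(Node)),
]

tail = Node(2, None)
head = Node(1, ctypes.pointer(tail))
```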
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # value returned when the test provides no impl callback
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, send
+            # authorization_pending responses until we've used up the
+            # requested number of retries.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
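The issuer/path permutations parametrized below follow two published construction rules for discovery URLs: OIDC Discovery appends the well-known suffix after the issuer's path, while RFC 8414 inserts it between the authority and the path. A standalone sketch of both constructions (the helper name is ours, not part of the suite):

```python
from urllib.parse import urlparse

def well_known_uris(issuer, suffix=".well-known/openid-configuration"):
    parts = urlparse(issuer)
    base = f"{parts.scheme}://{parts.netloc}"
    path = parts.path.rstrip("/")
    # OIDC style: suffix appended after the issuer's path.
    oidc = f"{base}{path}/{suffix}"
    # IETF (RFC 8414) style: suffix inserted before the issuer's path.
    ietf = f"{base}/{suffix}{path}"
    return oidc, ietf
```

For an issuer of `https://example.org/alt`, this yields the "OIDC style" and "IETF style" paths exercised by the parametrization.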
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and increment the attempt count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't be called
+                # back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the *_bad_json_schema() tests
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq tries to actually attempt
+# a connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
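For readers following along, the validation rules these cases probe can be restated as a small standalone checker. This is a hypothetical sketch, not code from the patch, and the error strings are abbreviated relative to libpq's:

```python
import json

def validate_discovery_document(raw):
    """Validate a raw OpenID discovery response body (illustrative only)."""
    if b"\x00" in raw:
        raise ValueError("response contains embedded NULLs")
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        raise ValueError("response is not valid UTF-8")

    doc = json.loads(text)  # raises on truncated/malformed JSON
    if not isinstance(doc, dict):
        raise ValueError("top-level element must be an object")

    # Both of these fields are required and must be strings.
    for field in ("issuer", "token_endpoint"):
        if field not in doc:
            raise ValueError('field "%s" is missing' % field)
        if not isinstance(doc[field], str):
            raise ValueError('field "%s" must be a string' % field)

    # grant_types_supported is optional, but if present it must be an
    # array containing only strings.
    grants = doc.get("grant_types_supported", [])
    if not (isinstance(grants, list) and all(isinstance(g, str) for g in grants)):
        raise ValueError('field "grant_types_supported" must be an array of strings')

    return doc
```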
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
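The failure flow being driven here, per RFC 7628 §3.2.2, is: the server wraps a JSON error body in a SASLContinue challenge, the client must answer with a single ^A byte, and only then does the exchange end. A minimal sketch of the server's side of that framing (illustrative only; function names are not from the patch):

```python
import json

def build_error_challenge(status, discovery_uri=None):
    """The server's JSON error body, sent inside AuthenticationSASLContinue."""
    err = {"status": status}
    if discovery_uri is not None:
        # Optional pointer to the server's OpenID discovery document.
        err["openid-configuration"] = discovery_uri
    return json.dumps(err).encode("utf-8")

# Per RFC 7628, a client that receives an error challenge must reply with a
# single %x01 (control-A) byte before the server fails the exchange.
DUMMY_CLIENT_RESPONSE = b"\x01"
```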
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (should be equivalent to the INT_MAX in limits.h)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
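The overflow this test provokes can be sketched standalone. The five-second step comes from RFC 8628 §3.5 (slow_down adds five seconds to the polling interval); the function name is an assumption, since the real check lives in libpq's C code:

```python
INT_MAX = 2**31 - 1  # limits.h INT_MAX for a 32-bit int

def next_interval(current):
    """Return the polling interval after a slow_down error (sketch)."""
    # Guard the addition so current + 5 cannot exceed INT_MAX.
    if current > INT_MAX - 5:
        raise OverflowError("slow_down interval overflow")
    return current + 5
```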
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
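The rule under test is simple to state: outside of debug mode, the discovery URI's scheme must be https. A standalone sketch (the function name is illustrative, not libpq's):

```python
import os
import urllib.parse

def check_discovery_scheme(uri, debug=None):
    """Refuse non-HTTPS discovery URIs unless debug mode is enabled."""
    if debug is None:
        debug = bool(os.environ.get("PGOAUTHDEBUG"))
    scheme = urllib.parse.urlparse(uri).scheme
    if scheme != "https" and not debug:
        raise ValueError('OAuth discovery URI "%s" must use HTTPS' % uri)
```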
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..e137df852e
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
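As a quick worked example (the helper is restated here so the check is self-contained): the major version occupies the high 16 bits, so version 3.0 packs to the familiar startup constant 196608:

```python
def protocol(major, minor):
    # Major version in the high 16 bits, minor in the low 16.
    return (major << 16) | minor

# Protocol 3.0 is 0x00030000, i.e. the well-known value 196608.
assert protocol(3, 0) == 0x00030000 == 196608
```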
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
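Standalone, the wire form this adapter produces looks like the following (an illustrative re-implementation, not used by the tests): each key and value is NUL-terminated, and the whole list ends with one extra NUL.

```python
def encode_keyvalues(params):
    """Encode a dict as the startup packet's key/value list (sketch)."""
    out = b""
    for k, v in params.items():
        out += k.encode("utf-8") + b"\x00"
        out += v.encode("utf-8") + b"\x00"
    return out + b"\x00"  # empty string terminates the list
```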
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
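For clarity, the same layout can be built by hand with the stdlib (illustrative only): the mechanism name, a NUL, a signed 32-bit big-endian length, then the initial response data, with length -1 meaning "no data".

```python
import struct

def sasl_initial_response(name, data=None):
    """Build a SASLInitialResponse body by hand (sketch)."""
    if data is None:
        # No initial response: length field is -1 and no data follows.
        return name + b"\x00" + struct.pack("!i", -1)
    return name + b"\x00" + struct.pack("!i", len(data)) + data
```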
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
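In isolation, the resulting map behaves like classic hexdump output: printable ASCII passes through and everything else becomes '.'. A standalone restatement for illustration:

```python
def hexdump_charmap():
    """Translation map sending unprintable/non-ASCII bytes to '.' (sketch)."""
    unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(unprintable, b"." * len(unprintable))
```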
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Request TLS via an SSLRequest packet (special protocol version 1234.5679).
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls_conn = _TLSStream(stream, context)
+    tls_conn.handshake()
+
+    if debugging:
+        tls_conn = _DebugStream(tls_conn, stream._out)
+
+    try:
+        yield tls_conn
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls_conn.flush_debug(prefix="? ")
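A reviewer's note on the file above: since the suite hand-builds startup packets via Construct, it may help to see the v3 StartupMessage framing that send_startup() produces, reduced to plain struct calls. This is an illustrative sketch only (build_startup_packet is a made-up name, not part of the patch):

```python
import struct

def build_startup_packet(params):
    # A v3 StartupMessage is: int32 total length (self-inclusive), int32
    # protocol version (3.0 encoded as 0x00030000), then NUL-terminated
    # key/value parameter pairs, closed by one final NUL byte.
    body = struct.pack("!i", 3 << 16)
    for k, v in params.items():
        body += k.encode("ascii") + b"\x00" + v.encode("ascii") + b"\x00"
    body += b"\x00"
    return struct.pack("!i", len(body) + 4) + body

pkt = build_startup_packet({"user": "alice", "database": "postgres"})
```

The Startup construct in the patch computes the same self-inclusive length field via its Default(Int32ub, ...) expression.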
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..42af80c73e
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
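One pattern in the connect fixture above is worth a note: the factory registers every resource it creates on the enclosing ExitStack, so all connections opened during a test are closed, in reverse order, when the fixture finalizes. A minimal standalone sketch of that pattern, with io.BytesIO standing in for sockets (make_factory is an illustrative name, not part of the patch):

```python
import contextlib
import io

def make_factory(stack):
    # Each call to the factory creates a resource and immediately hands its
    # cleanup over to the shared ExitStack, mirroring conn_factory() above.
    def factory():
        return stack.enter_context(contextlib.closing(io.BytesIO(b"hi")))
    return factory

with contextlib.ExitStack() as stack:
    factory = make_factory(stack)
    conn = factory()
    data = conn.read()

# Exiting the with-block closed every resource the factory created.
```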
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..415748b9a6
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
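For reviewers who haven't read RFC 7628: the client's initial OAUTHBEARER message, which the test suite below builds by hand in send_initial_response(), is just a GS2 header followed by ^A-delimited key/value pairs. A rough sketch of the framing (oauthbearer_initial_response is an illustrative name, not part of the patch):

```python
def oauthbearer_initial_response(token):
    # RFC 7628 frames the client's first message as a GS2 header ("n,," for
    # no channel binding and no authzid), then key=value pairs, each
    # terminated by a ^A (0x01) byte, with an extra ^A ending the message.
    kvsep = b"\x01"
    return b"n,," + kvsep + b"auth=Bearer " + token + kvsep + kvsep

msg = oauthbearer_initial_response(b"abcd1234")
```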
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..2839343ffa
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the postgres_instance server is running on the
+    local machine, and that the connecting user has rights to create databases
+    and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    """
+    Sends the startup packet for oauth_ctx's database and asserts that the
+    server responds by advertising OAUTHBEARER as its only SASL mechanism.
+    """
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Alternatively to a bearer token, the initial response's auth
+    field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, msg_type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == msg_type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    Any modified settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+
+            c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+
+        c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Fails an assertion if it doesn't
+        find exactly one such field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
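
(Aside for reviewers: every fixture in this test file follows the same v3 framing rule — a one-byte type tag, a big-endian int32 length that counts itself plus the payload but not the type byte, then the payload. A minimal stdlib sketch of that rule, independent of the pq3 helper; `frame` is a name invented here:

```python
import struct

def frame(msg_type: bytes, payload: bytes = b"") -> bytes:
    # v3 wire format: one type byte, then a big-endian int32 length that
    # counts itself and the payload (but not the type byte), then the payload.
    return msg_type + struct.pack("!i", 4 + len(payload)) + payload

# Reproduces the PasswordMessage fixture above:
frame(b"p", b"hunter2")  # b"p\x00\x00\x00\x0Bhunter2"
```

The "implied len" cases in the build tests are pq3 computing that same `4 + len(payload)` on the caller's behalf.)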
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
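
(Aside: tls.py expresses these types declaratively with the construct library, but the record layer itself is simple enough to check by hand. A hedged stdlib sketch of the TLSPlaintext header from RFC 8446 §5.1 — one content-type byte, a two-byte legacy version, a two-byte length, then that many fragment bytes; `parse_plaintext_record` is a name invented here, not part of the patch:

```python
import struct

def parse_plaintext_record(buf: bytes):
    # TLSPlaintext: type (1 byte), legacy_record_version (2 bytes),
    # length (2 bytes, big-endian), then `length` fragment bytes.
    ctype, version, length = struct.unpack("!BHH", buf[:5])
    return ctype, version, buf[5:5 + length]

# A handshake record (type 22) carrying a 5-byte fragment:
ctype, version, fragment = parse_plaintext_record(b"\x16\x03\x01\x00\x05hello")
```

The `Plaintext` struct above encodes exactly this layout via `FixedSized(this.length, GreedyBytes)`.)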
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
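
(Aside: the only platform-dependent part of make_venv is locating executables inside the venv, since Windows uses `Scripts/` where POSIX uses `bin/`. A standalone sketch of just that path logic — `venv_tool` is a name invented here for illustration:

```python
import os
import platform

def venv_tool(venv_path: str, tool: str) -> str:
    # venv layouts differ by OS: executables live in Scripts/ on Windows
    # and bin/ everywhere else.
    bindir = "Scripts" if platform.system() == "Windows" else "bin"
    return os.path.join(venv_path, bindir, tool)
```

make_venv uses this pattern twice, once for `python3` and once for `pip`.)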
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba..ffdf760d79 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
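
(Aside: the new `--skip-without-extra` check treats PG_TEST_EXTRA — the environment value, falling back to the `--pg-test-extra` default — as a space-separated token list. The same decision in isolation; `should_skip` is a name invented here:

```python
def should_skip(required: str, env_extra, default_extra=None) -> bool:
    # Skip unless `required` appears as a whole token in PG_TEST_EXTRA,
    # taking the environment value first and the configured default second.
    extras = env_extra if env_extra is not None else default_extra
    return extras is None or required not in extras.split()
```

Note that `split()` means substrings don't match: `"oauth"` is not satisfied by `"oauth2"` in the list.)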
-- 
2.39.3 (Apple Git-146)

Attachment: v45-0003-XXX-fix-libcurl-link-error.patch (application/octet-stream)
From aa842c3b82a1e7f23275c7faa724187eb201e153 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v45 3/4] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But FreeBSD 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 8c518c317e..97bb38c72c 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -165,6 +165,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.39.3 (Apple Git-146)

Attachment: v45-0002-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From cf667a32c5565f5f916a920299ce1d623c1e6a1e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v45 2/4] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.
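
For reference, the OAUTHBEARER initial client response exchanged by this
mechanism is defined by RFC 7628: a GS2 header, then Ctrl-A-separated
(0x01) key/value pairs, double-terminated. A minimal sketch of the wire
format only — this is not the patch's implementation:

```python
def oauthbearer_initial_response(token: str, authzid: str = "") -> bytes:
    # RFC 7628: client-resp = gs2-header kvsep *kvpair kvsep
    # where each kvpair itself ends in kvsep (0x01).
    gs2 = "n,%s," % ("a=" + authzid if authzid else "")
    kvsep = "\x01"
    return (gs2 + kvsep + "auth=Bearer " + token + kvsep + kvsep).encode("ascii")

oauthbearer_initial_response("abc")  # b"n,,\x01auth=Bearer abc\x01\x01"
```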

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   42 +
 configure                                     |  279 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  393 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   66 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2635 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1141 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  264 ++
 .../modules/oauth_validator/t/001_server.pl   |  551 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 59 files changed, 8598 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 18e944ca89..8c518c317e 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -20,7 +20,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -164,7 +164,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -219,6 +219,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -312,8 +313,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -689,8 +692,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a..86a3750f9e 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,45 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is threadsafe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index ceeef9b091..115a91f8f4 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,123 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index d713360f34..e8f1a7db9d 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85a..f84085dbac 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; one is obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a782f10998..d7bac61a7f 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c06..96e433179b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c20..9a69ffbc5b 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
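Pulling these parameters together, a minimal connection string for an OAuth-enabled server might look like the following sketch (the host and issuer URLs are hypothetical, the client ID is elided, and the scope value depends entirely on the provider):

```
host=db.example.org dbname=mydb
oauth_issuer=https://issuer.example.com
oauth_client_id=f02c6361-0635-...
oauth_scope=openid
```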
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10129,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> includes a built-in OAuth Device
+   Authorization client flow, and provides hooks that allow applications to
+   customize or replace pieces of that flow, as described in the following
+   sections.
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
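For example, a local test environment might be configured like this (the CA file path is hypothetical, and the second variable is only honored while the unsafe mode is active):

```
export PGOAUTHDEBUG=UNSAFE
export PGOAUTHCAFILE=/path/to/test-ca.pem
```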
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10473,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using Curl inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of Curl that are built to support threadsafe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..d0bca9196d
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    A validator module is responsible for three separate duties during each
+    authentication attempt, described in detail below:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct pg_ident maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
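As a sketch, an HBA entry using delegation might look like the following (the issuer URL, scope, and validator library name are hypothetical; check the pg_hba.conf documentation for the authoritative option list):

```
# TYPE  DATABASE  USER  ADDRESS  METHOD
host    all       all   samenet  oauth issuer="https://issuer.example.com" scope="openid" validator=my_validator delegate_ident_mapping=1
```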
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned allocation; the validator
+    module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172..3bd9e68e6c 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The steps below illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its builtin flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
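As a concrete sketch of the message layout described above (not normative; see RFC 7628 for the full grammar), a client initial response can be assembled like this. The helper name is illustrative; an empty token produces the discovery form with an empty <literal>auth</literal> value:

```c
#include <stdio.h>
#include <string.h>

#define KVSEP "\x01"            /* key/value separator byte from RFC 7628 */

/*
 * Build an OAUTHBEARER client initial response: the "n,," GS2 header,
 * a kvsep, the auth key/value pair, and a final empty pair.
 */
static int
build_initial_response(char *buf, size_t len, const char *token)
{
    if (token[0] == '\0')
        return snprintf(buf, len, "n,," KVSEP "auth=" KVSEP KVSEP);

    return snprintf(buf, len, "n,," KVSEP "auth=Bearer %s" KVSEP KVSEP, token);
}
```

The GS2 header here uses the `n` channel-binding flag and omits the optional `authzid`, matching what the server currently accepts.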
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an
+      error <literal>status</literal>, alongside a well-known URI and the
+      scopes that the client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing the required dummy
+      response (a single <literal>0x01</literal> byte) to finish its half of
+      the discovery exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, the server sends
+      an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
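To illustrate the discovery step above, the error challenge carried in the AuthenticationSASLContinue message is a small JSON document. The following standalone sketch shows its shape; the issuer and scope values are placeholders, and a real server additionally JSON-escapes the configured values:

```c
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the failure-response body from RFC 7628, Sec. 3.2.2: the
 * status is "invalid_token", and the client uses the discovery-document
 * URL and scope to conduct its OAuth flow.
 */
static int
build_challenge(char *buf, size_t len, const char *issuer, const char *scope)
{
    return snprintf(buf, len,
                    "{ \"status\": \"invalid_token\", "
                    "\"openid-configuration\": "
                    "\"%s/.well-known/openid-configuration\", "
                    "\"scope\": \"%s\" }",
                    issuer, scope);
}
```

The `/.well-known/openid-configuration` suffix is the default discovery path; a deployment may configure a different well-known URI.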
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bd..0e5e8e8f30 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 8e128f4982..3b35f1f0c9 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,67 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports threadsafe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is threadsafe')
+      elif r.returncode() == 1
+        message('curl_global_init is not threadsafe')
+      else
+        message('curl_global_init failed; assuming not threadsafe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3034,6 +3095,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3704,6 +3769,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc..702c451714 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf..3b620bac5a 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..6155d63a11
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a syntactically valid token to validate. */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc82..0f65014e64 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d..332fad2783 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
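Putting the new HBA options together, a pg_hba.conf line using this method might look like the following. The issuer URL, scope, and validator name here are placeholders for illustration, not values taken from the patch:

```
# Hypothetical example: OAuth authentication with an explicit validator.
# delegate_ident_mapping=1 would skip the usermap (and cannot be combined
# with map=).
host  all  all  samenet  oauth  issuer="https://oauth.example.org"  scope="openid"  validator=my_validator
```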
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e..31aa2faae1 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c45..b62c3d944c 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 38cb9e970d..db582d2d62 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4823,6 +4824,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 079efa1baa..678de38a1c 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de3..25b5742068 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7..3657f182db 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..4fcdda7430
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798ab..c04ee38d08 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 6a0def7273..e9422888e3 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca..9b789cbec0 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..2407200ea9
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2635 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Compare only the prefix, since media type parameters may follow the
+	 * type we're looking for.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
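The prefix-matching rules above can be exercised outside the patch; the sketch below substitutes strncasecmp() for pg_strncasecmp() and drops the libcurl lookup and error reporting (the function name is illustrative, not part of the patch):

```c
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/*
 * Illustrative stand-in for check_content_type(): does content_type match
 * the expected media type, ignoring any trailing parameters?
 */
static bool
content_type_matches(const char *content_type, const char *type)
{
	const size_t type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	if (content_type[type_len] == '\0')
		return true;			/* exact match */

	/* Only optional whitespace and then ';' may follow the media type. */
	for (size_t i = type_len; content_type[i]; ++i)
	{
		switch (content_type[i])
		{
			case ';':
				return true;	/* start of media type parameters */

			case ' ':
			case '\t':
				break;			/* HTTP optional whitespace */

			default:
				return false;
		}
	}

	return false;
}
```

Note that trailing whitespace with no ';' is rejected, mirroring the fall-through to the `fail` label above.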
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). With epoll, rather than
+ * continually adding and removing the timer, we keep it in the set at all
+ * times and just disarm it when it's not needed; with kqueue, the timer event
+ * is added and removed as necessary.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
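The millisecond-to-itimerspec conversion in the epoll path is easy to get wrong by a factor of 1000; as a standalone sketch of just the arithmetic (the helper name is illustrative):

```c
#include <time.h>

/*
 * Illustrative: convert a libcurl timeout in milliseconds to the it_value
 * that set_timer() hands to timerfd_settime(). Zero means "immediately"
 * (shortest possible nonzero arm); negative values leave the struct zeroed,
 * which disarms the timer.
 */
static struct timespec
timeout_to_timespec(long timeout_ms)
{
	struct timespec ts = {0};

	if (timeout_ms == 0)
		ts.tv_nsec = 1;
	else if (timeout_ms > 0)
	{
		ts.tv_sec = timeout_ms / 1000;
		ts.tv_nsec = (timeout_ms % 1000) * 1000000;
	}

	return ts;
}
```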
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled on the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
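The line-splitting loop can be checked in isolation; this sketch (with an illustrative name) just counts the lines that debug_callback() would print for a given buffer, using the same memchr()-driven advance:

```c
#include <stddef.h>
#include <string.h>

/*
 * Illustrative: count the lines debug_callback() would emit for a buffer of
 * the given size. A trailing fragment without '\n' counts as one line.
 */
static int
count_debug_lines(const char *data, size_t size)
{
	const char *const end = data + size;
	int			lines = 0;

	while (data < end)
	{
		size_t		len = end - data;
		const char *eol = memchr(data, '\n', len);

		if (eol)
			len = eol - data + 1;	/* consume through the newline */

		lines++;
		data += len;
	}

	return lines;
}
```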
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback to print debug information from libcurl. It only
+		 * takes effect if CURLOPT_VERBOSE is also set, so keep that order
+		 * below.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of a single chunk
+ * is defined by CURL_MAX_WRITE_SIZE, which is 16kB by default and can only
+ * be changed by recompiling libcurl.
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* If the response would exceed our size limit, abort the transfer. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error to abort the transfer if we ran out of memory while
+	 * appending the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * threadsafe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
+								"\tCurl initialization was reported threadsafe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..cc53e2bdd1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1141 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
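The parser above accepts an error result shaped like the following (RFC 7628, Sec. 3.2.2, plus the openid-configuration member used for discovery); the field values here are illustrative. Only a status of invalid_token lets the client retry with a fresh token; any other status fails the connection immediately:

```json
{
  "status": "invalid_token",
  "openid-configuration": "https://issuer.example.com/.well-known/openid-configuration",
  "scope": "openid postgres"
}
```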
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * We have neither a token nor a discovery URI from which to
+				 * request one, so explicitly ask the server for its discovery
+				 * metadata.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..3259872168
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f..ec7a923604 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f899..de98e0d20c 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864..d5051f5e82 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c..5f8d608261 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a5..f36f7f19d5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 1a5a223e1a..4180e35f8c 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -4,6 +4,7 @@
 # args for executables (which depend on libpq).
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -40,6 +41,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a4..60e13d5023 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6..4ce22ccbdf 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14..bdfd5f1f8d 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d..0c2ccc75a6 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..f297ed5c96
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 0000000000..138a810462
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests run end-to-end and exercise both sides at once. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 0000000000..f77a3e115c
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, whose
+ *	  validation callback always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..4b78c90557
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 0000000000..12fe70c990
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,264 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..80f5258589
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,551 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
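The `$vschars` string above spells out the VSCHAR class from RFC 6749 (Appendix A), which is simply the printable ASCII range %x20-7E. A quick sketch confirming that equivalence (the generated string should match the Perl literal once its escapes are resolved):

```python
# VSCHAR per RFC 6749, Appendix A: every printable ASCII character, %x20-7E.
vschars = "".join(chr(c) for c in range(0x20, 0x7F))

assert len(vschars) == 95
assert vschars.startswith(" !\"#$%&'()*+,-./0123456789")
assert vschars.endswith("{|}~")
```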
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
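The `connstr()` helper packs test parameters into the client_id as Base64-encoded JSON, which oauth_server.py later recovers in `do_POST()`. A round-trip sketch of that encoding in Python (the parameter values here are arbitrary examples):

```python
import base64
import json

# Mirror connstr(): serialize the test parameters as JSON and Base64-encode
# them so they survive inside an oauth_client_id connection-string value.
params = {"stage": "token", "retries": 1}
encoded = base64.b64encode(json.dumps(params).encode("ascii")).decode("ascii")

# Mirror the server side: decode the magic instructions back out of client_id.
decoded = json.loads(base64.b64decode(encoded))
assert decoded == params
```

Note that `encode_base64($json, "")` in the Perl code passes an empty line separator, producing the same unwrapped Base64 that `base64.b64encode` emits.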
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
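The overflow test above exercises the RFC 8628 §3.5 rule that a `slow_down` error increases the client's polling interval by 5 seconds; with `interval => ~0`, that addition has nowhere to go. A minimal sketch of the rule, where the `INT_MAX` guard is only a stand-in for whatever bound the real client enforces (an assumption, not libpq's exact check):

```python
INT_MAX = 2**31 - 1  # stand-in overflow bound; the real client's limit may differ

def next_poll_interval(interval: int, error: str) -> int:
    """RFC 8628 §3.5: on slow_down, the token polling interval grows by 5s."""
    if error == "slow_down":
        if interval > INT_MAX - 5:
            raise OverflowError("slow_down interval overflow")
        return interval + 5
    return interval

assert next_poll_interval(5, "authorization_pending") == 5
assert next_poll_interval(5, "slow_down") == 10
```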
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 0000000000..95cccf90dd
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 0000000000..f0f23d1d1a
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the server implementation was ported from Perl to
+Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..8ec0910202
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
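`_check_issuer()` above mirrors the RFC 8414 convention for issuers with a path component: the well-known segment is inserted between the host and the issuer path, rather than appended as in OIDC discovery. A sketch of that URL construction, with a made-up issuer:

```python
import urllib.parse

def rfc8414_metadata_url(issuer: str) -> str:
    """RFC 8414: insert the well-known segment between host and issuer path."""
    parts = urllib.parse.urlsplit(issuer)
    path = "/.well-known/oauth-authorization-server" + parts.path.rstrip("/")
    return urllib.parse.urlunsplit((parts.scheme, parts.netloc, path, "", ""))

assert (rfc8414_metadata_url("https://example.net/alternate")
        == "https://example.net/.well-known/oauth-authorization-server/alternate")
```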
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
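`_check_authn()` verifies HTTP Basic client authentication as specified by RFC 6749 §2.3.1: each credential is form-urlencoded before the pair is joined with ":" and Base64-encoded. A standalone sketch of building the header the mock server expects (the credential values are made-up examples):

```python
import base64
import urllib.parse

client_id, secret = "f02c6361-0635", "sesame open!"

# RFC 6749 §2.3.1: form-urlencode each credential, join with ":", then
# Base64-encode the pair for the Authorization header.
creds = f"{urllib.parse.quote_plus(client_id)}:{urllib.parse.quote_plus(secret)}"
header = "Basic " + base64.b64encode(creds.encode("utf-8")).decode("ascii")

assert header.startswith("Basic ")
```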
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
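
The per-client retry tracking above (a `defaultdict` of `_TokenState` objects plus an interval check in `token()`) can be exercised in isolation. The sketch below is illustrative only and uses a simulated clock; the endpoint function and its parameters are hypothetical stand-ins, not part of the test suite:

```python
from collections import defaultdict

class TokenState:
    def __init__(self):
        self.retries = 0
        self.min_delay = 5
        self.last_try = None

# Per-client state, created on first access -- mirrors the server's cache.
states = defaultdict(TokenState)

def token_endpoint(client_id, required_retries, now):
    """Return authorization_pending until the client has retried enough."""
    st = states[client_id]
    if st.last_try is not None:
        # A well-behaved client must wait at least the advertised interval.
        assert now - st.last_try >= st.min_delay, "client polled too fast"
    st.last_try = now
    if st.retries < required_retries:
        st.retries += 1
        return {"error": "authorization_pending"}
    del states[client_id]  # exchange complete; drop the cached state
    return {"access_token": "9243959234", "token_type": "bearer"}

# Drive the exchange with a simulated clock instead of real sleeps.
clock = 0.0
resp = token_endpoint("f02c6361", 2, clock)
while "error" in resp:
    clock += 5  # pretend we slept for the interval
    resp = token_endpoint("f02c6361", 2, clock)
```

After the loop, the token response has been returned and the cached state for the client has been discarded, just as `_remove_token_state()` does above.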
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..bf94f091de
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
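
The validator's decision logic is small enough to restate outside of C. The following Python sketch is illustrative only (the function name and dict result are assumptions, not the module's API); it captures how the two GUCs interact with the role from the startup packet:

```python
def validate_token(token, role, authorize_tokens=True, authn_id=None):
    """Mirror the test validator: authorize by default, and fall back to
    the role from the startup packet when no explicit identity is set."""
    return {
        "authorized": authorize_tokens,
        "authn_id": authn_id if authn_id is not None else role,
    }

# Default: token accepted, identity taken from the connection's role.
assert validate_token("t", "alice") == {"authorized": True, "authn_id": "alice"}

# oauth_validator.authn_id overrides the reported identity.
assert validate_token("t", "alice", authn_id="admin")["authn_id"] == "admin"

# oauth_validator.authorize_tokens = off rejects every token.
assert validate_token("t", "alice", authorize_tokens=False)["authorized"] is False
```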
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12..ab7d7452ed 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, it is matched against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e92..7dccf4614a 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a2644a2e65..f5e29b2cc9 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3089,6 +3097,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3483,6 +3493,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

Attachment: v45-0001-libpq-handle-asynchronous-actions-during-SASL.patch (application/octet-stream)
From 3b34da567c98c856dec4c6f736749799b2f679e7 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 8 Jan 2025 09:30:05 -0800
Subject: [PATCH v45 1/4] libpq: handle asynchronous actions during SASL

This adds the ability for a SASL mechanism to signal to PQconnectPoll()
that some arbitrary work must be done, external to the Postgres
connection, before authentication can continue. The intent is for the
upcoming OAUTHBEARER mechanism to make use of this functionality.

To ensure that threads are not blocked waiting for the SASL mechanism to
make long-running calls, the mechanism communicates with the top-level
client via the "altsock": a file or socket descriptor, opaque to this
layer of libpq, which is signaled when work is ready to be done again.
This socket temporarily takes the place of the standard connection
descriptor, so PQsocket() clients should continue to operate correctly
using their existing polling implementations.

A mechanism should set an authentication callback (conn->async_auth())
and a cleanup callback (conn->cleanup_async_auth()), return SASL_ASYNC
during the exchange, and assign conn->altsock during the first call to
async_auth(). When the cleanup callback is called, either because
authentication has succeeded or because the connection is being
dropped, the altsock must be released and disconnected from the PGconn.
---
 src/interfaces/libpq/fe-auth-sasl.h  |  11 ++-
 src/interfaces/libpq/fe-auth-scram.c |   6 +-
 src/interfaces/libpq/fe-auth.c       | 120 ++++++++++++++++++++-------
 src/interfaces/libpq/fe-auth.h       |   3 +-
 src/interfaces/libpq/fe-connect.c    |  93 ++++++++++++++++++++-
 src/interfaces/libpq/fe-misc.c       |  35 +++++---
 src/interfaces/libpq/libpq-fe.h      |   2 +
 src/interfaces/libpq/libpq-int.h     |   6 ++
 8 files changed, 227 insertions(+), 49 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-sasl.h b/src/interfaces/libpq/fe-auth-sasl.h
index f0c6213909..f06f547c07 100644
--- a/src/interfaces/libpq/fe-auth-sasl.h
+++ b/src/interfaces/libpq/fe-auth-sasl.h
@@ -30,6 +30,7 @@ typedef enum
 	SASL_COMPLETE = 0,
 	SASL_FAILED,
 	SASL_CONTINUE,
+	SASL_ASYNC,
 } SASLStatus;
 
 /*
@@ -77,6 +78,8 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	state:	   The opaque mechanism state returned by init()
 	 *
+	 *	final:	   true if the server has sent a final exchange outcome
+	 *
 	 *	input:	   The challenge data sent by the server, or NULL when
 	 *			   generating a client-first initial response (that is, when
 	 *			   the server expects the client to send a message to start
@@ -101,12 +104,18 @@ typedef struct pg_fe_sasl_mech
 	 *
 	 *	SASL_CONTINUE:	The output buffer is filled with a client response.
 	 *					Additional server challenge is expected
+	 *	SASL_ASYNC:		Some asynchronous processing external to the
+	 *					connection needs to be done before a response can be
+	 *					generated. The mechanism is responsible for setting up
+	 *					conn->async_auth/cleanup_async_auth appropriately
+	 *					before returning.
 	 *	SASL_COMPLETE:	The SASL exchange has completed successfully.
 	 *	SASL_FAILED:	The exchange has failed and the connection should be
 	 *					dropped.
 	 *--------
 	 */
-	SASLStatus	(*exchange) (void *state, char *input, int inputlen,
+	SASLStatus	(*exchange) (void *state, bool final,
+							 char *input, int inputlen,
 							 char **output, int *outputlen);
 
 	/*--------
diff --git a/src/interfaces/libpq/fe-auth-scram.c b/src/interfaces/libpq/fe-auth-scram.c
index 557e9c568b..fe18615197 100644
--- a/src/interfaces/libpq/fe-auth-scram.c
+++ b/src/interfaces/libpq/fe-auth-scram.c
@@ -24,7 +24,8 @@
 /* The exported SCRAM callback mechanism. */
 static void *scram_init(PGconn *conn, const char *password,
 						const char *sasl_mechanism);
-static SASLStatus scram_exchange(void *opaq, char *input, int inputlen,
+static SASLStatus scram_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
 								 char **output, int *outputlen);
 static bool scram_channel_bound(void *opaq);
 static void scram_free(void *opaq);
@@ -205,7 +206,8 @@ scram_free(void *opaq)
  * Exchange a SCRAM message with backend.
  */
 static SASLStatus
-scram_exchange(void *opaq, char *input, int inputlen,
+scram_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
 			   char **output, int *outputlen)
 {
 	fe_scram_state *state = (fe_scram_state *) opaq;
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 70753d8ec2..761ee8f88f 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -430,7 +430,7 @@ pg_SSPI_startup(PGconn *conn, int use_negotiate, int payloadlen)
  * Initialize SASL authentication exchange.
  */
 static int
-pg_SASL_init(PGconn *conn, int payloadlen)
+pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 {
 	char	   *initialresponse = NULL;
 	int			initialresponselen;
@@ -448,7 +448,7 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 		goto error;
 	}
 
-	if (conn->sasl_state)
+	if (conn->sasl_state && !conn->async_auth)
 	{
 		libpq_append_conn_error(conn, "duplicate SASL authentication request");
 		goto error;
@@ -607,26 +607,54 @@ pg_SASL_init(PGconn *conn, int payloadlen)
 
 	Assert(conn->sasl);
 
-	/*
-	 * Initialize the SASL state information with all the information gathered
-	 * during the initial exchange.
-	 *
-	 * Note: Only tls-unique is supported for the moment.
-	 */
-	conn->sasl_state = conn->sasl->init(conn,
-										password,
-										selected_mechanism);
 	if (!conn->sasl_state)
-		goto oom_error;
+	{
+		/*
+		 * Initialize the SASL state information with all the information
+		 * gathered during the initial exchange.
+		 *
+		 * Note: Only tls-unique is supported for the moment.
+		 */
+		conn->sasl_state = conn->sasl->init(conn,
+											password,
+											selected_mechanism);
+		if (!conn->sasl_state)
+			goto oom_error;
+	}
+	else
+	{
+		/*
+		 * This is only possible if we're returning from an async loop.
+		 * Disconnect it now.
+		 */
+		Assert(conn->async_auth);
+		conn->async_auth = NULL;
+	}
 
 	/* Get the mechanism-specific Initial Client Response, if any */
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, false,
 								  NULL, -1,
 								  &initialresponse, &initialresponselen);
 
 	if (status == SASL_FAILED)
 		goto error;
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 *
+		 * In non-assertion builds, this postcondition is enforced at time of
+		 * use in PQconnectPoll().
+		 */
+		Assert(conn->async_auth);
+		Assert(conn->cleanup_async_auth);
+
+		*async = true;
+		return STATUS_OK;
+	}
+
 	/*
 	 * Build a SASLInitialResponse message, and send it.
 	 */
@@ -671,7 +699,7 @@ oom_error:
  * the protocol.
  */
 static int
-pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
+pg_SASL_continue(PGconn *conn, int payloadlen, bool final, bool *async)
 {
 	char	   *output;
 	int			outputlen;
@@ -701,11 +729,25 @@ pg_SASL_continue(PGconn *conn, int payloadlen, bool final)
 	/* For safety and convenience, ensure the buffer is NULL-terminated. */
 	challenge[payloadlen] = '\0';
 
-	status = conn->sasl->exchange(conn->sasl_state,
+	status = conn->sasl->exchange(conn->sasl_state, final,
 								  challenge, payloadlen,
 								  &output, &outputlen);
 	free(challenge);			/* don't need the input anymore */
 
+	if (status == SASL_ASYNC)
+	{
+		/*
+		 * The mechanism should have set up the necessary callbacks; all we
+		 * need to do is signal the caller.
+		 */
+		*async = true;
+
+		/*
+		 * The mechanism may optionally generate some output to send before
+		 * switching over to async auth, so continue onwards.
+		 */
+	}
+
 	if (final && status == SASL_CONTINUE)
 	{
 		if (outputlen != 0)
@@ -1013,12 +1055,18 @@ check_expected_areq(AuthRequest areq, PGconn *conn)
  * it. We are responsible for reading any remaining extra data, specific
  * to the authentication method. 'payloadlen' is the remaining length in
  * the message.
+ *
+ * If *async is set to true on return, the client doesn't yet have enough
+ * information to respond, and the caller must temporarily switch to
+ * conn->async_auth() to continue driving the exchange.
  */
 int
-pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
+pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn, bool *async)
 {
 	int			oldmsglen;
 
+	*async = false;
+
 	if (!check_expected_areq(areq, conn))
 		return STATUS_ERROR;
 
@@ -1176,7 +1224,7 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 			 * The request contains the name (as assigned by IANA) of the
 			 * authentication mechanism.
 			 */
-			if (pg_SASL_init(conn, payloadlen) != STATUS_OK)
+			if (pg_SASL_init(conn, payloadlen, async) != STATUS_OK)
 			{
 				/* pg_SASL_init already set the error message */
 				return STATUS_ERROR;
@@ -1185,23 +1233,33 @@ pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn)
 
 		case AUTH_REQ_SASL_CONT:
 		case AUTH_REQ_SASL_FIN:
-			if (conn->sasl_state == NULL)
 			{
-				appendPQExpBufferStr(&conn->errorMessage,
-									 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
-				return STATUS_ERROR;
-			}
-			oldmsglen = conn->errorMessage.len;
-			if (pg_SASL_continue(conn, payloadlen,
-								 (areq == AUTH_REQ_SASL_FIN)) != STATUS_OK)
-			{
-				/* Use this message if pg_SASL_continue didn't supply one */
-				if (conn->errorMessage.len == oldmsglen)
+				bool		final = false;
+
+				if (conn->sasl_state == NULL)
+				{
 					appendPQExpBufferStr(&conn->errorMessage,
-										 "fe_sendauth: error in SASL authentication\n");
-				return STATUS_ERROR;
+										 "fe_sendauth: invalid authentication request from server: AUTH_REQ_SASL_CONT without AUTH_REQ_SASL\n");
+					return STATUS_ERROR;
+				}
+				oldmsglen = conn->errorMessage.len;
+
+				if (areq == AUTH_REQ_SASL_FIN)
+					final = true;
+
+				if (pg_SASL_continue(conn, payloadlen, final, async) != STATUS_OK)
+				{
+					/*
+					 * Append a generic error message unless pg_SASL_continue
+					 * has already set a more specific one.
+					 */
+					if (conn->errorMessage.len == oldmsglen)
+						appendPQExpBufferStr(&conn->errorMessage,
+											 "fe_sendauth: error in SASL authentication\n");
+					return STATUS_ERROR;
+				}
+				break;
 			}
-			break;
 
 		default:
 			libpq_append_conn_error(conn, "authentication method %u not supported", areq);
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index df0a68b0b2..1d4991f899 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -19,7 +19,8 @@
 
 
 /* Prototypes for functions in fe-auth.c */
-extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn);
+extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
+						   bool *async);
 extern char *pg_fe_getusername(uid_t user_id, PQExpBuffer errorMessage);
 extern char *pg_fe_getauthname(PQExpBuffer errorMessage);
 
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index e1cea790f9..85d1ca2864 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -507,6 +507,19 @@ pqDropConnection(PGconn *conn, bool flushInput)
 	conn->cmd_queue_recycle = NULL;
 
 	/* Free authentication/encryption state */
+	if (conn->cleanup_async_auth)
+	{
+		/*
+		 * Any in-progress async authentication should be torn down first so
+		 * that cleanup_async_auth() can depend on the other authentication
+		 * state if necessary.
+		 */
+		conn->cleanup_async_auth(conn);
+		conn->cleanup_async_auth = NULL;
+	}
+	conn->async_auth = NULL;
+	/* cleanup_async_auth() should have done this, but make sure */
+	conn->altsock = PGINVALID_SOCKET;
 #ifdef ENABLE_GSS
 	{
 		OM_uint32	min_s;
@@ -2853,6 +2866,7 @@ PQconnectPoll(PGconn *conn)
 		case CONNECTION_NEEDED:
 		case CONNECTION_GSS_STARTUP:
 		case CONNECTION_CHECK_TARGET:
+		case CONNECTION_AUTHENTICATING:
 			break;
 
 		default:
@@ -3888,6 +3902,7 @@ keep_going:						/* We will come back to here until there is
 				int			avail;
 				AuthRequest areq;
 				int			res;
+				bool		async;
 
 				/*
 				 * Scan the message from current point (note that if we find
@@ -4076,7 +4091,17 @@ keep_going:						/* We will come back to here until there is
 				 * Note that conn->pghost must be non-NULL if we are going to
 				 * avoid the Kerberos code doing a hostname look-up.
 				 */
-				res = pg_fe_sendauth(areq, msgLength, conn);
+				res = pg_fe_sendauth(areq, msgLength, conn, &async);
+
+				if (async && (res == STATUS_OK))
+				{
+					/*
+					 * We'll come back later once we're ready to respond.
+					 * Don't consume the request yet.
+					 */
+					conn->status = CONNECTION_AUTHENTICATING;
+					goto keep_going;
+				}
 
 				/*
 				 * OK, we have processed the message; mark data consumed.  We
@@ -4113,6 +4138,69 @@ keep_going:						/* We will come back to here until there is
 				goto keep_going;
 			}
 
+		case CONNECTION_AUTHENTICATING:
+			{
+				PostgresPollingStatusType status;
+
+				if (!conn->async_auth || !conn->cleanup_async_auth)
+				{
+					/* programmer error; should not happen */
+					libpq_append_conn_error(conn,
+											"internal error: async authentication has no handler");
+					goto error_return;
+				}
+
+				/* Drive some external authentication work. */
+				status = conn->async_auth(conn);
+
+				if (status == PGRES_POLLING_FAILED)
+					goto error_return;
+
+				if (status == PGRES_POLLING_OK)
+				{
+					/* Done. Tear down the async implementation. */
+					conn->cleanup_async_auth(conn);
+					conn->cleanup_async_auth = NULL;
+
+					/*
+					 * Cleanup must unset altsock, both as an indication that
+					 * it's been released, and to stop pqSocketCheck from
+					 * looking at the wrong socket after async auth is done.
+					 */
+					if (conn->altsock != PGINVALID_SOCKET)
+					{
+						Assert(false);
+						libpq_append_conn_error(conn,
+												"internal error: async cleanup did not release polling socket");
+						goto error_return;
+					}
+
+					/*
+					 * Reenter the authentication exchange with the server. We
+					 * didn't consume the message that started external
+					 * authentication, so it'll be reprocessed as if we just
+					 * received it.
+					 */
+					conn->status = CONNECTION_AWAITING_RESPONSE;
+
+					goto keep_going;
+				}
+
+				/*
+				 * Caller needs to poll some more. conn->async_auth() should
+				 * have assigned an altsock to poll on.
+				 */
+				if (conn->altsock == PGINVALID_SOCKET)
+				{
+					Assert(false);
+					libpq_append_conn_error(conn,
+											"internal error: async authentication did not set a socket for polling");
+					goto error_return;
+				}
+
+				return status;
+			}
+
 		case CONNECTION_AUTH_OK:
 			{
 				/*
@@ -4794,6 +4882,7 @@ pqMakeEmptyPGconn(void)
 	conn->verbosity = PQERRORS_DEFAULT;
 	conn->show_context = PQSHOW_CONTEXT_ERRORS;
 	conn->sock = PGINVALID_SOCKET;
+	conn->altsock = PGINVALID_SOCKET;
 	conn->Pfdebug = NULL;
 
 	/*
@@ -7445,6 +7534,8 @@ PQsocket(const PGconn *conn)
 {
 	if (!conn)
 		return -1;
+	if (conn->altsock != PGINVALID_SOCKET)
+		return conn->altsock;
 	return (conn->sock != PGINVALID_SOCKET) ? conn->sock : -1;
 }
 
diff --git a/src/interfaces/libpq/fe-misc.c b/src/interfaces/libpq/fe-misc.c
index 2c60eb5b56..d78445c70a 100644
--- a/src/interfaces/libpq/fe-misc.c
+++ b/src/interfaces/libpq/fe-misc.c
@@ -1049,34 +1049,43 @@ pqWriteReady(PGconn *conn)
  * or both.  Returns >0 if one or more conditions are met, 0 if it timed
  * out, -1 if an error occurred.
  *
- * If SSL is in use, the SSL buffer is checked prior to checking the socket
- * for read data directly.
+ * If an altsock is set for asynchronous authentication, that will be used in
+ * preference to the "server" socket. Otherwise, if SSL is in use, the SSL
+ * buffer is checked prior to checking the socket for read data directly.
  */
 static int
 pqSocketCheck(PGconn *conn, int forRead, int forWrite, pg_usec_time_t end_time)
 {
 	int			result;
+	pgsocket	sock;
 
 	if (!conn)
 		return -1;
-	if (conn->sock == PGINVALID_SOCKET)
+
+	if (conn->altsock != PGINVALID_SOCKET)
+		sock = conn->altsock;
+	else
 	{
-		libpq_append_conn_error(conn, "invalid socket");
-		return -1;
-	}
+		sock = conn->sock;
+		if (sock == PGINVALID_SOCKET)
+		{
+			libpq_append_conn_error(conn, "invalid socket");
+			return -1;
+		}
 
 #ifdef USE_SSL
-	/* Check for SSL library buffering read bytes */
-	if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
-	{
-		/* short-circuit the select */
-		return 1;
-	}
+		/* Check for SSL library buffering read bytes */
+		if (forRead && conn->ssl_in_use && pgtls_read_pending(conn))
+		{
+			/* short-circuit the select */
+			return 1;
+		}
 #endif
+	}
 
 	/* We will retry as long as we get EINTR */
 	do
-		result = PQsocketPoll(conn->sock, forRead, forWrite, end_time);
+		result = PQsocketPoll(sock, forRead, forWrite, end_time);
 	while (result < 0 && SOCK_ERRNO == EINTR);
 
 	if (result < 0)
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index cce9ce60c5..a3491faf0c 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -103,6 +103,8 @@ typedef enum
 	CONNECTION_CHECK_STANDBY,	/* Checking if server is in standby mode. */
 	CONNECTION_ALLOCATED,		/* Waiting for connection attempt to be
 								 * started.  */
+	CONNECTION_AUTHENTICATING,	/* Authentication is in progress with some
+								 * external system. */
 } ConnStatusType;
 
 typedef enum
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index e0d5b5fe0b..2546f9f8a5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -513,6 +513,12 @@ struct pg_conn
 										 * know which auth response we're
 										 * sending */
 
+	/* Callbacks for external async authentication */
+	PostgresPollingStatusType (*async_auth) (PGconn *conn);
+	void		(*cleanup_async_auth) (PGconn *conn);
+	pgsocket	altsock;		/* alternative socket for client to poll */
+
+
 	/* Transient state needed while establishing connection */
 	PGTargetServerType target_server_type;	/* desired session properties */
 	PGLoadBalanceType load_balance_type;	/* desired load balancing
-- 
2.39.3 (Apple Git-146)
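
The callback protocol this patch describes can be modeled outside of libpq. The sketch below is an illustrative Python state machine under assumed names (`Conn`, `connect_poll`, the fake descriptor value), not the C implementation. It shows the contract: the mechanism installs `async_auth`/`cleanup_async_auth` and returns `SASL_ASYNC`, the connection switches to an authenticating state and polls the altsock, and cleanup must release the altsock before the exchange resumes.

```python
from enum import Enum, auto

class Status(Enum):       # mirrors SASLStatus
    COMPLETE = auto()
    CONTINUE = auto()
    ASYNC = auto()

class Polling(Enum):      # mirrors PostgresPollingStatusType
    READING = auto()
    OK = auto()

class Conn:
    def __init__(self):
        self.altsock = None            # stands in for the pgsocket altsock
        self.async_auth = None
        self.cleanup_async_auth = None
        self.state = "AWAITING_RESPONSE"

def mechanism_exchange(conn, ticks):
    """A mechanism needing external work: install callbacks, return ASYNC."""
    remaining = {"n": ticks}

    def async_auth(conn):
        if remaining["n"] == 0:
            return Polling.OK
        remaining["n"] -= 1
        conn.altsock = 42              # pretend descriptor for the caller to poll
        return Polling.READING

    def cleanup(conn):
        conn.altsock = None            # cleanup must release the altsock

    conn.async_auth = async_auth
    conn.cleanup_async_auth = cleanup
    return Status.ASYNC

def connect_poll(conn):
    """Mirrors the CONNECTION_AUTHENTICATING arm of PQconnectPoll()."""
    if conn.state == "AWAITING_RESPONSE":
        if mechanism_exchange(conn, ticks=2) is Status.ASYNC:
            conn.state = "AUTHENTICATING"
            return connect_poll(conn)
    if conn.state == "AUTHENTICATING":
        status = conn.async_auth(conn)
        if status is Polling.OK:
            conn.cleanup_async_auth(conn)
            conn.cleanup_async_auth = None
            assert conn.altsock is None, "cleanup must unset altsock"
            conn.state = "AUTH_OK"     # real code re-enters the SASL exchange
            return Polling.OK
        assert conn.altsock is not None, "mechanism must set a pollable socket"
        return status

conn = Conn()
while connect_poll(conn) is not Polling.OK:
    pass                               # a real client would poll conn.altsock here
```

The two internal assertions correspond to the postconditions the patch enforces in `PQconnectPoll()` when cleanup fails to release the altsock or the mechanism never assigns one.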

#198Daniel Gustafsson
daniel@yesql.se
In reply to: Daniel Gustafsson (#197)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 31 Jan 2025, at 17:23, Daniel Gustafsson <daniel@yesql.se> wrote:

On 28 Jan 2025, at 01:59, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
On Mon, Jan 27, 2025 at 2:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Unless there are objections I aim at committing these patches reasonably soon
to lower the barrier for getting OAuth support committed.

After staring at the patchset even more I committed patches 0001 and 0002 today
as a preparatory step for getting OAuth in. I will work on the 0003 (which is
now 0001) next.

After more staring and commitmessage tweaking I pushed the v45-0001 patch and
it has so far built green in a number of BF animals (as well as in CI).

This to pave the way for the main OAUTHBEARER patch in this set, which I hope
we can reach a final version of very soon.

Attached is a v46 which is v45 minus the now committed patch.

--
Daniel Gustafsson

Attachments:

Attachment: v46-0003-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/octet-stream)
From 0a748be718000156a8380733134d8029ba9b7654 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v46 3/3] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2659 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6440 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3afea832bc..06efe5f9b0 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -321,6 +321,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -405,8 +406,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 80a6b1d57d..a0391a904a 100644
--- a/meson.build
+++ b/meson.build
@@ -3424,6 +3424,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3592,6 +3595,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86..236057cd99 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 0000000000..0e8f027b2e
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 0000000000..b0695b6287
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 0000000000..acf339a589
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 0000000000..20e72a404a
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 0000000000..8372376ede
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 0000000000..ea1aeaed48
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2659 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    The client is expected to complete the entire failing handshake (including
+    the dummy response required by RFC 7628) rather than disconnecting as soon
+    as it receives the error response.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider server thread did not shut down in time")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
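(Aside: the two-step `_fields_` assignment above is needed because the struct's callback members point back at the structure itself, and `ctypes.POINTER` can only reference a class object that already exists. A minimal, self-contained sketch of the same pattern; the names here are illustrative, not libpq's:)

```python
import ctypes


# Declare the structure first, with no fields, so POINTER() can refer to it.
class _SelfRef(ctypes.Structure):
    pass


# A callback type whose argument points back at the structure being defined.
_CB = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.POINTER(_SelfRef))

# Now that the class object exists, fill in the fields.
_SelfRef._fields_ = [
    ("value", ctypes.c_int),
    ("cb", _CB),
]


@_CB
def _double(p):
    # Dereference the pointer argument and use the struct's data.
    return p.contents.value * 2


s = _SelfRef(value=21, cb=_double)
assert s.cb(ctypes.byref(s)) == 42
```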
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # return value to use when the test sets no impl
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response until the configured number of
+            # retries has been used up.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
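(Aside: the discovery URI above is built with `urllib.parse.urljoin`, which resolves relative references per RFC 3986; that is what lets the parametrized "paths" smuggle in query strings, fragments, and even whole absolute URIs. A quick stdlib sketch, with an illustrative host:)

```python
import urllib.parse

base = "https://localhost:16384"

# Ordinary absolute paths are appended to the authority.
assert urllib.parse.urljoin(base, "/keys") == "https://localhost:16384/keys"

# A leading '?' or '#' keeps the base path and replaces only the query or
# fragment component.
assert urllib.parse.urljoin(base, "?q") == "https://localhost:16384?q"
assert urllib.parse.urljoin(base, "#f") == "https://localhost:16384#f"

# A reference with its own scheme replaces the base entirely.
assert urllib.parse.urljoin(base, "file:///x") == "file:///x"
```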
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
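(Aside: these character classes feed test_url_encoding below, which expects client credentials to be form-urlencoded before being base64'd into the Basic Authorization header, per RFC 6749 Sec. 2.3.1 and Appendix B. A stdlib sketch of the expected transformation, with illustrative credential values:)

```python
import base64
import urllib.parse

# Spaces must use the '+' spelling, and a literal '+' must be escaped as %2B.
assert urllib.parse.quote_plus(" + ") == "+%2B+"

# Each credential is form-encoded separately, joined with ':', and base64'd.
client_id, secret = "my client", "hunter2"
creds = ":".join(
    (urllib.parse.quote_plus(client_id), urllib.parse.quote_plus(secret))
)
header = "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")
assert header == "Basic bXkrY2xpZW50Omh1bnRlcjI="
```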
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    # With no explicit "interval" in the response, the client must fall back to
+    # the RFC 8628 default of five seconds; otherwise use a short interval to
+    # keep the test fast.
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for the bad_json_schema tests below
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the failure response for the SASL exchange. It contains a link
+    # to the discovery document hosted by the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network if a test regresses,
+# an invalid IPv4 address (256.256.256.256) is used as the hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client must reply to an error challenge with a
+            # dummy ^A (0x01) response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
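For context, the messages this test fakes follow RFC 7628's key/value framing. A minimal standalone sketch of the client-first OAUTHBEARER message (illustrative only, not the patch's implementation):

```python
def oauthbearer_initial_response(token: str) -> bytes:
    """Builds the RFC 7628 client-first message: a gs2 header, then
    kvsep-delimited key/value pairs, terminated by a final kvsep."""
    kvsep = b"\x01"
    gs2 = b"n,,"  # no channel binding, no authzid
    return gs2 + kvsep + b"auth=Bearer " + token.encode("ascii") + kvsep + kvsep

# The dummy response required after an error challenge is a lone kvsep,
# which is the b"\x01" the test above asserts on.
assert oauthbearer_initial_response("abc") == b"n,,\x01auth=Bearer abc\x01\x01"
```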
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (equivalent to INT_MAX in limits.h on platforms where int is 32 bits)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
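The overflow provoked above comes from RFC 8628, which requires the client to add 5 seconds to its polling interval on every slow_down error. A sketch of the guarded arithmetic (illustrative; the names here are not libpq's, which reports the error string matched above rather than wrapping):

```python
INT_MAX = 2**31 - 1  # matches a 32-bit C int

def bump_interval(interval: int) -> int:
    # RFC 8628 section 3.5: on slow_down, add 5 seconds to the polling
    # interval. Detect overflow instead of wrapping, which is the
    # failure mode this test provokes.
    if interval > INT_MAX - 5:
        raise OverflowError("slow_down interval overflow")
    return interval + 5

assert bump_interval(5) == 10
```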
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG", raising=False)
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 0000000000..1a73865ee4
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command-line options to pytest. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 0000000000..e137df852e
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 0000000000..ef809e288a
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
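For reference, the packed value for protocol 3.0 is the familiar 196608 from the Postgres sources. A standalone sketch of the round trip (the unpacking helpers are illustrative, not part of pq3.py):

```python
def protocol(major, minor):
    # Pack major/minor into the single 32-bit version integer used on
    # the wire (same formula as the helper above).
    return (major << 16) | minor

def protocol_major(version):
    return version >> 16

def protocol_minor(version):
    return version & 0xFFFF

assert protocol(3, 0) == 196608  # the v3.0 startup constant
assert (protocol_major(196608), protocol_minor(196608)) == (3, 0)
```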
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
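The +8 above accounts for the length field itself (4 bytes) plus the protocol field (4 bytes). A minimal sketch of the same framing in plain Python (illustrative; pq3.py builds this via construct):

```python
import struct

def build_startup(payload: bytes, proto: int) -> bytes:
    # The startup packet has no type byte: just a self-inclusive length,
    # the protocol version, and the null-terminated key/value payload.
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup(b"user\x00alice\x00\x00", 3 << 16)
assert struct.unpack("!i", pkt[:4])[0] == len(pkt)
```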
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
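Regular v3 messages differ from the startup packet in having a leading type byte, with the length still counting itself (the +4 above). A plain-Python sketch of that framing (illustrative only):

```python
import struct

def frame(msg_type: bytes, payload: bytes) -> bytes:
    # 1-byte type, then a length covering itself (4 bytes) plus the
    # payload, then the payload.
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

q = frame(b"Q", b"SELECT 1;\x00")
assert q[:1] == b"Q"
assert struct.unpack("!I", q[1:5])[0] == len(q) - 1
```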
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    unprintable = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            unprintable += bytes([i])
+
+    unprintable += bytes(range(128, 256))
+
+    return bytes.maketrans(bytes(unprintable), b"." * len(unprintable))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member that should be assigned to the packet. If payload_data is given, it
+    will be used as the packet payload; otherwise the key/value pairs in
+    payloadkw will be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
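
(Aside for reviewers: the hexdump output that _DebugStream produces above can
be sketched standalone. This is a minimal reimplementation of the
translation-map idea for illustration, not code from the patch itself.)

```python
# Minimal sketch of _DebugStream's hexdump formatting: unprintable or
# non-ASCII bytes are rendered as '.', mirroring _hexdump_translation_map().
def make_translation_map():
    unprintable = bytearray(i for i in range(128) if not chr(i).isprintable())
    unprintable += bytes(range(128, 256))
    return bytes.maketrans(bytes(unprintable), b"." * len(unprintable))


def hexdump_line(data, offset=0, width=16):
    # One dump line: offset, left-justified hex column, then ASCII rendering.
    chunk = data[offset : offset + width]
    hexwidth = width * 3 - 1
    text = chunk.translate(make_translation_map()).decode("ascii")
    return "%04X:\t%*s\t%s" % (offset, -hexwidth, chunk.hex(" "), text)
```
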
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 0000000000..ab7a6e7fb9
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 0000000000..0dfcffb83e
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 0000000000..42af80c73e
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
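
(The connect fixture relies on a factory-plus-ExitStack pattern: each
connection a test opens is registered on one shared stack, so everything is
torn down LIFO when the fixture exits. A self-contained sketch of just that
pattern, with illustrative names:)

```python
import contextlib

closed = []


class Resource:
    """Stand-in for a socket/pq3 connection; records when it is closed."""

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        closed.append(self.name)


with contextlib.ExitStack() as stack:
    # The factory hands out live resources while deferring cleanup to the
    # shared stack, much as conn_factory() defers to its ExitStack.
    def factory(name):
        return stack.enter_context(Resource(name))

    factory("first")
    factory("second")

# Once the with-block exits, cleanup has run in reverse (LIFO) order.
```
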
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 0000000000..85534b9cc9
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 0000000000..415748b9a6
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = authn_id;
+	}
+
+	return res;
+}
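
(For readers who'd rather not page back through the C: the validator's
decision table can be restated as a small Python sketch. This mirrors
test_validate() above; the function and parameter names here are illustrative
only, not part of the module's API.)

```python
# Python restatement of oauthtest.c's test_validate() decision logic.
# The keyword parameters correspond to the module's oauthtest.* GUCs.
def validate(token, role, *, expected_bearer="", set_authn_id=False,
             authn_id="", reflect_role=False):
    result = {"authorized": False, "authn_id": None}

    if reflect_role:
        # Ignore the bearer token entirely; trust the requested role.
        result["authorized"] = True
        result["authn_id"] = role
    else:
        if expected_bearer and token == expected_bearer:
            result["authorized"] = True
        if set_authn_id:
            result["authn_id"] = authn_id

    return result
```
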
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 0000000000..2839343ffa
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in bytes may be specified; if unset, a small 16-byte token will be
+    generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. As an alternative to a bearer token, the initial response's
+    auth field may be specified explicitly to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with expected behavior.
+    The setting will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
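As an aside for reviewers, here is a minimal sketch of the OAUTHBEARER client-first message that the tests above exercise, per RFC 7628: a GS2 header (`n,,` = no channel binding, no authzid), a 0x01 key-value separator, an `auth` key carrying the HTTP-style Authorization value, and a double separator terminating the key-value list. (The helper name is illustrative; the actual tests go through `send_initial_response()`.)

```python
# Key-value separator from RFC 7628 (a single 0x01 byte).
KVSEP = b"\x01"


def build_client_first(token: str, *, gs2_header: bytes = b"n,,") -> bytes:
    """Builds an OAUTHBEARER client-first message for a bearer token."""
    auth = b"auth=Bearer " + token.encode("ascii")
    # GS2 header, then the kv list, then the double-kvsep terminator.
    return gs2_header + KVSEP + auth + KVSEP + KVSEP


# e.g. build_client_first("abcd") == b"n,,\x01auth=Bearer abcd\x01\x01",
# matching the literal used in test_oauth_empty_initial_response.
```

The malformed-message cases in `test_oauth_bad_initial_response` are all perturbations of this layout: a bad GS2 header, a missing kvsep, a missing terminator, and so on.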
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 0000000000..02126dba79
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
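For context, the v3 wire framing that `pq3.send()`/`pq3.recv1()` implement is a one-byte type code followed by a 32-bit big-endian length that counts itself plus the payload (but not the type byte). A rough sketch, independent of pq3:

```python
import struct


def frame(msg_type: bytes, payload: bytes) -> bytes:
    """Frames a PostgreSQL v3 protocol message: type byte + int32 len + payload.

    The length field covers itself (4 bytes) and the payload, not the type byte.
    """
    return msg_type + struct.pack("!I", 4 + len(payload)) + payload


# An empty Query ('Q') carries just its NUL terminator:
# frame(b"Q", b"\x00") == b"Q\x00\x00\x00\x05\x00"
```

This is why the sanity check above sends `query=b""` and still gets an EmptyQueryResponse: the payload is only the terminating NUL.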
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 0000000000..dee4855fc0
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
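To make the expected strings in these tests easier to review, here is a sketch of the hex-dump layout they encode: 16 bytes per row, a four-digit hex offset, a hex column padded to a fixed 47-character width, then a printable-ASCII rendering. This only documents the format; `_DebugStream`'s real implementation lives in pq3.

```python
def hexdump_row(offset: int, chunk: bytes, prefix: str = "< ") -> str:
    """Formats up to 16 bytes in the _DebugStream dump layout."""
    # 16 bytes at 3 chars each, minus the trailing space: 47-char hex column.
    hexes = " ".join(f"{b:02x}" for b in chunk).ljust(47)
    # Non-printable bytes render as '.' in the ASCII column.
    text = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
    return f"{prefix}{offset:04x}:\t{hexes}\t{text}\n"
```

For example, `hexdump_row(0, b"abcdefghijklmnop")` reproduces the first expected line in `test_DebugStream_read`, and the `"> "` prefix used for writes is just a different `prefix` argument.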
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 0000000000..7c6817de31
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra trailing data",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 0000000000..075c02c1ca
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
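
As a quick sanity check of the Plaintext layout above (type, legacy_record_version, length, fragment), the five-byte record header can also be unpacked by hand with the standard library, without the construct dependency:

```python
# Manually unpack a TLSPlaintext record header: 1-byte content type,
# 2-byte legacy record version, 2-byte length, then the fragment.
import struct

raw = b"\x16\x03\x01\x00\x05hello"  # handshake record carrying b"hello"
rtype, version, length = struct.unpack("!BHH", raw[:5])
fragment = raw[5:5 + length]
```

Here rtype is 22 (handshake), matching ContentType above.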
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 0000000000..804307ee12
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. pytest and pytest-tap are always
+# needed, regardless of the test's own requirements.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba..ffdf760d79 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.39.3 (Apple Git-146)

[Attachment: v46-0002-XXX-fix-libcurl-link-error.patch (application/octet-stream)]
From dde6a4afb3b702c9265465d73c65b684bf1b9381 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v46 2/3] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But FreeBSD 13.3 is EOL, so it's not clear whether anyone would be
interested in a bug report, and a FreeBSD 14 Cirrus image is in
progress. Hack past it for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index c192a07770..3afea832bc 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.39.3 (Apple Git-146)

[Attachment: v46-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)]
From 93f059c214722f19d1881d8241905412ebe1faab Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v46 1/3] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).
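
For reference, the initial client response defined by RFC 7628 has
roughly the following shape; this sketch uses a placeholder token, and
real messages may carry additional key/value pairs such as host and
port:

```python
# Sketch of an OAUTHBEARER initial client response per RFC 7628.
# The token is a placeholder, not a real credential.
KVSEP = b"\x01"

def oauthbearer_initial_response(token, authzid=b""):
    # gs2 header: "n,," when no authorization identity is requested.
    gs2 = b"n," + (b"a=" + authzid if authzid else b"") + b","
    # Key/value pairs follow, each terminated by 0x01, plus a final 0x01.
    return gs2 + KVSEP + b"auth=Bearer " + token + KVSEP + KVSEP

msg = oauthbearer_initial_response(b"placeholder-token")
```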

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.
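
For illustration only, an HBA entry using the new method might look
something like this; the method name "oauth" comes from the patch, but
the option names shown here are hypothetical:

    # pg_hba.conf (sketch; option names are hypothetical)
    host    all    all    0.0.0.0/0    oauth    issuer="https://oauth.example.org"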

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   42 +
 configure                                     |  279 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  393 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   66 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2635 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1141 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  264 ++
 .../modules/oauth_validator/t/001_server.pl   |  551 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 59 files changed, 8598 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index cfe2117e02..c192a07770 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -222,6 +222,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -315,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -692,8 +695,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a..86a3750f9e 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,45 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is threadsafe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb436..33422d2411 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,123 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d9..b6d02f5ecc 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85a..f84085dbac 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; one must be obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, expressed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider user names and
+        database user names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 38244409e3..d53595f895 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c..25fb99cee6 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c06..96e433179b 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c20..9a69ffbc5b 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document</emphasis>: a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10129,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
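+       <para>
+        As an illustration only (the helper name
+        <function>handle_oauth_token</function> is hypothetical), a delegating
+        hook might be structured like this:
+<programlisting>
+static PQauthDataHook_type prev_hook;
+
+static int
+my_hook(PGauthData type, PGconn *conn, void *data)
+{
+    if (type == PQAUTHDATA_OAUTH_BEARER_TOKEN)
+        return handle_oauth_token(conn, data);  /* application-defined */
+
+    /* Not handled here; delegate to the previous hook in the chain. */
+    return prev_hook(type, conn, data);
+}
+
+/* During application startup: */
+prev_hook = PQgetAuthDataHook();
+PQsetAuthDataHook(my_hook);
+</programlisting>
+       </para>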
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
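+        <para>
+         A replacement prompt might be sketched as follows, assuming a
+         hypothetical <function>gui_show_prompt</function> provided by the
+         application:
+<programlisting>
+static PQauthDataHook_type prev_hook;   /* saved via PQgetAuthDataHook() */
+
+static int
+prompt_hook(PGauthData type, PGconn *conn, void *data)
+{
+    PGpromptOAuthDevice *prompt;
+
+    if (type != PQAUTHDATA_PROMPT_OAUTH_DEVICE)
+        return prev_hook(type, conn, data);
+
+    /* Display the URL and code using the application's own UI. */
+    prompt = (PGpromptOAuthDevice *) data;
+    gui_show_prompt(prompt->verification_uri, prompt->user_code);
+    return 1;                           /* success */
+}
+</programlisting>
+        </para>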
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
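+        <para>
+         As a sketch, an application that already caches tokens and only uses
+         the synchronous connection APIs might fill in the request like this
+         (<function>lookup_cached_token</function> is a hypothetical
+         application function returning a malloc'd string, or NULL):
+<programlisting>
+static PQauthDataHook_type prev_hook;   /* saved via PQgetAuthDataHook() */
+
+static void
+cleanup_token(PGconn *conn, PGoauthBearerRequest *request)
+{
+    free(request->token);
+    request->token = NULL;
+}
+
+static int
+bearer_hook(PGauthData type, PGconn *conn, void *data)
+{
+    PGoauthBearerRequest *request = (PGoauthBearerRequest *) data;
+
+    if (type != PQAUTHDATA_OAUTH_BEARER_TOKEN)
+        return prev_hook(type, conn, data);
+
+    /* The token must remain valid until cleanup_token is called. */
+    request->token = lookup_cached_token(request->openid_configuration,
+                                         request->scope);
+    if (request->token == NULL)
+        return -1;                      /* abandon the connection attempt */
+
+    request->cleanup = cleanup_token;
+    return 1;
+}
+</programlisting>
+        </para>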
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10473,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using Curl inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of Curl that are built to support threadsafe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
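+  <para>
+   For example (a sketch only, with error handling omitted), an application
+   might share one mutex between <application>libpq</application> and its own
+   libcurl initialization:
+<programlisting>
+#include &lt;pthread.h&gt;
+#include &lt;curl/curl.h&gt;
+#include "libpq-fe.h"
+
+static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
+
+static void
+lock_handler(int acquire)
+{
+    if (acquire)
+        pthread_mutex_lock(&amp;init_lock);
+    else
+        pthread_mutex_unlock(&amp;init_lock);
+}
+
+/* During single-threaded startup: */
+PQregisterThreadLock(lock_handler);
+
+/* Later, in any thread that might initialize libcurl: */
+lock_handler(1);
+curl_global_init(CURL_GLOBAL_ALL);
+lock_handler(0);
+</programlisting>
+  </para>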
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 0000000000..d0bca9196d
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    TODO
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct pg_ident maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
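+  <para>
+   A minimal module skeleton might therefore look like this (the header name
+   and <function>my_validate</function> are illustrative; consult the server
+   headers for the exact include path):
+<programlisting>
+#include "postgres.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *my_validate(ValidatorModuleState *state,
+                                          const char *token,
+                                          const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+    .validate_cb = my_validate,     /* startup_cb/shutdown_cb are optional */
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+    return &amp;callbacks;
+}
+</programlisting>
+  </para>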
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c5850..af476c82fc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172..3bd9e68e6c 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its built-in flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys), which are currently ignored by the
+    server.
+   </para>
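Concretely, a client initial response carrying a token could be assembled as below: the "n,," GS2 header (no channel binding, no authzid), a kvsep, the <literal>auth</literal> kvpair, and the terminating kvseps, per RFC 7628 Sec. 3.1. The helper name and token value are illustrative.

```c
#include <stdio.h>

#define KVSEP "\x01"            /* RFC 7628 key/value separator byte */

/*
 * Build an OAUTHBEARER client initial response into buf. The result is the
 * payload carried by the SASLInitialResponse message.
 */
static int
build_initial_response(char *buf, size_t buflen, const char *token)
{
    /* gs2-header kvsep kvpair kvsep kvsep */
    return snprintf(buf, buflen,
                    "n,," KVSEP "auth=Bearer %s" KVSEP KVSEP,
                    token);
}
```
Note that on the wire the message body is length-prefixed rather than NUL-terminated; the server checks that the declared length matches the string length.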
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an
+      error <literal>status</literal>, alongside a well-known URI and scopes
+      that the client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing only a single
+      <literal>0x01</literal> byte (the kvsep) to finish its half of the
+      discovery exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the token provider's
+      instructions. If the client is authorized to connect, the server sends
+      an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
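For illustration, the error <literal>status</literal> sent during the discovery exchange (step 3) is a JSON document of the shape produced by the server's error response: the <literal>status</literal> value is fixed, while the issuer and scope values below are examples only.

```json
{
  "status": "invalid_token",
  "openid-configuration": "https://example.org/.well-known/openid-configuration",
  "scope": "openid"
}
```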
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bd..0e5e8e8f30 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 1ceadb9a83..80a6b1d57d 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,67 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports threadsafe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is threadsafe')
+      elif r.returncode() == 1
+        message('curl_global_init is not threadsafe')
+      else
+        message('curl_global_init failed; assuming not threadsafe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3044,6 +3105,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3720,6 +3785,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc..702c451714 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf..3b620bac5a 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a4..98eb2a8242 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 0000000000..6155d63a11
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here indicates a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc82..0f65014e64 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d..332fad2783 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
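Put together, a hedged example of what these options might look like in pg_hba.conf (the issuer URL and library name are placeholders):

```
# TYPE  DATABASE  USER  ADDRESS    METHOD
host    all       all   samehost   oauth  issuer="https://issuer.example.com" scope="openid" validator=my_validator
# Let the validator make the authorization decision itself, bypassing pg_ident:
host    all       all   samehost   oauth  issuer="https://issuer.example.com" scope="openid" delegate_ident_mapping=1
```

The `validator` option may be omitted when oauth_validator_libraries contains exactly one entry.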
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e..31aa2faae1 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a3..b64c8dea97 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c45..b62c3d944c 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index ce7534d4d2..7747a09c2a 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4832,6 +4833,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index c40b7a3121..9184ea0f1d 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 0000000000..8fe5626778
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de3..25b5742068 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7..3657f182db 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 0000000000..4fcdda7430
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798ab..c04ee38d08 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272..90b0b65db6 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca..9b789cbec0 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 0000000000..2407200ea9
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2635 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not contain arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Compare only the prefix, rather than the whole string, since the
+	 * header may continue with media type parameters.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different from what was requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a debug callback to capture request/response traffic from
+		 * libcurl. The callback produces output only while CURLOPT_VERBOSE
+		 * is enabled, so set that too.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum amount of data passed to
+ * a single invocation is defined by CURL_MAX_WRITE_SIZE, which defaults to
+ * 16kB (and can only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* Abort the transfer if the response would exceed the maximum size. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * If appending the data exhausted memory, signal an error to abort the
+	 * transfer.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * our client has no way to tell us which sockets are ready. (It
+		 * doesn't even know there are sockets to begin with.)
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports the device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides a device authorization endpoint, and both the token and device
+ * authorization endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild returning 403 for
+	 * errors, which would violate the specification. For now we stick to the
+	 * specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * error from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * threadsafe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
+								"\tCurl initialization was reported threadsafe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 0000000000..cc53e2bdd1
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1141 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
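For reference, the server error result being parsed here is a JSON object along these lines (illustrative values; per RFC 7628 only "status" is required, and "scope" and "openid-configuration" may be absent):

```json
{
  "status": "invalid_token",
  "scope": "openid",
  "openid-configuration": "https://example.com/.well-known/openid-configuration"
}
```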
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 0000000000..3259872168
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f..ec7a923604 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f899..de98e0d20c 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864..d5051f5e82 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c..5f8d608261 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a5..f36f7f19d5 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3..19f4a52a97 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a4..60e13d5023 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6..4ce22ccbdf 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14..bdfd5f1f8d 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d..0c2ccc75a6 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 0000000000..5dcb3ff972
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 0000000000..f297ed5c96
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 0000000000..138a810462
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 0000000000..f77a3e115c
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks; this
+ *	  validator always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 0000000000..4b78c90557
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 0000000000..12fe70c990
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,264 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 0000000000..80f5258589
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,551 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 0000000000..95cccf90dd
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 0000000000..f0f23d1d1a
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need
+PGOAUTHDEBUG=UNSAFE to be set in the environment before it will talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 0000000000..8ec0910202
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 0000000000..bf94f091de
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12..ab7d7452ed 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, it is matched against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e92..7dccf4614a 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93de..f3e3592eb7 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3089,6 +3097,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3485,6 +3495,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

#199Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#198)
12 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 6, 2025 at 2:02 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Attached is a v46 which is v45 minus the now committed patch.

Thank you! Attached is v47, which creeps ever closer to the finish line.

For ease of review, v47-0001 is identical to v46-0001. The new changes
are split into separate fixup! commits which I'll squash in the next
round. They're ordered roughly in order of increasing complexity:

- 0002 removes and/or rewrites TODO comments that I do not plan to implement.
- 0003 makes the kqueue implementation register a one-shot timer
rather than a repeating timer, to match the epoll implementation.

- 0004 fixes a bug in backend cleanup:

I noticed that there was a "private state cookie changed" error in
some of the test logs, but none of the tests had actually failed.
Changing that to a PANIC revealed that before_shmem_exit() is too late
to run the cleanup function, since the state allocation has already
been released. I've swapped that out for a reset callback.

- 0005 warns at configure time if libcurl doesn't have a nonblocking
DNS implementation.
- 0006 augments bare Asserts during client-side JSON parsing with code
that will fail gracefully in production builds as well.
- 0007 escapes binary data during the printing of libcurl debug
output. (If you're having a bad enough day to need the debug spray,
you're probably not in the mood for the sound of a hundred BELs.)
- 0008 parses and passes through the expires_in and optional
verification_uri_complete fields from the device endpoint to any
custom user prompt. (We do not use them ourselves, at the moment. But
after seeing some nice demos of RHEL/PAM/sssd support for device flow
QR codes at FOSDEM, I think we're definitely going to want to make
those available to devs.)
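
To illustrate what a custom prompt might do with those two optional
RFC 8628 fields once they're passed through: the field names below follow
the spec, but prompt_device() itself is a hypothetical helper for this
sketch, not part of the patch or the libpq hook API.

```python
# Sketch of a device prompt that prefers verification_uri_complete (which
# already embeds the user code, e.g. for a QR code) and reports expiry.
def prompt_device(verification_uri, user_code,
                  verification_uri_complete=None, expires_in=None):
    """Build the message a client might display to the end user."""
    if verification_uri_complete:
        # The complete URI embeds the user code, so no code entry is needed.
        msg = f"Scan or visit: {verification_uri_complete}"
    else:
        msg = f"Visit {verification_uri} and enter the code: {user_code}"
    if expires_in is not None:
        msg += f" (expires in {expires_in} seconds)"
    return msg

print(prompt_device("https://example.com/device", "FPQ2-M4BG",
                    expires_in=300))
```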

- 0009 is gold-plating for the OAUTH_STEP_WAIT_INTERVAL state:

If PQconnectPoll client calls us early while we're waiting for the
ping interval to expire, we will immediately send the next request
even if we should be waiting. That bothers me a bit, because if our
implementation gets a tempban from an OAuth provider because one of
our clients accidentally implemented a busy-loop, I think we're likely
to get the blame. Ideally we should kick back up to the caller and
tell them to wait longer, instead.
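
The behavior I'm after (and what the test server's min_delay assertion in
oauth_server.py checks for) can be sketched roughly like this; PollGate is
an illustrative stand-in, not the actual libpq state machine:

```python
# Sketch: refuse to issue a new token request until the device-flow
# polling interval has elapsed, telling the caller to keep waiting.
import time


class PollGate:
    """Gate token requests on the RFC 8628 polling interval."""

    def __init__(self, interval):
        self.interval = interval
        self.last_try = None

    def may_request(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_try is not None and now - self.last_try < self.interval:
            return False  # too early: caller should wait, not retry
        self.last_try = now
        return True


gate = PollGate(interval=5)
assert gate.may_request(now=100.0) is True   # first request goes through
assert gate.may_request(now=102.0) is False  # too early; keep waiting
assert gate.may_request(now=105.5) is True   # interval elapsed
```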

Checking to see if the timer has expired is easy enough for
epoll/timerfd, but I wasn't able to find an easy way to do that with a
single kqueue. Instead, I split the kqueue in two and treat the second
one as the timer. (If it becomes readable, the timer has expired.)
There is an additional advantage in that I get to remove some `#ifdef
HAVE_SYS_EPOLL_H` sections; the two implementations are closer in
spirit now.

Thanks,
--Jacob

Attachments:

since-v46.diff.txttext/plain; charset=US-ASCII; name=since-v46.diff.txtDownload
 1:  ac5b3e053a3 =  1:  6747b7cc795 Add OAUTHBEARER SASL mechanism
 -:  ----------- >  2:  483129c1ca9 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  3:  75d98784ded fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  4:  fd60ceb4c84 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  5:  595362ef2c1 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  6:  f73c042adc9 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  7:  298839b69f0 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  8:  1cf48a8f835 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  9:  27135876559 fixup! Add OAUTHBEARER SASL mechanism
 2:  4190ef1bac7 = 10:  d8c1f298080 XXX fix libcurl link error
 3:  11cf045bd21 ! 11:  dbf305d0489 DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +    _fields_ = [
     +        ("verification_uri", ctypes.c_char_p),
     +        ("user_code", ctypes.c_char_p),
    ++        ("verification_uri_complete", ctypes.c_char_p),
    ++        ("expires_in", ctypes.c_int),
     +    ]
     +
     +
    @@ src/test/python/client/test_oauth.py (new)
     +        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
     +        assert call.verification_uri.decode() == verification_url
     +        assert call.user_code.decode() == user_code
    ++        assert call.verification_uri_complete is None
    ++        assert call.expires_in == 5
     +
     +    if not success:
     +        # The client should not try to connect again.
v47-0001-Add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v47-0001-Add-OAUTHBEARER-SASL-mechanism.patchDownload
From 6747b7cc795aab4c8a227ee123349106c5d2ec0a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v47 01/11] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   42 +
 configure                                     |  279 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  393 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |   66 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  860 ++++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2635 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1141 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   82 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  264 ++
 .../modules/oauth_validator/t/001_server.pl   |  551 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 59 files changed, 8598 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index cfe2117e02e..c192a077701 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -222,6 +222,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -315,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -692,8 +695,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..86a3750f9e5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,45 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is threadsafe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb4367..33422d24112 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,123 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support has to be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it's obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 38244409e3c..d53595f8951 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..96e433179b9 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c208..9a69ffbc5b3 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10129,278 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   TODO
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG</envar> to <literal>UNSAFE</literal>. This
+    functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
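For local development, enabling the mode might look like the following sketch (the CA file path and connection parameters are illustrative only):

```shell
# Local testing ONLY: these settings defeat production security measures.
export PGOAUTHDEBUG=UNSAFE          # permit HTTP, verbose traffic, zero retry
export PGOAUTHCAFILE=./test-ca.pem  # hypothetical test CA bundle
psql 'host=localhost oauth_client_id=test-client'
```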
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10473,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using Curl inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of Curl that are built to support threadsafe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
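The cooperative-locking requirement can be sketched as follows (not from the patch). The mutex-based handler stands in for whatever lock the application registers with <function>PQregisterThreadLock</function>, and `fake_curl_global_init` is a stand-in for <function>curl_global_init</function> so the sketch is self-contained:

```c
#include <assert.h>
#include <pthread.h>

/* Handler shape matching libpq's thread-lock callback. */
typedef void (*pgthreadlock_t) (int acquire);

static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
static int  curl_initialized = 0;

/* This handler would be registered via PQregisterThreadLock(). */
static void
my_threadlock(int acquire)
{
    if (acquire)
        pthread_mutex_lock(&init_mutex);
    else
        pthread_mutex_unlock(&init_mutex);
}

/* Stand-in for curl_global_init(CURL_GLOBAL_ALL). */
static void
fake_curl_global_init(void)
{
    curl_initialized++;
}

/* Initialize libcurl at most once, under the shared lock. */
static void
init_curl_locked(void)
{
    my_threadlock(1);
    if (!curl_initialized)
        fake_curl_global_init();
    my_threadlock(0);
}
```

With a threadsafe-capable Curl build, none of this is necessary.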
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    A validator implementation has three separate responsibilities, described
+    in detail below: validating the bearer token itself, authorizing the
+    client's access, and authenticating the end user.
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
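The offline checks described above (issuer, audience, validity period) can be sketched as follows. This is illustrative only and not from the patch: it assumes the token's signature has already been cryptographically verified and its claims decoded into a struct, with field names borrowed from the JWT registered claims (`iss`/`aud`/`exp`):

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/* Hypothetical decoded claims from an already-verified token. */
typedef struct
{
    const char *issuer;         /* where is this token from? */
    const char *audience;       /* who is this token for? */
    time_t      expires_at;     /* when can this token be used until? */
} TokenClaims;

/* Returns 1 if every claim matches expectations, 0 otherwise. */
static int
check_claims(const TokenClaims *claims, const char *trusted_issuer,
             const char *my_audience, time_t now)
{
    if (strcmp(claims->issuer, trusted_issuer) != 0)
        return 0;
    if (strcmp(claims->audience, my_audience) != 0)
        return 0;
    if (now >= claims->expires_at)
        return 0;
    return 1;
}
```

A real module must follow the provider's documented verification steps exactly; this only shows the shape of the claim comparisons.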
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct pg_ident maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to identify the library as an OAuth
+   validator module, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
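A minimal initialization function might look like this sketch (not from the patch). The typedefs are stand-in copies of the declarations above so the example is self-contained, and `my_validate` is a hypothetical placeholder; a real module would also include the usual extension boilerplate such as <literal>PG_MODULE_MAGIC</literal>:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the server-side declarations shown above. */
typedef struct ValidatorModuleState ValidatorModuleState;
typedef struct ValidatorModuleResult ValidatorModuleResult;
typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
                                                       const char *token,
                                                       const char *role);
typedef struct OAuthValidatorCallbacks
{
    ValidatorStartupCB startup_cb;
    ValidatorShutdownCB shutdown_cb;
    ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

static ValidatorModuleResult *my_validate(ValidatorModuleState *state,
                                          const char *token, const char *role);

/* static const: the returned pointer must have server lifetime. */
static const OAuthValidatorCallbacks validator_callbacks = {
    .validate_cb = my_validate,     /* startup_cb/shutdown_cb are optional */
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &validator_callbacks;
}

static ValidatorModuleResult *
my_validate(ValidatorModuleState *state, const char *token, const char *role)
{
    (void) state; (void) token; (void) role;
    return NULL;                /* sketch only: signals an internal error */
}
```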
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its builtin flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
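The wire format of the client initial response, per RFC 7628, is the GS2 header followed by <literal>0x01</literal>-delimited key/value pairs. As a sketch (illustrative, not libpq code), building it for a known token looks like:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Build an OAUTHBEARER client initial response per RFC 7628:
 *   gs2-header ("n,,") kvsep "auth=Bearer <token>" kvsep kvsep
 * where kvsep is the byte 0x01.  Returns the message length, or -1 if
 * the buffer is too small.
 */
static int
build_initial_response(const char *token, char *buf, size_t len)
{
    int n = snprintf(buf, len, "n,,\x01" "auth=Bearer %s\x01\x01", token);

    if (n < 0 || (size_t) n >= len)
        return -1;
    return n;
}
```

A discovery connection would instead send an empty <literal>auth</literal> value, and the later dummy SASLResponse is the single byte <literal>0x01</literal>.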
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an error
+      <literal>status</literal> alongside a well-known URI and scopes that the
+      client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing the empty set (a single
+      <literal>0x01</literal> byte) to finish its half of the discovery
+      exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, the server
+      sends an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 1ceadb9a830..80a6b1d57d6 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,67 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports threadsafe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is threadsafe')
+      elif r.returncode() == 1
+        message('curl_global_init is not threadsafe')
+      else
+        message('curl_global_init failed; assuming not threadsafe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3044,6 +3105,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3720,6 +3785,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..6155d63a116
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,860 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(int code, Datum arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always
+	 * be the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Free the validation result from the validator module once we're done
+	 * with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	before_shmem_exit(shutdown_validator_library, 0);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked via before_shmem_exit().
+ */
+static void
+shutdown_validator_library(int code, Datum arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
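For illustration, an HBA entry using the new method might look like the following. The issuer URL, scope, and validator name here are made up; the option names are the ones handled by parse_hba_auth_opt() above.

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth  issuer="https://oauth.example.org" scope="openid" validator="example_validator"
```

Since `issuer` and `scope` are enforced as mandatory in parse_hba_line(), omitting either of them rejects the line at load time.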
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index ce7534d4d23..7747a09c2a9 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4832,6 +4833,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index c40b7a3121e..9184ea0f1d4 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
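To tie the sample together: the validator named in the HBA line must also appear in the new GUC, which is list-quoted, superuser-only, and reloadable on SIGHUP. A hypothetical configuration (module name invented) would be:

```
# postgresql.conf
oauth_validator_libraries = 'example_validator'
```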
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..c04ee38d086 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..2407200ea97
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2635 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+#ifdef HAVE_SYS_EPOLL_H
+	int			timerfd;		/* a timerfd for signaling async timeouts */
+#endif
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+#ifdef HAVE_SYS_EPOLL_H
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+#endif
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 *
+		 * TODO: this code relies on assertions too much. We need to exit
+		 * sanely on internal logic errors, to avoid turning bugs into
+		 * vulnerabilities.
+		 */
+		Assert(!ctx->active);
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+	if (!ctx->nested)
+		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * This assumes that no target arrays can contain other arrays, which
+		 * we check in the array_start callback.
+		 */
+		Assert(ctx->nested == 2);
+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(ctx->nested == 1);
+			Assert(!*field->target.scalar);
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			Assert(ctx->nested == 2);
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length limited comparison and not compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
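The prefix-then-parameters rule above can be exercised in isolation. Below is a hypothetical standalone sketch (`content_type_matches()` is my name, and the actx error reporting is dropped), not the patch's actual helper:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <strings.h>			/* strncasecmp */

/*
 * Sketch of the check above: the expected type must match exactly, or be
 * followed only by optional whitespace (spaces/htabs) and a ';' that begins
 * the ignored media type parameters.
 */
static int
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	if (strncasecmp(content_type, type, type_len) != 0)
		return 0;

	for (size_t i = type_len; content_type[i]; i++)
	{
		if (content_type[i] == ';')
			return 1;			/* parameters follow; accept */
		if (content_type[i] != ' ' && content_type[i] != '\t')
			return 0;			/* junk after the prefix */
	}

	/* Accept an exact match; reject a bare whitespace tail. */
	return content_type[type_len] == '\0';
}
```

Note that, as in the patch, a trailing space with no semicolon (e.g. `"application/json "`) is rejected.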
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ *
+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
+ * code expiration time?
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(interval_str, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 1;				/* don't fall through in release builds */
+	}
+
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
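Since parse_interval() depends on actx state, the clamping arithmetic can be illustrated with a hypothetical standalone helper (`clamp_interval()` is my name, `debugging` stands in for `actx->debugging`, and the rounding avoids math.h to stay dependency-free):

```c
#include <assert.h>
#include <limits.h>

/*
 * Sketch of the clamping in parse_interval(): round fractional intervals up
 * to the next whole second, clamp to [1, INT_MAX], and let debug mode drop
 * the lower bound to zero for non-positive intervals.
 */
static int
clamp_interval(double parsed, int debugging)
{
	int			secs;

	if (parsed >= (double) INT_MAX)
		return INT_MAX;
	if (parsed <= 0)
		return debugging ? 0 : 1;

	/* round any fractional interval up, without needing ceil() */
	secs = (int) parsed;
	if ((double) secs < parsed)
		secs++;

	return secs;
}
```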
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - expires_in
+		 */
+
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - scope (only required if different than requested -- TODO check)
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * A timerfd is always part of the set when using epoll; it's just disabled
+ * when we're not using it.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer). Rather than continually
+ * adding and removing the timer, we keep it in the set at all times and just
+ * disarm it when it's not needed.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, timeout, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld ms: %m", timeout);
+		return false;
+	}
+#endif
+
+	return true;
+}
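The millisecond-to-itimerspec conversion above is easy to get wrong; here is the same split as a hypothetical portable sketch (a plain struct instead of the Linux `struct itimerspec`, so it compiles anywhere; names are mine):

```c
#include <assert.h>

struct timer_spec
{
	long		sec;
	long		nsec;
};

/*
 * Sketch of the conversion in set_timer(): a negative timeout disarms the
 * timer (all zeroes), zero becomes a 1 ns "immediate" expiry since timerfd
 * treats a zero itimerspec as disarmed, and positive values split into whole
 * seconds plus nanoseconds.
 */
static struct timer_spec
timeout_to_spec(long timeout_ms)
{
	struct timer_spec spec = {0, 0};

	if (timeout_ms < 0)
		return spec;			/* disarm */

	if (timeout_ms == 0)
	{
		spec.nsec = 1;			/* fire as soon as possible */
		return spec;
	}

	spec.sec = timeout_ms / 1000;
	spec.nsec = (timeout_ms % 1000) * 1000000;
	return spec;
}
```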
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * TODO: maybe just signal drive_request() to immediately call back in the
+	 * (timeout == 0) case?
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *const end = data + size;
+	const char *prefix;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call.
+	 */
+	while (data < end)
+	{
+		size_t		len = end - data;
+		char	   *eol = memchr(data, '\n', len);
+
+		if (eol)
+			len = eol - data + 1;
+
+		/* TODO: handle unprintables */
+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
+				eol ? "" : "\n");
+
+		data += len;
+	}
+
+	return 0;
+}
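The line-splitting loop in debug_callback() handles buffers that contain several headers per call, as well as a final fragment without a newline. A hypothetical sketch that counts the lines the callback would print (my helper, not part of the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of debug_callback()'s splitting: advance through the buffer one
 * newline-terminated chunk at a time, treating any trailing fragment without
 * a '\n' as its own line.
 */
static int
count_debug_lines(const char *data, size_t size)
{
	const char *end = data + size;
	int			lines = 0;

	while (data < end)
	{
		size_t		len = end - data;
		const char *eol = memchr(data, '\n', len);

		if (eol)
			len = eol - data + 1;

		lines++;
		data += len;
	}

	return lines;
}
```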
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	curl_version_info_data *curl_info;
+
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * Extract information about the libcurl we are linked against.
+	 */
+	curl_info = curl_version_info(CURLVERSION_NOW);
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+	if (!curl_info->ares_num)
+	{
+		/* No alternative resolver, TODO: warn about timeouts */
+	}
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * make sure the options stay in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/* TODO: would anyone use this in "real" situations, or just testing? */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
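The `%20`-to-`+` search-and-replace above can be shown without curl or PQExpBuffer. A hypothetical sketch over plain malloc'd strings (my names, and note the output is never longer than the input, so one allocation suffices):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of the replacement loop in append_urlencoded(): rewrite each "%20"
 * in an already percent-encoded string as "+". Returns a malloc'd copy, or
 * NULL on allocation failure.
 */
static char *
plus_for_percent20(const char *escaped)
{
	char	   *out = malloc(strlen(escaped) + 1);
	char	   *dst = out;
	const char *match;

	if (!out)
		return NULL;

	while ((match = strstr(escaped, "%20")) != NULL)
	{
		/* Copy the unmatched portion, followed by the plus sign. */
		memcpy(dst, escaped, match - escaped);
		dst += match - escaped;
		*dst++ = '+';

		/* Keep searching after the match. */
		escaped = match + 3;
	}

	/* Copy the remainder of the string, including the terminator. */
	strcpy(dst, escaped);
	return out;
}
```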
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 *
+	 * TODO: Encoding support?
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
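+		 *
+		 * For example (with illustrative values only, not a real credential),
+		 * a client_id of "id" and a client_secret of "secret" produce the
+		 * header
+		 *
+		 *   Authorization: Basic aWQ6c2VjcmV0
+		 *
+		 * i.e. Basic base64("id:secret").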
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
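+ *
+ * The request is a urlencoded POST body along these lines (values are
+ * illustrative only; client credentials may instead travel in a Basic auth
+ * header, per add_client_identification()):
+ *
+ *     POST /device_authorization HTTP/1.1
+ *     Content-Type: application/x-www-form-urlencoded
+ *
+ *     client_id=f02c6361-0635&scope=openid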
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Sec. 3.2, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
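+ *
+ * Each poll is a urlencoded POST body along these lines (values are
+ * illustrative only):
+ *
+ *     grant_type=urn:ietf:params:oauth:grant-type:device_code
+ *       &device_code=GmRhm...
+ *       &client_id=f02c6361-0635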
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that use 403 for
+	 * error returns, which would violate the specification. For now we stick
+	 * to the specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
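+	 *
+	 * While the user has not yet approved the request, the provider keeps
+	 * responding with, e.g.:
+	 *
+	 *     { "error": "authorization_pending" }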
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either
+ * via the PQauthDataHook, or by a message on standard error if the hook
+ * declines to handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		/* TODO: optional fields */
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * threadsafe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
+								"\tCurl initialization was reported threadsafe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
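+ *
+ * Roughly:
+ *
+ *     INIT -> DISCOVERY -> DEVICE_AUTHORIZATION -> TOKEN_REQUEST -> (token)
+ *                                                       |    ^
+ *                                                       v    |
+ *                                                   WAIT_INTERVAL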
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+#ifdef HAVE_SYS_EPOLL_H
+		actx->timerfd = -1;
+#endif
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				/* TODO check that the timer has expired */
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+#ifdef HAVE_SYS_EPOLL_H
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer. (This isn't possible for kqueue.)
+				 */
+				conn->altsock = actx->timerfd;
+#endif
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..cc53e2bdd1a
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1141 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
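+ *
+ * For example, a response carrying a token looks like this on the wire
+ * (using ^A for the \x01 kvsep; token shortened):
+ *
+ *     n,,^Aauth=Bearer vF9dft4qmTc...^A^A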
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
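+ *
+ * For example (illustrative), both
+ *
+ *     https://example.com/.well-known/openid-configuration
+ *     https://example.com/.well-known/oauth-authorization-server
+ *
+ * correspond to the issuer identifier "https://example.com".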
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..5f8d608261e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,83 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d4..0c2ccc75a63 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module providing a server-side OAuth token validation callback
+ *	  that always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..12fe70c990b
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,264 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	conn = PQconnectdb(conninfo);
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "Connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..80f52585896
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,551 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
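(Aside, not part of the patch: one of the failure modes exercised above, "slow_down interval overflow", comes from RFC 8628's polling rules — a device-flow client waits `interval` seconds between token requests, and each `slow_down` error adds five seconds to that interval, so a misbehaving server can try to push the value past integer range. A minimal sketch of the client-side rule being tested:)

```python
def next_interval(interval: int, error: str) -> int:
    """Polling backoff per RFC 8628: "authorization_pending" keeps the
    current interval; "slow_down" increases it by five seconds."""
    if error == "slow_down":
        return interval + 5
    return interval

# Starting from a 2-second interval, one slow_down pushes it to 7 seconds;
# a plain authorization_pending leaves it unchanged.
interval = next_interval(2, "slow_down")
print(interval)  # prints 7
```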
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..95cccf90dd8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
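(Aside, not part of the patch: the port handshake in run() relies on the daemon closing its stdout after printing the port, so the Perl side can simply slurp to EOF instead of counting bytes. The same pattern can be sketched in pure Python, with a stand-in child in place of the real t/oauth_server.py and a made-up port number:)

```python
import subprocess
import sys
import textwrap

# The child mimics oauth_server.py's startup: print the port, close stdout
# (CPython opens the standard streams with closefd=False, so the underlying
# fd must be closed separately), then keep running.
child_src = textwrap.dedent("""
    import os, sys, time
    print(54321)
    fd = sys.stdout.fileno()
    sys.stdout.close()
    os.close(fd)
    time.sleep(60)  # stand-in for serve_forever()
""")

proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stdout=subprocess.PIPE)

# read() returns as soon as the child closes its end of the pipe, even
# though the child itself is still alive.
port = proc.stdout.read().strip()
proc.terminate()
proc.wait()
print(port.decode())  # prints "54321"
```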
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..8ec09102027
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, str]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
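(Aside, not part of the patch: for the "/param/" issuer, do_POST() expects the whole set of test parameters to arrive Base64-encoded as JSON inside the client_id field; the Perl tests build this through their connstr() helper, which is defined outside this excerpt. A sketch of the encoding side, with example parameter names, round-tripped the way the server decodes it:)

```python
import base64
import json

def encode_test_params(**params) -> str:
    """Pack test parameters into a client_id that the mock server can
    recover with base64.b64decode() + json.loads(), as do_POST() does."""
    return base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")

client_id = encode_test_params(stage="token", error_code="invalid_grant")

# Round-trip exactly as the server's parameterized mode does.
decoded = json.loads(base64.b64decode(client_id))
print(decoded)  # prints {'stage': 'token', 'error_code': 'invalid_grant'}
```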
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..bf94f091def
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93dec..f3e3592eb77 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3089,6 +3097,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3485,6 +3495,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

v47-0002-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From 483129c1ca931f60b76ef93bf2f646cea2f00568 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 6 Feb 2025 13:16:04 -0800
Subject: [PATCH v47 02/11] fixup! Add OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 35 +++++++++++++++++------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 2407200ea97..eeddace7060 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -973,11 +973,20 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
 		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
 
-		/*
-		 * The following fields are technically REQUIRED, but we don't use
-		 * them anywhere yet:
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
 		 *
-		 * - scope (only required if different than requested -- TODO check)
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
 		 */
 
 		{0},
@@ -1252,8 +1261,11 @@ register_timer(CURLM *curlm, long timeout, void *ctx)
 	struct async_ctx *actx = ctx;
 
 	/*
-	 * TODO: maybe just signal drive_request() to immediately call back in the
-	 * (timeout == 0) case?
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
 	 */
 	if (!set_timer(actx, timeout))
 		return -1;				/* actx_error already called */
@@ -1415,7 +1427,14 @@ setup_curl_handles(struct async_ctx *actx)
 		CHECK_SETOPT(actx, popt, protos, return false);
 	}
 
-	/* TODO: would anyone use this in "real" situations, or just testing? */
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
 	if (actx->debugging)
 	{
 		const char *env;
@@ -1824,8 +1843,6 @@ check_issuer(struct async_ctx *actx, PGconn *conn)
 	 *    of the authorization server where the authorization request was
 	 *    sent to. This comparison MUST use simple string comparison as defined
 	 *    in Section 6.2.1 of [RFC3986].
-	 *
-	 * TODO: Encoding support?
 	 */
 	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
 	{
-- 
2.34.1
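The issuer check in 0002 leans on RFC 3986's "simple string comparison," which is byte-for-byte equality with no URI normalization. A hypothetical Python sketch of what that implies (the issuer values below are made up):

```python
def issuer_matches(expected: str, received: str) -> bool:
    # RFC 3986 Sec. 6.2.1 "simple string comparison": plain equality,
    # with no syntax-based normalization of the URIs.
    return expected == received

assert issuer_matches("https://issuer.example", "https://issuer.example")

# A trailing slash or a different scheme case is a mismatch under
# simple string comparison, even though the URIs are equivalent after
# normalization:
assert not issuer_matches("https://issuer.example", "https://issuer.example/")
assert not issuer_matches("https://issuer.example", "HTTPS://issuer.example")
```

One consequence is that the configured issuer must be spelled exactly as the provider publishes it; "close enough" URIs will fail the check by design.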

v47-0003-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From 75d98784dedc6d5e5468bd7d2c63322ff83c09d7 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 4 Feb 2025 17:00:47 -0800
Subject: [PATCH v47 03/11] fixup! Add OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index eeddace7060..dc47a7bdf11 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1239,7 +1239,7 @@ set_timer(struct async_ctx *actx, long timeout)
 #ifdef HAVE_SYS_EVENT_H
 	struct kevent ev;
 
-	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
 		   0, timeout, 0);
 	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
 	{
-- 
2.34.1
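For context on 0003's one-liner: a kqueue EVFILT_TIMER is periodic by default, so without EV_ONESHOT it re-arms after every expiry and keeps waking the multiplexer. A toy Python model of the difference (illustrative only; no kqueue involved):

```python
class Timer:
    """Minimal model of periodic vs. one-shot timer semantics."""

    def __init__(self, oneshot: bool):
        self.oneshot = oneshot
        self.armed = True
        self.fired = 0

    def tick(self):
        # One timeout period elapses.
        if self.armed:
            self.fired += 1
            if self.oneshot:
                self.armed = False  # EV_ONESHOT: fire once, then disarm

periodic, oneshot = Timer(oneshot=False), Timer(oneshot=True)
for _ in range(3):
    periodic.tick()
    oneshot.tick()

assert periodic.fired == 3  # keeps firing and waking the caller
assert oneshot.fired == 1   # what a per-request timeout wants
```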

v47-0004-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From fd60ceb4c849c2a13b0201b57753506b60718e6e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 6 Feb 2025 20:22:03 -0800
Subject: [PATCH v47 04/11] fixup! Add OAUTHBEARER SASL mechanism

---
 src/backend/libpq/auth-oauth.c               | 13 +++++++++----
 src/test/modules/oauth_validator/validator.c |  2 +-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 6155d63a116..d910cbcb161 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -39,7 +39,7 @@ static int	oauth_exchange(void *opaq, const char *input, int inputlen,
 						   char **output, int *outputlen, const char **logdetail);
 
 static void load_validator_library(const char *libname);
-static void shutdown_validator_library(int code, Datum arg);
+static void shutdown_validator_library(void *arg);
 
 static ValidatorModuleState *validator_module_state;
 static const OAuthValidatorCallbacks *ValidatorCallbacks;
@@ -737,6 +737,7 @@ static void
 load_validator_library(const char *libname)
 {
 	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
 
 	Assert(libname && *libname);
 
@@ -761,15 +762,19 @@ load_validator_library(const char *libname)
 	if (ValidatorCallbacks->startup_cb != NULL)
 		ValidatorCallbacks->startup_cb(validator_module_state);
 
-	before_shmem_exit(shutdown_validator_library, 0);
+	/* Shut down the library before cleaning up its state. */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
 }
 
 /*
  * Call the validator module's shutdown callback, if one is provided. This is
- * invoked via before_shmem_exit().
+ * invoked during memory context reset.
  */
 static void
-shutdown_validator_library(int code, Datum arg)
+shutdown_validator_library(void *arg)
 {
 	if (ValidatorCallbacks->shutdown_cb != NULL)
 		ValidatorCallbacks->shutdown_cb(validator_module_state);
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index bf94f091def..ef9bbb2866f 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -100,7 +100,7 @@ validator_shutdown(ValidatorModuleState *state)
 {
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
-		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
 			 state->private_data);
 }
 
-- 
2.34.1
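0004 moves validator shutdown from before_shmem_exit() to a memory context reset callback, relying on the guarantee that reset callbacks run (in reverse registration order) before the context's memory is released, so the shutdown callback still sees its state intact. A rough Python analogy, with invented names:

```python
class MemoryContext:
    """Toy stand-in for a PostgreSQL memory context."""

    def __init__(self):
        self.allocations = {}
        self._callbacks = []

    def register_reset_callback(self, func, arg=None):
        self._callbacks.append((func, arg))

    def reset(self):
        # Callbacks fire first, most recently registered first...
        for func, arg in reversed(self._callbacks):
            func(arg)
        self._callbacks.clear()
        # ...and only then is the memory released.
        self.allocations.clear()

events = []
ctx = MemoryContext()
ctx.allocations["validator_state"] = object()

def shutdown_validator(_arg):
    # Runs during reset, before allocations are torn down.
    events.append("state intact" if "validator_state" in ctx.allocations
                  else "state gone")

ctx.register_reset_callback(shutdown_validator)
ctx.reset()
assert events == ["state intact"]
```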

v47-0005-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From 595362ef2c16bbca684a5d8e9fd6c416894444b0 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 4 Feb 2025 11:37:44 -0800
Subject: [PATCH v47 05/11] fixup! Add OAUTHBEARER SASL mechanism

---
 config/programs.m4                        | 23 ++++++++++
 configure                                 | 53 +++++++++++++++++++++++
 meson.build                               | 34 +++++++++++++++
 src/interfaces/libpq/fe-auth-oauth-curl.c | 15 ++-----
 4 files changed, 114 insertions(+), 11 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 86a3750f9e5..ead427046f5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -315,4 +315,27 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
     AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
               [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
   fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 33422d24112..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -12493,6 +12493,59 @@ $as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
 
   fi
 
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
 fi
 
 if test "$with_gssapi" = yes ; then
diff --git a/meson.build b/meson.build
index 80a6b1d57d6..96e5f0f6434 100644
--- a/meson.build
+++ b/meson.build
@@ -907,6 +907,40 @@ if not libcurlopt.disabled()
     if libcurl_threadsafe_init
       cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
     endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
   endif
 
 else
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index dc47a7bdf11..b6255834f00 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1339,8 +1339,6 @@ debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
 static bool
 setup_curl_handles(struct async_ctx *actx)
 {
-	curl_version_info_data *curl_info;
-
 	/*
 	 * Create our multi handle. This encapsulates the entire conversation with
 	 * libcurl for this connection.
@@ -1353,11 +1351,6 @@ setup_curl_handles(struct async_ctx *actx)
 		return false;
 	}
 
-	/*
-	 * Extract information about the libcurl we are linked against.
-	 */
-	curl_info = curl_version_info(CURLVERSION_NOW);
-
 	/*
 	 * The multi handle tells us what to wait on using two callbacks. These
 	 * will manipulate actx->mux as needed.
@@ -1382,12 +1375,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
 	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
 	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
 	 */
 	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
-	if (!curl_info->ares_num)
-	{
-		/* No alternative resolver, TODO: warn about timeouts */
-	}
 
 	if (actx->debugging)
 	{
-- 
2.34.1
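The probe added in 0005 just checks one capability bit in libcurl's feature bitmask (the real code calls curl_version_info(CURLVERSION_NOW) and tests info->features). A minimal Python rendering of the same test; the flag value is an assumption matching curl.h's CURL_VERSION_ASYNCHDNS:

```python
CURL_VERSION_ASYNCHDNS = 1 << 7  # assumed value from curl.h

def has_async_dns(features: int) -> bool:
    """True if the reported feature bitmask includes AsynchDNS."""
    return bool(features & CURL_VERSION_ASYNCHDNS)

# A build with c-ares or threaded-resolver support sets the bit:
assert has_async_dns(CURL_VERSION_ASYNCHDNS | (1 << 2))
# Without it, the configure-time warning about DNS timeouts fires:
assert not has_async_dns(1 << 2)
```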

v47-0006-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From f73c042adc97544c10542c4818afb703cc20ed7a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 3 Feb 2025 16:19:25 +0100
Subject: [PATCH v47 06/11] fixup! Add OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 74 ++++++++++++++++++-----
 src/interfaces/libpq/fe-auth-oauth.c      | 14 ++++-
 2 files changed, 73 insertions(+), 15 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index b6255834f00..6c8333789f1 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -461,12 +461,15 @@ oauth_json_object_field_start(void *state, char *name, bool isnull)
 		/*
 		 * We should never start parsing a new field while a previous one is
 		 * still active.
-		 *
-		 * TODO: this code relies on assertions too much. We need to exit
-		 * sanely on internal logic errors, to avoid turning bugs into
-		 * vulnerabilities.
 		 */
-		Assert(!ctx->active);
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field '%s' before field '%s' was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
 
 		while (field->name)
 		{
@@ -506,8 +509,19 @@ oauth_json_object_end(void *state)
 	struct oauth_parse *ctx = state;
 
 	--ctx->nested;
-	if (!ctx->nested)
-		Assert(!ctx->active);	/* all fields should be fully processed */
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field '%s' still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
 
 	return JSON_SUCCESS;
 }
@@ -546,11 +560,18 @@ oauth_json_array_end(void *state)
 	if (ctx->active)
 	{
 		/*
-		 * This assumes that no target arrays can contain other arrays, which
-		 * we check in the array_start callback.
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
 		 */
-		Assert(ctx->nested == 2);
-		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field '%s'",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
 
 		ctx->active = NULL;
 	}
@@ -597,8 +618,25 @@ oauth_json_scalar(void *state, char *token, JsonTokenType type)
 
 		if (field->type != JSON_TOKEN_ARRAY_START)
 		{
-			Assert(ctx->nested == 1);
-			Assert(!*field->target.scalar);
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field '%s' would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
 
 			*field->target.scalar = strdup(token);
 			if (!*field->target.scalar)
@@ -612,7 +650,15 @@ oauth_json_scalar(void *state, char *token, JsonTokenType type)
 		{
 			struct curl_slist *temp;
 
-			Assert(ctx->nested == 2);
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
 
 			/* Note that curl_slist_append() makes a copy of the token. */
 			temp = curl_slist_append(*field->target.array, token);
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cc53e2bdd1a..8beae9604c7 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -208,6 +208,7 @@ oauth_json_object_field_start(void *state, char *name, bool isnull)
 {
 	struct json_ctx *ctx = state;
 
+	/* Only top-level keys are considered. */
 	if (ctx->nested == 1)
 	{
 		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
@@ -264,7 +265,18 @@ oauth_json_scalar(void *state, char *token, JsonTokenType type)
 
 	if (ctx->target_field)
 	{
-		Assert(ctx->nested == 1);
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert and don't continue any further for production builds.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,	/* don't bother translating */
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
 
 		/*
 		 * We don't allow duplicate field names; error out if the target has
-- 
2.34.1
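0006 replaces bare Assert()s in the JSON callbacks with a belt-and-suspenders pattern: keep the assertion so debug builds crash loudly, but also report a clean parse failure so a logic bug can't cascade into a vulnerability in production. A sketch of the pattern in Python (names are illustrative):

```python
DEBUG = False  # stands in for a cassert-enabled build

class ParseError(Exception):
    pass

def check_internal(condition: bool, message: str) -> None:
    if not condition:
        # Developers get a loud crash; production gets a clean failure.
        assert not DEBUG, message
        raise ParseError("internal error: " + message)

caught = None
try:
    check_internal(False, "field still active at end of object")
except ParseError as e:
    caught = str(e)

assert caught == "internal error: field still active at end of object"
```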

v47-0007-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From 298839b69f04d01ec5ba2e77c3b9f7d351384809 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 6 Feb 2025 14:56:04 -0800
Subject: [PATCH v47 07/11] fixup! Add OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 48 +++++++++++++++++------
 1 file changed, 37 insertions(+), 11 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 6c8333789f1..993ca3bdab9 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1329,8 +1329,9 @@ static int
 debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
 			   void *clientp)
 {
-	const char *const end = data + size;
 	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
 
 	/* Prefixes are modeled off of the default libcurl debug output. */
 	switch (type)
@@ -1353,25 +1354,50 @@ debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
 			return 0;
 	}
 
+	initPQExpBuffer(&buf);
+
 	/*
 	 * Split the output into lines for readability; sometimes multiple headers
-	 * are included in a single call.
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
 	 */
-	while (data < end)
+	for (int i = 0; i < size; i++)
 	{
-		size_t		len = end - data;
-		char	   *eol = memchr(data, '\n', len);
+		char		c = data[i];
 
-		if (eol)
-			len = eol - data + 1;
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "%s ", prefix);
+			printed_prefix = true;
+		}
 
-		/* TODO: handle unprintables */
-		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
-				eol ? "" : "\n");
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's not
+			 * helpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", c);
 
-		data += len;
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
 	}
 
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
 	return 0;
 }
 
-- 
2.34.1
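The reworked debug_callback in 0007 behaves roughly like this Python sketch for TEXT/HEADER trace data: printable ASCII passes through, CR/LF are dropped as noise (LF still ends the prefixed line), and everything else gets a basic <XX> escape:

```python
def escape_debug(data: bytes, prefix: str = "*") -> str:
    """Approximation of the C loop for header/text trace types."""
    out = []
    at_line_start = True
    for c in data:
        if at_line_start:
            out.append(prefix + " ")
            at_line_start = False
        if 0x20 <= c <= 0x7E:
            out.append(chr(c))            # printable ASCII
        elif c in (0x0D, 0x0A):
            pass                          # <0D><0A> would be noise here
        else:
            out.append("<%02X>" % c)      # escape unprintables
        if c == 0x0A:
            out.append("\n")              # LF finishes the line
            at_line_start = True
    if not at_line_start:
        out.append("\n")                  # finish any dangling line
    return "".join(out)

assert escape_debug(b"HTTP/1.1 200 OK\r\n") == "* HTTP/1.1 200 OK\n"
assert escape_debug(b"a\x00b") == "* a<00>b\n"
```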

v47-0008-fixup-Add-OAUTHBEARER-SASL-mechanism.patch
From 1cf48a8f83505a0cce1f94f8b0b563d4dcdd547a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 5 Feb 2025 15:49:01 -0800
Subject: [PATCH v47 08/11] fixup! Add OAUTHBEARER SASL mechanism

---
 doc/src/sgml/libpq.sgml                       | 13 +++
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 86 ++++++++++++++-----
 src/interfaces/libpq/libpq-fe.h               |  3 +
 .../modules/oauth_validator/t/oauth_server.py |  2 +-
 4 files changed, 80 insertions(+), 24 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 9a69ffbc5b3..ddfc2a27c50 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10228,6 +10228,9 @@ typedef struct _PGpromptOAuthDevice
 {
     const char *verification_uri;   /* verification URI to visit */
     const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
 } PGpromptOAuthDevice;
 </synopsis>
         </para>
@@ -10246,6 +10249,16 @@ typedef struct _PGpromptOAuthDevice
          <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
          flow</link>, this authdata type will not be used.
         </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the code will be
+         manually confirmed by the provider, and the URL lets users continue
+         even if they can't use the non-textual method. Review the RFC's
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">notes
+         on non-textual verification</ulink>.
+        </para>
        </listitem>
       </varlistentry>
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 993ca3bdab9..02c5b50afcd 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -76,9 +76,12 @@ struct device_authz
 	char	   *device_code;
 	char	   *user_code;
 	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
 	char	   *interval_str;
 
 	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
 	int			interval;
 };
 
@@ -88,6 +91,8 @@ free_device_authz(struct device_authz *authz)
 	free(authz->device_code);
 	free(authz->user_code);
 	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
 	free(authz->interval_str);
 }
 
@@ -853,20 +858,12 @@ parse_provider(struct async_ctx *actx, struct provider *provider)
 }
 
 /*
- * Parses the "interval" JSON number, corresponding to the number of seconds to
- * wait between token endpoint requests.
- *
- * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
- * practicality, round any fractional intervals up to the next second, and clamp
- * the result at a minimum of one. (Zero-second intervals would result in an
- * expensive network polling loop.) Tests may remove the lower bound with
- * PGOAUTHDEBUG, for improved performance.
- *
- * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
- * code expiration time?
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
  */
-static int
-parse_interval(struct async_ctx *actx, const char *interval_str)
+static double
+parse_json_number(const char *s)
 {
 	double		parsed;
 	int			cnt;
@@ -875,7 +872,7 @@ parse_interval(struct async_ctx *actx, const char *interval_str)
 	 * The JSON lexer has already validated the number, which is stricter than
 	 * the %f format, so we should be good to use sscanf().
 	 */
-	cnt = sscanf(interval_str, "%lf", &parsed);
+	cnt = sscanf(s, "%lf", &parsed);
 
 	if (cnt != 1)
 	{
@@ -884,9 +881,28 @@ parse_interval(struct async_ctx *actx, const char *interval_str)
 		 * either way a developer needs to take a look.
 		 */
 		Assert(cnt == 1);
-		return 1;				/* don't fall through in release builds */
+		return 0;
 	}
 
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
 	parsed = ceil(parsed);
 
 	if (parsed < 1)
@@ -898,6 +914,31 @@ parse_interval(struct async_ctx *actx, const char *interval_str)
 	return parsed;
 }
 
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = round(parsed);
+
+	if (INT_MAX <= parsed)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
 /*
  * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
  */
@@ -908,6 +949,7 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
 		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
 		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
 
 		/*
 		 * Some services (Google, Azure) spell verification_uri differently.
@@ -915,13 +957,7 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		 */
 		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
 
-		/*
-		 * The following fields are technically REQUIRED, but we don't use
-		 * them anywhere yet:
-		 *
-		 * - expires_in
-		 */
-
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
 		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
 
 		{0},
@@ -945,6 +981,9 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		authz->interval = 5;
 	}
 
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
 	return true;
 }
 
@@ -2301,7 +2340,8 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 	PGpromptOAuthDevice prompt = {
 		.verification_uri = actx->authz.verification_uri,
 		.user_code = actx->authz.user_code,
-		/* TODO: optional fields */
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
 	};
 
 	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 5f8d608261e..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -733,6 +733,9 @@ typedef struct _PGpromptOAuthDevice
 {
 	const char *verification_uri;	/* verification URI to visit */
 	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
 } PGpromptOAuthDevice;
 
 /* for PGoauthBearerRequest.async() */
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
index 8ec09102027..4faf3323d38 100755
--- a/src/test/modules/oauth_validator/t/oauth_server.py
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -298,7 +298,7 @@ class OAuthHandler(http.server.BaseHTTPRequestHandler):
             "device_code": "postgres",
             "user_code": "postgresuser",
             self._uri_spelling: uri,
-            "expires-in": 5,
+            "expires_in": 5,
             **self._response_padding,
         }
 
-- 
2.34.1
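
For reference, the round-and-clamp behavior of the new parse_expires_in() can
be sketched outside of C. Below is a minimal Python model of the same
semantics; the function name and the 32-bit int range are assumptions of the
sketch, not libpq API:

```python
# Python model (not libpq code) of parse_expires_in()'s semantics:
# round the JSON number to the nearest integer, then clamp to the C
# int range. (Note: Python's round() uses banker's rounding on exact
# .5 halfway cases, unlike C round(); that doesn't matter here.)
INT_MAX = 2**31 - 1
INT_MIN = -(2**31)

def parse_expires_in(expires_in: float) -> int:
    parsed = round(expires_in)
    if parsed >= INT_MAX:
        return INT_MAX
    if parsed <= INT_MIN:
        return INT_MIN
    return parsed

assert parse_expires_in(5.4) == 5
assert parse_expires_in(1e300) == INT_MAX
assert parse_expires_in(-1e300) == INT_MIN
```

The clamp keeps a hostile or buggy server from overflowing the int that is
handed to the PQAUTHDATA_PROMPT_OAUTH_DEVICE hook.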

v47-0009-fixup-Add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v47-0009-fixup-Add-OAUTHBEARER-SASL-mechanism.patchDownload
From 27135876559a1c5b5cac7e68c43403f257f391fd Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 4 Feb 2025 17:00:47 -0800
Subject: [PATCH v47 09/11] fixup! Add OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 131 +++++++++++++++---
 .../oauth_validator/oauth_hook_client.c       |  35 ++++-
 .../modules/oauth_validator/t/001_server.pl   |  15 ++
 3 files changed, 159 insertions(+), 22 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 02c5b50afcd..96c5096e4ca 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -167,9 +167,7 @@ struct async_ctx
 {
 	enum OAuthStep step;		/* where are we in the flow? */
 
-#ifdef HAVE_SYS_EPOLL_H
-	int			timerfd;		/* a timerfd for signaling async timeouts */
-#endif
+	int			timerfd;		/* descriptor for signaling async timeouts */
 	pgsocket	mux;			/* the multiplexer socket containing all
 								 * descriptors tracked by libcurl, plus the
 								 * timerfd */
@@ -275,10 +273,8 @@ free_async_ctx(PGconn *conn, struct async_ctx *actx)
 
 	if (actx->mux != PGINVALID_SOCKET)
 		close(actx->mux);
-#ifdef HAVE_SYS_EPOLL_H
 	if (actx->timerfd >= 0)
 		close(actx->timerfd);
-#endif
 
 	free(actx);
 }
@@ -1089,8 +1085,9 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
  * select() on instead of the Postgres socket during OAuth negotiation.
  *
  * This is just an epoll set or kqueue abstracting multiple other descriptors.
- * A timerfd is always part of the set when using epoll; it's just disabled
- * when we're not using it.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
  */
 static bool
 setup_multiplexer(struct async_ctx *actx)
@@ -1128,6 +1125,19 @@ setup_multiplexer(struct async_ctx *actx)
 		return false;
 	}
 
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
 	return true;
 #endif
 
@@ -1286,9 +1296,12 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 
 /*
  * Enables or disables the timer in the multiplexer set. The timeout value is
- * in milliseconds (negative values disable the timer). Rather than continually
- * adding and removing the timer, we keep it in the set at all times and just
- * disarm it when it's not needed.
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
  */
 static bool
 set_timer(struct async_ctx *actx, long timeout)
@@ -1320,20 +1333,87 @@ set_timer(struct async_ctx *actx, long timeout)
 		actx_error(actx, "setting timerfd to %ld: %m", timeout);
 		return false;
 	}
+
+	return true;
 #endif
 #ifdef HAVE_SYS_EVENT_H
 	struct kevent ev;
 
+	/* Enable/disable the timer itself. */
 	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
 		   0, timeout, 0);
-	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
 	{
 		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
 		return false;
 	}
-#endif
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
 
 	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "getting timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "checking kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
 }
 
 /*
@@ -2510,9 +2590,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		}
 
 		actx->mux = PGINVALID_SOCKET;
-#ifdef HAVE_SYS_EPOLL_H
 		actx->timerfd = -1;
-#endif
 
 		/* Should we enable unsafe features? */
 		actx->debugging = oauth_unsafe_debugging_enabled();
@@ -2556,10 +2634,28 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 						/* not done yet */
 						return status;
 					}
+
+					break;
 				}
 
 			case OAUTH_STEP_WAIT_INTERVAL:
-				/* TODO check that the timer has expired */
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				{
+					int			expired = timer_expired(actx);
+
+					if (expired < 0)
+						goto error_return;
+					if (expired == 0)
+					{
+						conn->altsock = actx->timerfd;
+						return PGRES_POLLING_READING;
+					}
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
 				break;
 		}
 
@@ -2633,15 +2729,12 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				if (!set_timer(actx, actx->authz.interval * 1000))
 					goto error_return;
 
-#ifdef HAVE_SYS_EPOLL_H
-
 				/*
 				 * No Curl requests are running, so we can simplify by having
 				 * the client wait directly on the timerfd rather than the
-				 * multiplexer. (This isn't possible for kqueue.)
+				 * multiplexer.
 				 */
 				conn->altsock = actx->timerfd;
-#endif
 
 				actx->step = OAUTH_STEP_WAIT_INTERVAL;
 				actx->running = 1;
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
index 12fe70c990b..fc003030ff8 100644
--- a/src/test/modules/oauth_validator/oauth_hook_client.c
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -40,14 +40,16 @@ usage(char *argv[])
 	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
 	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
 		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
-	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
 	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
 	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
 }
 
 /* --options */
 static bool no_hook = false;
 static bool hang_forever = false;
+static bool stress_async = false;
 static const char *expected_uri = NULL;
 static const char *expected_scope = NULL;
 static const char *misbehave_mode = NULL;
@@ -65,6 +67,7 @@ main(int argc, char *argv[])
 		{"token", required_argument, NULL, 1003},
 		{"hang-forever", no_argument, NULL, 1004},
 		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
 		{0}
 	};
 
@@ -104,6 +107,10 @@ main(int argc, char *argv[])
 				misbehave_mode = optarg;
 				break;
 
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
 			default:
 				usage(argv);
 				return 1;
@@ -122,10 +129,32 @@ main(int argc, char *argv[])
 	PQsetAuthDataHook(handle_auth_data);
 
 	/* Connect. (All the actual work is in the hook.) */
-	conn = PQconnectdb(conninfo);
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
 	if (PQstatus(conn) != CONNECTION_OK)
 	{
-		fprintf(stderr, "Connection to database failed: %s\n",
+		fprintf(stderr, "connection to database failed: %s\n",
 				PQerrorMessage(conn));
 		PQfinish(conn);
 		return 1;
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 80f52585896..f0b918390fd 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -389,6 +389,21 @@ $node->connect_fails(
 	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
 );
 
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike($stderr, qr/connection to database failed/, "stress-async: stderr matches");
+
 #
 # This section of tests reconfigures the validator module itself, rather than
 # the OAuth server.
-- 
2.34.1
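
The set_timer()/timer_expired() contract above can be modeled without timerfd
or kqueue at all. Here is a minimal Python sketch using a monotonic clock; the
class name and the disarmed-timer behavior are choices of the sketch, not the
libpq implementation:

```python
# Minimal model (not libpq code) of the timer contract in the 0009 patch:
# set_timer(ms) arms a one-shot timer (a negative timeout disarms it, like
# EV_DELETE), and timer_expired() reports 1 once the deadline has passed,
# else 0.
import time

class OneShotTimer:
    def __init__(self):
        self.deadline = None        # None = disarmed

    def set_timer(self, timeout_ms):
        if timeout_ms < 0:
            self.deadline = None    # disarm; drop any pending expiration
        else:
            self.deadline = time.monotonic() + timeout_ms / 1000.0

    def timer_expired(self):
        # This sketch treats a disarmed timer as "not expired"; the real
        # code only queries the timer while it is armed.
        if self.deadline is None:
            return 0
        return int(time.monotonic() >= self.deadline)

t = OneShotTimer()
t.set_timer(10)                     # 10 ms, as in interval * 1000
time.sleep(0.05)
assert t.timer_expired() == 1       # deadline has passed
t.set_timer(-1)                     # disarm, like set_timer(actx, -1)
assert t.timer_expired() == 0
```

The point of the separate timer_expired() query is exactly what the
stress-async test exercises: a client that busy-loops on PQconnectPoll()
must not be allowed to send a token request before the interval is up.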

v47-0010-XXX-fix-libcurl-link-error.patchapplication/octet-stream; name=v47-0010-XXX-fix-libcurl-link-error.patchDownload
From d8c1f2980802d1bb7561bdd2c4662174b9e43087 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v47 10/11] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index c192a077701..3afea832bc9 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

v47-0011-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchapplication/octet-stream; name=v47-0011-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patchDownload
From dbf305d048966be086b4bf62c12aa4631c40ae88 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v47 11/11] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2663 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  118 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6444 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3afea832bc9..06efe5f9b0a 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -321,6 +321,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -405,8 +406,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against the 32-bit libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 96e5f0f6434..6e60c8d3dae 100644
--- a/meson.build
+++ b/meson.build
@@ -3458,6 +3458,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3626,6 +3629,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..236057cd99e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that's
+needed is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts PATH et al. back the way they were before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..20e72a404aa
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Returns a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802, as updated
+# for SHA-256 by RFC 7677: https://tools.ietf.org/html/rfc7677)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
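
The `h_i` helper above implements Hi() from RFC 5802, which the RFC itself notes is essentially PBKDF2 with HMAC as the PRF and a digest-sized output. As a standalone cross-check (this sketch is illustrative and not part of the patch), the stdlib's `hashlib.pbkdf2_hmac` should agree with it:

```python
# Standalone sketch: RFC 5802's Hi(str, salt, i) is PBKDF2 with HMAC-SHA-256
# and a 32-byte output, so hashlib can be used to verify the hand-rolled loop.
import hashlib
import hmac as stdlib_hmac


def hmac_256(key, data):
    return stdlib_hmac.new(key, data, hashlib.sha256).digest()


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def h_i(data, salt, i):
    # U1 = HMAC(str, salt || INT(1)); Hi = U1 XOR U2 XOR ... XOR Ui
    acc = last = hmac_256(data, salt + b"\x00\x00\x00\x01")
    for _ in range(i - 1):
        last = hmac_256(data, last)
        acc = xor(acc, last)
    return acc


assert h_i(b"secret", b"12345", 2) == hashlib.pbkdf2_hmac(
    "sha256", b"secret", b"12345", 2
)
```

The equivalence holds for any iteration count, which is why the test can get away with an unrealistically small `iterations = 2`.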
+
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..b8f260cf97a
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2663 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
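
For reference, the framing `get_auth_value()` picks apart is the OAUTHBEARER client initial response from RFC 7628, Sec. 3.1: a GS2 header, then ^A-separated key=value pairs, terminated by an empty pair. A hypothetical standalone round-trip (not part of the patch) looks like this:

```python
# Sketch of the OAUTHBEARER initial-response framing (RFC 7628, Sec. 3.1).
# build_initial_response() and parse_auth_value() are illustrative helpers,
# not names used by the patch under test.
def build_initial_response(token: bytes) -> bytes:
    # GS2 header "n,," (no channel binding, no authzid), one auth kvpair,
    # then the terminating ^A^A.
    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"


def parse_auth_value(data: bytes) -> bytes:
    kvpairs = data.split(b"\x01")
    assert kvpairs[0] == b"n,,"       # GS2 header
    assert kvpairs[2:] == [b"", b""]  # message ends with ^A^A
    key, value = kvpairs[1].split(b"=", 1)
    assert key == b"auth"
    return value


msg = build_initial_response(b"sometoken")
assert parse_auth_value(msg) == b"Bearer sometoken"
```

Splitting on the first `=` only is deliberate: a Bearer credential may itself contain `=` characters (e.g. base64 padding).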
+
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    By default, the client is expected to complete the entire handshake. Set
+    finish to False if the client should immediately disconnect when it receives
+    the error response.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("provider server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider server. The
+    server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+        ("verification_uri_complete", ctypes.c_char_p),
+        ("expires_in", ctypes.c_int),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # does an implementation have to be provided?
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected verification URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+        assert call.verification_uri_complete is None
+        assert call.expires_in == 5
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL mechanism. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for character
+# class definitions.
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and count the attempt.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Combines multiple alternative regexes into one. It's not very efficient,
+    but it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) is ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type is str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type is int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response used to fail the SASL exchange, containing a link
+    # to the discovery document for the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network if a test fails, an
+# invalid IPv4 address (256.256.256.256) is used as the hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 1024 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client must reply to an error challenge with a
+            # single dummy ^A (0x01) byte before the exchange can be terminated.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # (equivalent to INT_MAX in C's <limits.h>, assuming standard int widths)
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG")
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
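For context, the alternate well-known URI constructed in this test follows the RFC 8414 layout: for an issuer with no path component, the metadata document lives at issuer + "/.well-known/oauth-authorization-server" (OIDC instead uses "/.well-known/openid-configuration"). A small illustrative sketch of that construction, assuming a path-less issuer (discovery_uri is hypothetical, not part of the patch):

```python
from urllib.parse import urlparse

def discovery_uri(issuer, suffix="oauth-authorization-server"):
    # Assumes the issuer has no path component, as in the tests above;
    # issuers with paths need the RFC 8414 path-insertion rules instead.
    assert urlparse(issuer).path in ("", "/")
    return issuer.rstrip("/") + "/.well-known/" + suffix

print(discovery_uri("https://example.org"))
```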
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to pytest. We add one to request the
+    creation of a temporary Postgres instance for the server tests.

+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autoused fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
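As a quick sanity check on the bit layout: the major version occupies the high 16 bits and the minor version the low 16, which is how the familiar v3.0 magic number and the SSLRequest pseudo-version fall out. A standalone sketch mirroring the helper above:

```python
# Standalone copy of the patch's protocol() helper: major version in the
# high 16 bits, minor version in the low 16 bits.
def protocol(major, minor):
    return (major << 16) | minor

print(protocol(3, 0))        # v3.0, the familiar 196608 magic number
print(protocol(1234, 5679))  # the SSLRequest pseudo-version, 80877103
```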
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
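The Startup struct above encodes the same framing libpq uses on the wire: a self-inclusive int32 length, an int32 protocol code, then NUL-terminated key/value strings with a trailing NUL. A dependency-free sketch of that layout (build_startup is illustrative only, not part of the patch):

```python
import struct

# Hand-rolled v3 startup packet: int32 length (counting itself), int32
# protocol code, NUL-terminated key/value pairs, trailing NUL.
def build_startup(params, proto=(3 << 16)):
    payload = b""
    for k, v in params.items():
        payload += k.encode() + b"\x00" + v.encode() + b"\x00"
    payload += b"\x00"
    return struct.pack("!ii", len(payload) + 8, proto) + payload

pkt = build_startup({"user": "alice"})
```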
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
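The point of this wrapper is that an enum member compares (and hashes) as its underlying byte, so parsed values can be matched against either the named member or a raw bytes literal. A trimmed standalone sketch of that behavior:

```python
# Trimmed copy of the equality/hash semantics of EnumNamedByte above:
# a member is interchangeable with its raw byte value.
class EnumNamedByte:
    def __init__(self, val, name):
        self._val, self._name = val, name

    def __eq__(self, other):
        if isinstance(other, EnumNamedByte):
            other = other._val
        if not isinstance(other, bytes):
            return NotImplemented
        return self._val == other

    def __hash__(self):
        return hash(self._val)

member = EnumNamedByte(b"R", "AuthnRequest")
```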
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
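The table built above is a single bytes.maketrans mapping, applied later with bytes.translate; a minimal standalone sketch of the same idea:

```python
# Sketch of _hexdump_translation_map(): build a translate() table that
# maps unprintable ASCII and all non-ASCII bytes to b".".
unprintable = bytes(i for i in range(128) if not chr(i).isprintable())
unprintable += bytes(range(128, 256))
table = bytes.maketrans(unprintable, b"." * len(unprintable))

print(b"abc\x00\xff~".translate(table))
```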
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. packet_type is the pq3.types
+    member to assign to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send an SSLRequest, i.e. a startup packet with the TLS pseudo-version.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except ConnectionError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..415748b9a66
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,118 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
+											const char *token,
+											const char *role);
+
+static const OAuthValidatorCallbacks callbacks = {
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static ValidatorModuleResult *
+test_validate(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
+
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);	/* TODO: constify? */
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return res;
+}
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends the desired lines of text to a file on
+    disk. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and several users that use the oauth auth method. The
+    context object contains the dbname and user attributes as strings to be
+    used during connection, as well as the issuer and scope that have been set
+    in the HBA configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on the local machine, and that PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The size of the
+    generated token in bytes may be specified; if unset, a small 16-byte token
+    will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. Instead of a bearer token, the initial response's auth field
+    may be specified explicitly, to exercise corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    Pulls packets off the pq3 connection until a packet with the desired type
+    is found. If an error response is received first, a RuntimeError is raised
+    instead.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that sets up the test validator with the expected
+    behavior. The settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    setup_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with messed up issuer/scope settings, to pin the server
+    behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup suffix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1

#200Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#199)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 7 Feb 2025, at 06:48, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Thu, Feb 6, 2025 at 2:02 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Attached is a v46 which is v45 minus the now committed patch.

Thank you! Attached is v47, which creeps ever closer to the finish line.

Great, thanks! Below are a few quick comments after a first read-through and
compile/test cycle:

- 0005 warns at configure time if libcurl doesn't have a nonblocking
DNS implementation.

Is it really enough to do this at build time? A very small percentage of users
running this will also be building their own libpq, so the warning is lost on
them. That being said, I'm not entirely sure what else we could do (bleeping a
warning every time is clearly not user-friendly), so maybe this is a TODO in the
code?

- 0006 augments bare Asserts during client-side JSON parsing with code
that will fail gracefully in production builds as well.

+ oauth_json_set_error(ctx, /* don't bother translating */
With the project style format for translator comments this should be:

+ /* translator: xxx */
+ oauth_json_set_error(ctx,

- 0007 escapes binary data during the printing of libcurl debug
output. (If you're having a bad enough day to need the debug spray,
you're probably not in the mood for the sound of a hundred BELs.)

Does it make sense to prefix the printing in debug_callback() with some header
stating that the following data is debug output from curl and not postgres? I
have a feeling I'm Stockholm-syndromed by knowing the internals, so I'm not sure
if that would be helpful to someone not knowing the implementation details.

- 0008 parses and passes through the expires_in and optional
verification_uri_complete fields from the device endpoint to any
custom user prompt. (We do not use them ourselves, at the moment. But
after seeing some nice demos of RHEL/PAM/sssd support for device flow
QR codes at FOSDEM, I think we're definitely going to want to make
those available to devs.)

Aha, cool! I was a bit surprised not to find a definition of expires_in in RFC
8628, as in what happens if -1 is passed? 8628 seems, broadly speaking, to fall
into the category of "just don't do the wrong thing and all will be fine" =/.
Another question that comes to mind is how the receiver should interpret the
information, since it doesn't know when the device_code/user_code was generated
and so doesn't know how much of expires_in has already passed. (Which is
not something for us to solve, just a general observation.)

+   even if they can't use the non-textual method. Review the RFC's
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">notes
+   on non-textual verification</ulink>.
To align more with the rest of the documentation I think something along these
lines is better: "For more information, see section 3.3.1 in <ulink ..>RFC
8628</ulink>."
      Assert(cnt == 1);
-     return 1;                       /* don't fall through in release builds */
+     return 0;
While not introduced in this fixup patch, reading it again now I sort of think
we should make that Assert(false) to clearly indicate that we don't expect the
assertion ever to pass; we're just asking to error out since we already know
the failure condition holds.

- 0009 is gold-plating for the OAUTH_STEP_WAIT_INTERVAL state:

+ actx_error(actx, "failed to create timer kqueue: %m");
Maybe we should add a translator note explaining that kqueue should not be
translated since it's very easy to mistake it for "queue". Doing it on the
first string including kqueue should be enough I suppose.

--
Daniel Gustafsson

#201Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#200)
9 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Feb 7, 2025 at 12:12 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Is it really enough to do this at build time? A very small percentage of users
running this will also be building their own libpq so the warning is lost on
them. That being said, I'm not entirely sure what else we could do (bleeping a
warning every time is clearly not user-friendly) so maybe this is a TODO in the
code?

I've added a TODO back. At the moment, I don't have any good ideas; if
the user isn't building libpq, they're not going to be able to take
action on the warning anyway, and for many use cases they're probably
not going to care.

+ oauth_json_set_error(ctx, /* don't bother translating */
With the project style format for translator comments this should be:

+ /* translator: xxx */
+ oauth_json_set_error(ctx,

This comment was just meant to draw attention to the lack of
libpq_gettext(). Does it still need a translator note if we don't run
it through translation?

Does it make sense to prefix the printing in debug_callback() with some header
stating that the following data is debug output from curl and not postgres? I
have a feeling I'm stockholm syndromed by knowing the internals so I'm not sure
if that would be helpful to someone not knowing the implementation details.

Seems reasonable; I've added "[libcurl]" to the front.

Aha, cool! I was a bit surprised to not find a definition of expires_in in RFC
8628, as in what happens if -1 is passed? 8628 seems to broadly speaking fall
into the category of "just don't do the wrong thing and all will be fine" =/.

Yup. :(

Another question that comes to mind is how the receiver should interpret the
information since it doesn't know when the device_code/user_code was generated
so it doesn't know how much of expires_in has already passed. (Which is
not something for us to solve, just a general observation.)

And even if we passed the Date header value through from the server,
that'd do the wrong thing if the clocks are off. I think for now,
expires_in is likely to be a best-effort UX, in the vein of "hey,
maybe type faster if you don't want a timeout."

To align more with the rest of the documentation I think something along these
lines is better: "For more information, see section 3.3.1 in <ulink ..>RFC
8628</ulink>.

Done.

While not introduced in this fixup patch, reading it again now I sort of think
we should make that Assert(false) to clearly indicate that we don't think the
assertion will ever pass, we're just asking to error out since we already know
the failure condition holds.

Done.

Maybe we should add a translator note explaining that kqueue should not be
translated since it's very easy to mistake it for "queue". Doing it on the
first string including kqueue should be enough I suppose.

Done.

--

v48 is attached.

- 0001 contains all of the v47 fixups squashed into v47-0001.
- 0002 contains all the above feedback and rewrites two more commented TODOs.
- 0003 completes a couple of summary paragraphs in the documentation,
and makes it clear that the builtin flow is not currently supported on
Windows.
- 0004 gets a missed pgperltidy and explicitly skips unsupported tests
on Windows.
- 0005 cowardly pulls the MAX_OAUTH_RESPONSE_SIZE down to 256k.

- 0006 gives us additional levers to pull in the event that API or ABI
changes must be backported for security reasons:

Daniel and I talked at FOSDEM about wanting to have additional
guardrails on the server-side validator API. Ideally, we'd wait for
major version boundaries to change APIs, as per usual. But if any bugs
come to light that affect the security of the system, we may want to
have more control over the boundary between the server and the
validator. So I've added two features to the API.

The first is a magic number embedded in the OAuthValidatorCallbacks
struct. Should it ever be necessary to force a recompilation of
validator modules, that number can be bumped in an emergency to allow
the server to reject modules with an older ABI (or otherwise treat
them differently).

The second is state->sversion, added to the ValidatorModuleState
struct, which contains the PG_VERSION_NUM. This currently has no use,
but if there's ever a situation where the ValidatorModule* structs
need to gain new members within a stable release line, this would let
module developers make sense of the situation. It also provides an
easy way for modules to enforce a minimum minor version, for example
if there's a critical security bug in older versions that they'd
rather not deal with.

By adding these fields in addition to the existing module magic
machinery, we've probably doomed them to be unused cruft. But that
seems better than the reverse situation.

Thanks!

--Jacob

Attachments:

since-v47.diff.txttext/plain; charset=US-ASCII; name=since-v47.diff.txtDownload
 1:  6747b7cc795 !  1:  cf86e3bfbbc Add OAUTHBEARER SASL mechanism
    @@ config/programs.m4: AC_DEFUN([PGAC_CHECK_STRIP],
     +    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
     +              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
     +  fi
    ++
    ++  # Warn if a thread-friendly DNS resolver isn't built.
    ++  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
    ++  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
    ++#include <curl/curl.h>
    ++],[
    ++    curl_version_info_data *info;
    ++
    ++    if (curl_global_init(CURL_GLOBAL_ALL))
    ++        return -1;
    ++
    ++    info = curl_version_info(CURLVERSION_NOW);
    ++    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
    ++])],
    ++  [pgac_cv__libcurl_async_dns=yes],
    ++  [pgac_cv__libcurl_async_dns=no],
    ++  [pgac_cv__libcurl_async_dns=unknown])])
    ++  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
    ++    AC_MSG_WARN([
    ++*** The installed version of libcurl does not support asynchronous DNS
    ++*** lookups. Connection timeouts will not be honored during DNS resolution,
    ++*** which may lead to hangs in client programs.])
    ++  fi
     +])# PGAC_CHECK_LIBCURL
     
      ## configure ##
    @@ configure: fi
     +
     +  fi
     +
    ++  # Warn if a thread-friendly DNS resolver isn't built.
    ++  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
    ++$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
    ++if ${pgac_cv__libcurl_async_dns+:} false; then :
    ++  $as_echo_n "(cached) " >&6
    ++else
    ++  if test "$cross_compiling" = yes; then :
    ++  pgac_cv__libcurl_async_dns=unknown
    ++else
    ++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
    ++/* end confdefs.h.  */
    ++
    ++#include <curl/curl.h>
    ++
    ++int
    ++main ()
    ++{
    ++
    ++    curl_version_info_data *info;
    ++
    ++    if (curl_global_init(CURL_GLOBAL_ALL))
    ++        return -1;
    ++
    ++    info = curl_version_info(CURLVERSION_NOW);
    ++    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
    ++
    ++  ;
    ++  return 0;
    ++}
    ++_ACEOF
    ++if ac_fn_c_try_run "$LINENO"; then :
    ++  pgac_cv__libcurl_async_dns=yes
    ++else
    ++  pgac_cv__libcurl_async_dns=no
    ++fi
    ++rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
    ++  conftest.$ac_objext conftest.beam conftest.$ac_ext
    ++fi
    ++
    ++fi
    ++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
    ++$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
    ++  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
    ++    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
    ++*** The installed version of libcurl does not support asynchronous DNS
    ++*** lookups. Connection timeouts will not be honored during DNS resolution,
    ++*** which may lead to hangs in client programs." >&5
    ++$as_echo "$as_me: WARNING:
    ++*** The installed version of libcurl does not support asynchronous DNS
    ++*** lookups. Connection timeouts will not be honored during DNS resolution,
    ++*** which may lead to hangs in client programs." >&2;}
    ++  fi
    ++
     +fi
     +
      if test "$with_gssapi" = yes ; then
    @@ doc/src/sgml/libpq.sgml: void PQinitSSL(int do_ssl);
     +{
     +    const char *verification_uri;   /* verification URI to visit */
     +    const char *user_code;          /* user code to enter */
    ++    const char *verification_uri_complete;  /* optional combination of URI and
    ++                                             * code, or NULL */
    ++    int         expires_in;         /* seconds until user code expires */
     +} PGpromptOAuthDevice;
     +</synopsis>
     +        </para>
    @@ doc/src/sgml/libpq.sgml: void PQinitSSL(int do_ssl);
     +         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
     +         flow</link>, this authdata type will not be used.
     +        </para>
    ++        <para>
    ++         If a non-NULL <structfield>verification_uri_complete</structfield> is
    ++         provided, it may optionally be used for non-textual verification (for
    ++         example, by displaying a QR code). The URL and user code should still
    ++         be displayed to the end user in this case, because the code will be
    ++         manually confirmed by the provider, and the URL lets users continue
    ++         even if they can't use the non-textual method. Review the RFC's
    ++         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">notes
    ++         on non-textual verification</ulink>.
    ++        </para>
     +       </listitem>
     +      </varlistentry>
     +
    @@ meson.build: endif
     +    if libcurl_threadsafe_init
     +      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
     +    endif
    ++
    ++    # Warn if a thread-friendly DNS resolver isn't built.
    ++    libcurl_async_dns = false
    ++
    ++    if not meson.is_cross_build()
    ++      r = cc.run('''
    ++        #include <curl/curl.h>
    ++
    ++        int main(void)
    ++        {
    ++            curl_version_info_data *info;
    ++
    ++            if (curl_global_init(CURL_GLOBAL_ALL))
    ++                return -1;
    ++
    ++            info = curl_version_info(CURLVERSION_NOW);
    ++            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
    ++        }''',
    ++        name: 'test for curl support for asynchronous DNS',
    ++        dependencies: libcurl,
    ++      )
    ++
    ++      assert(r.compiled())
    ++      if r.returncode() == 0
    ++        libcurl_async_dns = true
    ++      endif
    ++    endif
    ++
    ++    if not libcurl_async_dns
    ++      warning('''
    ++*** The installed version of libcurl does not support asynchronous DNS
    ++*** lookups. Connection timeouts will not be honored during DNS resolution,
    ++*** which may lead to hangs in client programs.''')
    ++    endif
     +  endif
     +
     +else
    @@ src/backend/libpq/auth-oauth.c (new)
     +						   char **output, int *outputlen, const char **logdetail);
     +
     +static void load_validator_library(const char *libname);
    -+static void shutdown_validator_library(int code, Datum arg);
    ++static void shutdown_validator_library(void *arg);
     +
     +static ValidatorModuleState *validator_module_state;
     +static const OAuthValidatorCallbacks *ValidatorCallbacks;
    @@ src/backend/libpq/auth-oauth.c (new)
     +load_validator_library(const char *libname)
     +{
     +	OAuthValidatorModuleInit validator_init;
    ++	MemoryContextCallback *mcb;
     +
     +	Assert(libname && *libname);
     +
    @@ src/backend/libpq/auth-oauth.c (new)
     +	if (ValidatorCallbacks->startup_cb != NULL)
     +		ValidatorCallbacks->startup_cb(validator_module_state);
     +
    -+	before_shmem_exit(shutdown_validator_library, 0);
    ++	/* Shut down the library before cleaning up its state. */
    ++	mcb = palloc0(sizeof(*mcb));
    ++	mcb->func = shutdown_validator_library;
    ++
    ++	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
     +}
     +
     +/*
     + * Call the validator module's shutdown callback, if one is provided. This is
    -+ * invoked via before_shmem_exit().
    ++ * invoked during memory context reset.
     + */
     +static void
    -+shutdown_validator_library(int code, Datum arg)
    ++shutdown_validator_library(void *arg)
     +{
     +	if (ValidatorCallbacks->shutdown_cb != NULL)
     +		ValidatorCallbacks->shutdown_cb(validator_module_state);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	char	   *device_code;
     +	char	   *user_code;
     +	char	   *verification_uri;
    ++	char	   *verification_uri_complete;
    ++	char	   *expires_in_str;
     +	char	   *interval_str;
     +
     +	/* Fields below are parsed from the corresponding string above. */
    ++	int			expires_in;
     +	int			interval;
     +};
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	free(authz->device_code);
     +	free(authz->user_code);
     +	free(authz->verification_uri);
    ++	free(authz->verification_uri_complete);
    ++	free(authz->expires_in_str);
     +	free(authz->interval_str);
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +{
     +	enum OAuthStep step;		/* where are we in the flow? */
     +
    -+#ifdef HAVE_SYS_EPOLL_H
    -+	int			timerfd;		/* a timerfd for signaling async timeouts */
    -+#endif
    ++	int			timerfd;		/* descriptor for signaling async timeouts */
     +	pgsocket	mux;			/* the multiplexer socket containing all
     +								 * descriptors tracked by libcurl, plus the
     +								 * timerfd */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +	if (actx->mux != PGINVALID_SOCKET)
     +		close(actx->mux);
    -+#ifdef HAVE_SYS_EPOLL_H
     +	if (actx->timerfd >= 0)
     +		close(actx->timerfd);
    -+#endif
     +
     +	free(actx);
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		/*
     +		 * We should never start parsing a new field while a previous one is
     +		 * still active.
    -+		 *
    -+		 * TODO: this code relies on assertions too much. We need to exit
    -+		 * sanely on internal logic errors, to avoid turning bugs into
    -+		 * vulnerabilities.
     +		 */
    -+		Assert(!ctx->active);
    ++		if (ctx->active)
    ++		{
    ++			Assert(false);
    ++			oauth_parse_set_error(ctx,
    ++								  "internal error: started field '%s' before field '%s' was finished",
    ++								  name, ctx->active->name);
    ++			return JSON_SEM_ACTION_FAILED;
    ++		}
     +
     +		while (field->name)
     +		{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	struct oauth_parse *ctx = state;
     +
     +	--ctx->nested;
    -+	if (!ctx->nested)
    -+		Assert(!ctx->active);	/* all fields should be fully processed */
    ++
    ++	/*
    ++	 * All fields should be fully processed by the end of the top-level
    ++	 * object.
    ++	 */
    ++	if (!ctx->nested && ctx->active)
    ++	{
    ++		Assert(false);
    ++		oauth_parse_set_error(ctx,
    ++							  "internal error: field '%s' still active at end of object",
    ++							  ctx->active->name);
    ++		return JSON_SEM_ACTION_FAILED;
    ++	}
     +
     +	return JSON_SUCCESS;
     +}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	if (ctx->active)
     +	{
     +		/*
    -+		 * This assumes that no target arrays can contain other arrays, which
    -+		 * we check in the array_start callback.
    ++		 * Clear the target (which should be an array inside the top-level
    ++		 * object). For this to be safe, no target arrays can contain other
    ++		 * arrays; we check for that in the array_start callback.
     +		 */
    -+		Assert(ctx->nested == 2);
    -+		Assert(ctx->active->type == JSON_TOKEN_ARRAY_START);
    ++		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
    ++		{
    ++			Assert(false);
    ++			oauth_parse_set_error(ctx,
    ++								  "internal error: found unexpected array end while parsing field '%s'",
    ++								  ctx->active->name);
    ++			return JSON_SEM_ACTION_FAILED;
    ++		}
     +
     +		ctx->active = NULL;
     +	}
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +		if (field->type != JSON_TOKEN_ARRAY_START)
     +		{
    -+			Assert(ctx->nested == 1);
    -+			Assert(!*field->target.scalar);
    ++			/* Ensure that we're parsing the top-level keys... */
    ++			if (ctx->nested != 1)
    ++			{
    ++				Assert(false);
    ++				oauth_parse_set_error(ctx,
    ++									  "internal error: scalar target found at nesting level %d",
    ++									  ctx->nested);
    ++				return JSON_SEM_ACTION_FAILED;
    ++			}
    ++
    ++			/* ...and that a result has not already been set. */
    ++			if (*field->target.scalar)
    ++			{
    ++				Assert(false);
    ++				oauth_parse_set_error(ctx,
    ++									  "internal error: scalar field '%s' would be assigned twice",
    ++									  ctx->active->name);
    ++				return JSON_SEM_ACTION_FAILED;
    ++			}
     +
     +			*field->target.scalar = strdup(token);
     +			if (!*field->target.scalar)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		{
     +			struct curl_slist *temp;
     +
    -+			Assert(ctx->nested == 2);
    ++			/* The target array should be inside the top-level object. */
    ++			if (ctx->nested != 2)
    ++			{
    ++				Assert(false);
    ++				oauth_parse_set_error(ctx,
    ++									  "internal error: array member found at nesting level %d",
    ++									  ctx->nested);
    ++				return JSON_SEM_ACTION_FAILED;
    ++			}
     +
     +			/* Note that curl_slist_append() makes a copy of the token. */
     +			temp = curl_slist_append(*field->target.array, token);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * Parses a valid JSON number into a double. The input must have come from
    ++ * pg_parse_json(), so that we know the lexer has validated it; there's no
    ++ * in-band signal for invalid formats.
    ++ */
    ++static double
    ++parse_json_number(const char *s)
    ++{
    ++	double		parsed;
    ++	int			cnt;
    ++
    ++	/*
    ++	 * The JSON lexer has already validated the number, which is stricter than
    ++	 * the %f format, so we should be good to use sscanf().
    ++	 */
    ++	cnt = sscanf(s, "%lf", &parsed);
    ++
    ++	if (cnt != 1)
    ++	{
    ++		/*
    ++		 * Either the lexer screwed up or our assumption above isn't true, and
    ++		 * either way a developer needs to take a look.
    ++		 */
    ++		Assert(cnt == 1);
    ++		return 0;
    ++	}
    ++
    ++	return parsed;
    ++}
    ++
    ++/*
     + * Parses the "interval" JSON number, corresponding to the number of seconds to
     + * wait between token endpoint requests.
     + *
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * the result at a minimum of one. (Zero-second intervals would result in an
     + * expensive network polling loop.) Tests may remove the lower bound with
     + * PGOAUTHDEBUG, for improved performance.
    -+ *
    -+ * TODO: maybe clamp the upper bound too, based on the libpq timeout and/or the
    -+ * code expiration time?
     + */
     +static int
     +parse_interval(struct async_ctx *actx, const char *interval_str)
     +{
     +	double		parsed;
    -+	int			cnt;
    -+
    -+	/*
    -+	 * The JSON lexer has already validated the number, which is stricter than
    -+	 * the %f format, so we should be good to use sscanf().
    -+	 */
    -+	cnt = sscanf(interval_str, "%lf", &parsed);
    -+
    -+	if (cnt != 1)
    -+	{
    -+		/*
    -+		 * Either the lexer screwed up or our assumption above isn't true, and
    -+		 * either way a developer needs to take a look.
    -+		 */
    -+		Assert(cnt == 1);
    -+		return 1;				/* don't fall through in release builds */
    -+	}
     +
    ++	parsed = parse_json_number(interval_str);
     +	parsed = ceil(parsed);
     +
     +	if (parsed < 1)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +}
     +
     +/*
    ++ * Parses the "expires_in" JSON number, corresponding to the number of seconds
    ++ * remaining in the lifetime of the device code request.
    ++ *
    ++ * Similar to parse_interval, but we have even fewer requirements for reasonable
    ++ * values since we don't use the expiration time directly (it's passed to the
    ++ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
    ++ * something with it). We simply round and clamp to int range.
    ++ */
    ++static int
    ++parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
    ++{
    ++	double		parsed;
    ++
    ++	parsed = parse_json_number(expires_in_str);
    ++	parsed = round(parsed);
    ++
    ++	if (INT_MAX <= parsed)
    ++		return INT_MAX;
    ++	else if (parsed <= INT_MIN)
    ++		return INT_MIN;
    ++
    ++	return parsed;
    ++}
    ++
    ++/*
     + * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
     + */
     +static bool
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
     +		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
     +		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
    ++		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
     +
     +		/*
     +		 * Some services (Google, Azure) spell verification_uri differently.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		 */
     +		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
     +
    -+		/*
    -+		 * The following fields are technically REQUIRED, but we don't use
    -+		 * them anywhere yet:
    -+		 *
    -+		 * - expires_in
    -+		 */
    -+
    ++		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
     +		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
     +
     +		{0},
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		authz->interval = 5;
     +	}
     +
    ++	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
    ++	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
    ++
     +	return true;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
     +		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
     +
    -+		/*
    -+		 * The following fields are technically REQUIRED, but we don't use
    -+		 * them anywhere yet:
    ++		/*---
    ++		 * We currently have no use for the following OPTIONAL fields:
     +		 *
    -+		 * - scope (only required if different than requested -- TODO check)
    ++		 * - expires_in: This will be important for maintaining a token cache,
    ++		 *               but we do not yet implement one.
    ++		 *
    ++		 * - refresh_token: Ditto.
    ++		 *
    ++		 * - scope: This is only sent when the authorization server sees fit to
    ++		 *          change our scope request. It's not clear what we should do
    ++		 *          about this; either it's been done as a matter of policy, or
    ++		 *          the user has explicitly denied part of the authorization,
    ++		 *          and either way the server-side validator is in a better
    ++		 *          place to complain if the change isn't acceptable.
     +		 */
     +
     +		{0},
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     + * select() on instead of the Postgres socket during OAuth negotiation.
     + *
     + * This is just an epoll set or kqueue abstracting multiple other descriptors.
    -+ * A timerfd is always part of the set when using epoll; it's just disabled
    -+ * when we're not using it.
    ++ * For epoll, the timerfd is always part of the set; it's just disabled when
    ++ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
    ++ * instance which is only added to the set when needed.
     + */
     +static bool
     +setup_multiplexer(struct async_ctx *actx)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		return false;
     +	}
     +
    ++	/*
    ++	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
    ++	 * This makes it difficult to implement timer_expired(), though, so now we
    ++	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
    ++	 * actx->mux while the timer is active.
    ++	 */
    ++	actx->timerfd = kqueue();
    ++	if (actx->timerfd < 0)
    ++	{
    ++		actx_error(actx, "failed to create timer kqueue: %m");
    ++		return false;
    ++	}
    ++
     +	return true;
     +#endif
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +
     +/*
     + * Enables or disables the timer in the multiplexer set. The timeout value is
    -+ * in milliseconds (negative values disable the timer). Rather than continually
    -+ * adding and removing the timer, we keep it in the set at all times and just
    -+ * disarm it when it's not needed.
    ++ * in milliseconds (negative values disable the timer).
    ++ *
    ++ * For epoll, rather than continually adding and removing the timer, we keep it
    ++ * in the set at all times and just disarm it when it's not needed. For kqueue,
    ++ * the timer is removed completely when disabled to prevent stale timeouts from
    ++ * remaining in the queue.
     + */
     +static bool
     +set_timer(struct async_ctx *actx, long timeout)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		actx_error(actx, "setting timerfd to %ld: %m", timeout);
     +		return false;
     +	}
    ++
    ++	return true;
     +#endif
     +#ifdef HAVE_SYS_EVENT_H
     +	struct kevent ev;
     +
    -+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : EV_ADD,
    ++	/* Enable/disable the timer itself. */
    ++	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
     +		   0, timeout, 0);
    -+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
    ++	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
     +	{
     +		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
     +		return false;
     +	}
    -+#endif
    ++
    ++	/*
    ++	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
    ++	 * allowed the timer to remain registered here after being disabled, the
    ++	 * mux queue would retain any previous stale timeout notifications and
    ++	 * remain readable.)
    ++	 */
    ++	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
    ++		   0, 0, 0);
    ++	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
    ++	{
    ++		actx_error(actx, "could not update timer on kqueue: %m");
    ++		return false;
    ++	}
     +
     +	return true;
    ++#endif
    ++
    ++	actx_error(actx, "libpq does not support timers on this platform");
    ++	return false;
    ++}
    ++
    ++/*
    ++ * Returns 1 if the timeout in the multiplexer set has expired since the last
    ++ * call to set_timer(), 0 if the timer is still running, or -1 (with an
    ++ * actx_error() report) if the timer cannot be queried.
    ++ */
    ++static int
    ++timer_expired(struct async_ctx *actx)
    ++{
     ++#ifdef HAVE_SYS_EPOLL_H
    ++	struct itimerspec spec = {0};
    ++
    ++	if (timerfd_gettime(actx->timerfd, &spec) < 0)
    ++	{
    ++		actx_error(actx, "getting timerfd value: %m");
    ++		return -1;
    ++	}
    ++
    ++	/*
    ++	 * This implementation assumes we're using single-shot timers. If you
    ++	 * change to using intervals, you'll need to reimplement this function
    ++	 * too, possibly with the read() or select() interfaces for timerfd.
    ++	 */
    ++	Assert(spec.it_interval.tv_sec == 0
    ++		   && spec.it_interval.tv_nsec == 0);
    ++
    ++	/* If the remaining time to expiration is zero, we're done. */
    ++	return (spec.it_value.tv_sec == 0
    ++			&& spec.it_value.tv_nsec == 0);
    ++#endif
    ++#ifdef HAVE_SYS_EVENT_H
    ++	int			res;
    ++
    ++	/* Is the timer queue ready? */
    ++	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
    ++	if (res < 0)
    ++	{
    ++		actx_error(actx, "checking kqueue for timeout: %m");
    ++		return -1;
    ++	}
    ++
    ++	return (res > 0);
    ++#endif
    ++
    ++	actx_error(actx, "libpq does not support timers on this platform");
    ++	return -1;
     +}
     +
     +/*
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	struct async_ctx *actx = ctx;
     +
     +	/*
    -+	 * TODO: maybe just signal drive_request() to immediately call back in the
    -+	 * (timeout == 0) case?
    ++	 * There might be an optimization opportunity here: if timeout == 0, we
    ++	 * could signal drive_request to immediately call
    ++	 * curl_multi_socket_action, rather than returning all the way up the
    ++	 * stack only to come right back. But it's not clear that the additional
    ++	 * code complexity is worth it.
     +	 */
     +	if (!set_timer(actx, timeout))
     +		return -1;				/* actx_error already called */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
     +			   void *clientp)
     +{
    -+	const char *const end = data + size;
     +	const char *prefix;
    ++	bool		printed_prefix = false;
    ++	PQExpBufferData buf;
     +
     +	/* Prefixes are modeled off of the default libcurl debug output. */
     +	switch (type)
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +			return 0;
     +	}
     +
    ++	initPQExpBuffer(&buf);
    ++
     +	/*
     +	 * Split the output into lines for readability; sometimes multiple headers
    -+	 * are included in a single call.
    ++	 * are included in a single call. We also don't allow unprintable ASCII
    ++	 * through without a basic <XX> escape.
     +	 */
    -+	while (data < end)
    ++	for (int i = 0; i < size; i++)
     +	{
    -+		size_t		len = end - data;
    -+		char	   *eol = memchr(data, '\n', len);
    ++		char		c = data[i];
     +
    -+		if (eol)
    -+			len = eol - data + 1;
    ++		if (!printed_prefix)
    ++		{
    ++			appendPQExpBuffer(&buf, "%s ", prefix);
    ++			printed_prefix = true;
    ++		}
     +
    -+		/* TODO: handle unprintables */
    -+		fprintf(stderr, "%s %.*s%s", prefix, (int) len, data,
    -+				eol ? "" : "\n");
    ++		if (c >= 0x20 && c <= 0x7E)
    ++			appendPQExpBufferChar(&buf, c);
    ++		else if ((type == CURLINFO_HEADER_IN
    ++				  || type == CURLINFO_HEADER_OUT
    ++				  || type == CURLINFO_TEXT)
    ++				 && (c == '\r' || c == '\n'))
    ++		{
    ++			/*
     ++			 * Don't bother emitting <0D><0A> for headers and text; it's
     ++			 * just unhelpful noise.
    ++			 */
    ++		}
    ++		else
    ++			appendPQExpBuffer(&buf, "<%02X>", c);
     +
    -+		data += len;
    ++		if (c == '\n')
    ++		{
    ++			appendPQExpBufferChar(&buf, c);
    ++			printed_prefix = false;
    ++		}
     +	}
     +
    ++	if (printed_prefix)
    ++		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
    ++
    ++	fprintf(stderr, "%s", buf.data);
    ++	termPQExpBuffer(&buf);
     +	return 0;
     +}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +static bool
     +setup_curl_handles(struct async_ctx *actx)
     +{
    -+	curl_version_info_data *curl_info;
    -+
     +	/*
     +	 * Create our multi handle. This encapsulates the entire conversation with
     +	 * libcurl for this connection.
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	}
     +
     +	/*
    -+	 * Extract information about the libcurl we are linked against.
    -+	 */
    -+	curl_info = curl_version_info(CURLVERSION_NOW);
    -+
    -+	/*
     +	 * The multi handle tells us what to wait on using two callbacks. These
     +	 * will manipulate actx->mux as needed.
     +	 */
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
     +	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
     +	 * see pg_fe_run_oauth_flow().
    ++	 *
    ++	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
    ++	 * threaded), setting this option prevents DNS lookups from timing out
    ++	 * correctly. We warn about this situation at configure time.
     +	 */
     +	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
    -+	if (!curl_info->ares_num)
    -+	{
    -+		/* No alternative resolver, TODO: warn about timeouts */
    -+	}
     +
     +	if (actx->debugging)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		CHECK_SETOPT(actx, popt, protos, return false);
     +	}
     +
    -+	/* TODO: would anyone use this in "real" situations, or just testing? */
    ++	/*
    ++	 * If we're in debug mode, allow the developer to change the trusted CA
    ++	 * list. For now, this is not something we expose outside of the UNSAFE
    ++	 * mode, because it's not clear that it's useful in production: both libpq
    ++	 * and the user's browser must trust the same authorization servers for
    ++	 * the flow to work at all, so any changes to the roots are likely to be
    ++	 * done system-wide.
    ++	 */
     +	if (actx->debugging)
     +	{
     +		const char *env;
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	 *    of the authorization server where the authorization request was
     +	 *    sent to. This comparison MUST use simple string comparison as defined
     +	 *    in Section 6.2.1 of [RFC3986].
    -+	 *
    -+	 * TODO: Encoding support?
     +	 */
     +	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
     +	{
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +	PGpromptOAuthDevice prompt = {
     +		.verification_uri = actx->authz.verification_uri,
     +		.user_code = actx->authz.user_code,
    -+		/* TODO: optional fields */
    ++		.verification_uri_complete = actx->authz.verification_uri_complete,
    ++		.expires_in = actx->authz.expires_in,
     +	};
     +
     +	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +		}
     +
     +		actx->mux = PGINVALID_SOCKET;
    -+#ifdef HAVE_SYS_EPOLL_H
     +		actx->timerfd = -1;
    -+#endif
     +
     +		/* Should we enable unsafe features? */
     +		actx->debugging = oauth_unsafe_debugging_enabled();
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +						/* not done yet */
     +						return status;
     +					}
    ++
    ++					break;
     +				}
     +
     +			case OAUTH_STEP_WAIT_INTERVAL:
    -+				/* TODO check that the timer has expired */
    ++
    ++				/*
    ++				 * The client application is supposed to wait until our timer
    ++				 * expires before calling PQconnectPoll() again, but that
    ++				 * might not happen. To avoid sending a token request early,
    ++				 * check the timer before continuing.
    ++				 */
     ++				{
     ++					int			expired = timer_expired(actx);
     ++
     ++					if (expired < 0)
     ++						goto error_return;	/* actx_error already called */
     ++
     ++					if (expired == 0)
     ++					{
     ++						conn->altsock = actx->timerfd;
     ++						return PGRES_POLLING_READING;
     ++					}
     ++				}
    ++
    ++				/* Disable the expired timer. */
    ++				if (!set_timer(actx, -1))
    ++					goto error_return;
    ++
     +				break;
     +		}
     +
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c (new)
     +				if (!set_timer(actx, actx->authz.interval * 1000))
     +					goto error_return;
     +
    -+#ifdef HAVE_SYS_EPOLL_H
    -+
     +				/*
     +				 * No Curl requests are running, so we can simplify by having
     +				 * the client wait directly on the timerfd rather than the
    -+				 * multiplexer. (This isn't possible for kqueue.)
    ++				 * multiplexer.
     +				 */
     +				conn->altsock = actx->timerfd;
    -+#endif
     +
     +				actx->step = OAUTH_STEP_WAIT_INTERVAL;
     +				actx->running = 1;
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +{
     +	struct json_ctx *ctx = state;
     +
    ++	/* Only top-level keys are considered. */
     +	if (ctx->nested == 1)
     +	{
     +		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
    @@ src/interfaces/libpq/fe-auth-oauth.c (new)
     +
     +	if (ctx->target_field)
     +	{
    -+		Assert(ctx->nested == 1);
    ++		if (ctx->nested != 1)
    ++		{
    ++			/*
    ++			 * ctx->target_field should not have been set for nested keys.
     ++			 * Assert, and in production builds bail out rather than continue.
    ++			 */
    ++			Assert(false);
    ++			oauth_json_set_error(ctx,	/* don't bother translating */
    ++								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
    ++								 ctx->nested);
    ++			return JSON_SEM_ACTION_FAILED;
    ++		}
     +
     +		/*
     +		 * We don't allow duplicate field names; error out if the target has
    @@ src/interfaces/libpq/libpq-fe.h: extern int	PQenv2encoding(void);
     +{
     +	const char *verification_uri;	/* verification URI to visit */
     +	const char *user_code;		/* user code to enter */
    ++	const char *verification_uri_complete;	/* optional combination of URI and
    ++											 * code, or NULL */
    ++	int			expires_in;		/* seconds until user code expires */
     +} PGpromptOAuthDevice;
     +
     +/* for PGoauthBearerRequest.async() */
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
     +	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
     +		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
    -+	printf(" --no-hook				don't install OAuth hooks (connection will fail)\n");
    ++	printf(" --no-hook				don't install OAuth hooks\n");
     +	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
     +	printf(" --token TOKEN			use the provided TOKEN value\n");
    ++	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
     +}
     +
     +/* --options */
     +static bool no_hook = false;
     +static bool hang_forever = false;
    ++static bool stress_async = false;
     +static const char *expected_uri = NULL;
     +static const char *expected_scope = NULL;
     +static const char *misbehave_mode = NULL;
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +		{"token", required_argument, NULL, 1003},
     +		{"hang-forever", no_argument, NULL, 1004},
     +		{"misbehave", required_argument, NULL, 1005},
    ++		{"stress-async", no_argument, NULL, 1006},
     +		{0}
     +	};
     +
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +				misbehave_mode = optarg;
     +				break;
     +
    ++			case 1006:			/* --stress-async */
    ++				stress_async = true;
    ++				break;
    ++
     +			default:
     +				usage(argv);
     +				return 1;
    @@ src/test/modules/oauth_validator/oauth_hook_client.c (new)
     +	PQsetAuthDataHook(handle_auth_data);
     +
     +	/* Connect. (All the actual work is in the hook.) */
    -+	conn = PQconnectdb(conninfo);
    ++	if (stress_async)
    ++	{
    ++		/*
    ++		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
    ++		 * without actually waiting on socket events. This stresses code paths
    ++		 * that rely on asynchronous work to be done before continuing with
    ++		 * the next step in the flow.
    ++		 */
    ++		PostgresPollingStatusType res;
    ++
    ++		conn = PQconnectStart(conninfo);
    ++
    ++		do
    ++		{
    ++			res = PQconnectPoll(conn);
    ++		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
    ++	}
    ++	else
    ++	{
    ++		/* Perform a standard synchronous connection. */
    ++		conn = PQconnectdb(conninfo);
    ++	}
    ++
     +	if (PQstatus(conn) != CONNECTION_OK)
     +	{
    -+		fprintf(stderr, "Connection to database failed: %s\n",
    ++		fprintf(stderr, "connection to database failed: %s\n",
     +				PQerrorMessage(conn));
     +		PQfinish(conn);
     +		return 1;
    @@ src/test/modules/oauth_validator/t/001_server.pl (new)
     +	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
     +);
     +
    ++# Stress test: make sure our builtin flow operates correctly even if the client
    ++# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
    ++# from PQconnectPoll().
    ++$base_connstr =
    ++  "$common_connstr port=" . $node->port . " host=" . $node->host;
    ++my @cmd = (
    ++	"oauth_hook_client", "--no-hook", "--stress-async",
    ++	connstr(stage => 'all', retries => 1, interval => 1));
    ++
    ++note "running '" . join("' '", @cmd) . "'";
    ++my ($stdout, $stderr) = run_command(\@cmd);
    ++
    ++like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
    ++unlike($stderr, qr/connection to database failed/, "stress-async: stderr matches");
    ++
     +#
     +# This section of tests reconfigures the validator module itself, rather than
     +# the OAuth server.
    @@ src/test/modules/oauth_validator/t/oauth_server.py (new)
     +            "device_code": "postgres",
     +            "user_code": "postgresuser",
     +            self._uri_spelling: uri,
    -+            "expires-in": 5,
    ++            "expires_in": 5,
     +            **self._response_padding,
     +        }
     +
    @@ src/test/modules/oauth_validator/validator.c (new)
     +{
     +	/* Check to make sure our private state still exists. */
     +	if (state->private_data != PRIVATE_COOKIE)
    -+		elog(ERROR, "oauth_validator: private state cookie changed to %p in shutdown",
    ++		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
     +			 state->private_data);
     +}
     +
 2:  483129c1ca9 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 3:  75d98784ded <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 4:  fd60ceb4c84 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 5:  595362ef2c1 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 6:  f73c042adc9 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 7:  298839b69f0 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 8:  1cf48a8f835 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 9:  27135876559 <  -:  ----------- fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  2:  9171989a75e fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  3:  1bd03e1de10 fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  4:  0929bfbc5fc fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  5:  be882ef6eae fixup! Add OAUTHBEARER SASL mechanism
 -:  ----------- >  6:  954341052b4 fixup! Add OAUTHBEARER SASL mechanism
10:  d8c1f298080 =  7:  d88a2938e7e XXX fix libcurl link error
11:  dbf305d0489 !  8:  44e5cbc8ad1 DO NOT MERGE: Add pytest suite for OAuth
    @@ src/test/python/client/test_oauth.py (new)
     +            id="invalid request without description",
     +        ),
     +        pytest.param(
    -+            (400, {"error": "invalid_request", "padding": "x" * 1024 * 1024}),
    ++            (400, {"error": "invalid_request", "padding": "x" * 256 * 1024}),
     +            r"failed to obtain device authorization: response is too large",
     +            id="gigantic authz response",
     +        ),
    @@ src/test/python/client/test_oauth.py (new)
     +            id="access denied without description",
     +        ),
     +        pytest.param(
    -+            (400, {"error": "access_denied", "padding": "x" * 1024 * 1024}),
    ++            (400, {"error": "access_denied", "padding": "x" * 256 * 1024}),
     +            r"failed to obtain access token: response is too large",
     +            id="gigantic token response",
     +        ),
    @@ src/test/python/client/test_oauth.py (new)
     +                        "urn:ietf:params:oauth:grant-type:device_code"
     +                    ],
     +                    "device_authorization_endpoint": "https://256.256.256.256/dev",
    -+                    "filler": "x" * 1024 * 1024,
    ++                    "filler": "x" * 256 * 1024,
     +                },
     +            ),
     +            r"failed to fetch OpenID discovery document: response is too large",
    @@ src/test/python/server/oauthtest.c (new)
     +
     +static void test_startup(ValidatorModuleState *state);
     +static void test_shutdown(ValidatorModuleState *state);
    -+static ValidatorModuleResult *test_validate(ValidatorModuleState *state,
    -+											const char *token,
    -+											const char *role);
    ++static bool test_validate(const ValidatorModuleState *state,
    ++						  const char *token,
    ++						  const char *role,
    ++						  ValidatorModuleResult *result);
     +
     +static const OAuthValidatorCallbacks callbacks = {
    ++	PG_OAUTH_VALIDATOR_MAGIC,
    ++
     +	.startup_cb = test_startup,
     +	.shutdown_cb = test_shutdown,
     +	.validate_cb = test_validate,
    @@ src/test/python/server/oauthtest.c (new)
     +{
     +}
     +
    -+static ValidatorModuleResult *
    -+test_validate(ValidatorModuleState *state, const char *token, const char *role)
    ++static bool
    ++test_validate(const ValidatorModuleState *state,
    ++			  const char *token, const char *role,
    ++			  ValidatorModuleResult *res)
     +{
    -+	ValidatorModuleResult *res;
    -+
    -+	res = palloc0(sizeof(ValidatorModuleResult));	/* TODO: palloc context? */
    -+
     +	if (reflect_role)
     +	{
     +		res->authorized = true;
    -+		res->authn_id = pstrdup(role);	/* TODO: constify? */
    ++		res->authn_id = pstrdup(role);
     +	}
     +	else
     +	{
     +		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
     +			res->authorized = true;
     +		if (set_authn_id)
    -+			res->authn_id = authn_id;
    ++			res->authn_id = pstrdup(authn_id);
     +	}
     +
    -+	return res;
    ++	return true;
     +}
     
      ## src/test/python/server/test_oauth.py (new) ##
Attachment: v48-0001-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)
From cf86e3bfbbc790147073a75fda5ff19aaed03b88 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v48 1/8] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and the OAuth 2.0 Device Authorization
Grant (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   65 +
 configure                                     |  332 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  406 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |  100 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  865 +++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2850 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1153 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   85 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  293 ++
 .../modules/oauth_validator/t/001_server.pl   |  566 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 59 files changed, 9000 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index cfe2117e02e..c192a077701 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -222,6 +222,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -315,8 +316,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -692,8 +695,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..ead427046f5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,68 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is threadsafe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb4367..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,176 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it's obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identity provider and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 38244409e3c..d53595f8951 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..96e433179b9 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index e04acf1c208..ddfc2a27c50 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
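Taken together, a client using the builtin flow might combine these parameters like so (the host, issuer URL, and client identifier below are placeholders for a real deployment's values):

```
psql 'host=db.example.com dbname=postgres
      oauth_issuer=https://issuer.example.com
      oauth_client_id=my-client-id'
```

<literal>oauth_client_secret</literal> and <literal>oauth_scope</literal> would be added only if the provider requires them.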
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10129,291 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   When the <productname>PostgreSQL</productname> server
+   <link linkend="auth-oauth">requests an OAuth token</link> for a connection,
+   <application>libpq</application> can acquire one using its builtin Device
+   Authorization flow, or the application can modify or replace that flow
+   using the hooks described in this section.
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
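The chaining pattern above can be sketched as follows. The type and function definitions at the top are simplified stand-ins for libpq's actual declarations in <filename>libpq-fe.h</filename>, present only so the sketch is self-contained; the shape of the chaining logic is the point here.

```c
#include <stddef.h>

/* Simplified stand-ins for libpq's declarations (see libpq-fe.h). */
typedef struct PGconn PGconn;
typedef enum
{
    PQAUTHDATA_PROMPT_OAUTH_DEVICE,
    PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;
typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);

static int
default_hook(PGauthData type, PGconn *conn, void *data)
{
    (void) type; (void) conn; (void) data;
    return 0;                   /* zero: use libpq's builtin behavior */
}

static PQauthDataHook_type current_hook = default_hook;

static PQauthDataHook_type
PQgetAuthDataHook(void)
{
    return current_hook;
}

static void
PQsetAuthDataHook(PQauthDataHook_type hook)
{
    current_hook = hook ? hook : default_hook;
}

/* The application's hook: handle device prompts, delegate everything else. */
static PQauthDataHook_type prev_hook;

static int
my_hook(PGauthData type, PGconn *conn, void *data)
{
    if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
    {
        /* A real implementation would display the verification URI and
         * user code here, e.g. in a GUI dialog. */
        return 1;               /* positive: handled successfully */
    }

    /* Not ours: fall back to the previous hook in the chain. */
    return prev_hook(type, conn, data);
}

static void
install_my_hook(void)
{
    prev_hook = PQgetAuthDataHook();
    PQsetAuthDataHook(my_hook);
}
```

An application would call <function>install_my_hook</function> once at startup, before any connection attempts, so that later connections see the chained hook.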
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the user may still
+         be asked to confirm the code manually at the provider, and the URL lets users continue
+         even if they can't use the non-textual method. Review the RFC's
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">notes
+         on non-textual verification</ulink>.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A <quote>dangerous debugging mode</quote> may be enabled by setting the
+    environment variable <envar>PGOAUTHDEBUG</envar> to the value
+    <literal>UNSAFE</literal>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       prints HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10486,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using Curl inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of Curl that are built to support threadsafe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    A validator module has three main responsibilities:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
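Once a token's signature has been verified and its claims parsed, the issuer, audience, and validity-period checks described above reduce to comparisons along these lines. This is a simplified sketch: claim names and types vary by provider, and signature verification (typically via a JOSE library) must already have succeeded before any of these checks are meaningful.

```c
#include <stdbool.h>
#include <string.h>
#include <time.h>

/* A token's already-parsed claims (simplified for illustration). */
typedef struct
{
    const char *issuer;         /* "iss": where is this token from? */
    const char *audience;       /* "aud": who is this token for? */
    time_t      not_before;     /* "nbf": start of validity period */
    time_t      expires_at;     /* "exp": end of validity period */
} TokenClaims;

static bool
claims_acceptable(const TokenClaims *claims, const char *trusted_issuer,
                  const char *my_audience, time_t now)
{
    if (strcmp(claims->issuer, trusted_issuer) != 0)
        return false;           /* issued by the wrong party */
    if (strcmp(claims->audience, my_audience) != 0)
        return false;           /* intended for a different resource server */
    if (now < claims->not_before || now >= claims->expires_at)
        return false;           /* outside the validity period */
    return true;
}
```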
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <literal>delegate_ident_mapping=1</literal> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an OAuth
+   validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
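As a rough, self-contained sketch (compilable outside the server tree), a minimal module skeleton might look like the following. The typedefs are simplified copies of the declarations above, repeated only so the example stands alone; a real module would include the server's headers instead.

```c
/* Sketch of a minimal OAuth validator module skeleton. In a real module the
 * typedefs come from the server's headers; simplified versions are repeated
 * here only so the example compiles on its own. */
#include <stdbool.h>
#include <stddef.h>

typedef struct ValidatorModuleState { void *private_data; } ValidatorModuleState;
typedef struct ValidatorModuleResult { bool authorized; char *authn_id; } ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
                                                       const char *token,
                                                       const char *role);

typedef struct OAuthValidatorCallbacks
{
    ValidatorStartupCB  startup_cb;
    ValidatorShutdownCB shutdown_cb;
    ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

static ValidatorModuleResult *
validate_token(ValidatorModuleState *state, const char *token, const char *role)
{
    (void) state; (void) token; (void) role;
    return NULL;                /* placeholder: a real module validates here */
}

/* Only validate_cb is required; the optional callbacks stay NULL. */
static const OAuthValidatorCallbacks validator_callbacks = {
    .validate_cb = validate_token,
};

/* Called by the server after loading the library. The pointer refers to a
 * static const struct, satisfying the server-lifetime requirement. */
const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &validator_callbacks;
}
```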
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state-&gt;private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
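A self-contained sketch of such a startup callback follows. The struct definition mirrors the server API in simplified form, `malloc`-family allocation stands in for the server's allocator, and `my_validator_state` is a hypothetical example of module-private data:

```c
/* Sketch of a startup_cb that sets up per-module state for later callbacks. */
#include <stdlib.h>

typedef struct ValidatorModuleState { void *private_data; } ValidatorModuleState;

/* Hypothetical module-private data: statistics, cached keys, handles, etc. */
struct my_validator_state
{
    int     tokens_checked;
};

static void
validator_startup(ValidatorModuleState *state)
{
    /* Allocated once here; later callbacks find it in state->private_data. */
    state->private_data = calloc(1, sizeof(struct my_validator_state));
}
```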
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <structname>ValidatorModuleResult</structname> struct,
+    which is defined as follows:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned allocation; the validator
+    module must not access the memory after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its built-in flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an error
+      <literal>status</literal> alongside a well-known URI and scopes that the
+      client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message consisting of a single
+      <literal>0x01</literal> byte (a kvsep) to finish its half of the
+      discovery exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the token provider's
+      instructions. If the client is authorized to connect, the server sends
+      an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
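The layout of the client initial response described above can be sketched as follows. This is a self-contained illustration of the wire format (GS2 header, kvsep, `auth` kvpair, terminating kvsep), not libpq's actual implementation; note that the kvsep byte is written as a separate `"\x01"` string literal to keep the hex escape from swallowing the following letters:

```c
/* Sketch of the OAUTHBEARER client initial response: a "n,," GS2 header,
 * a 0x01 key/value separator, the auth kvpair carrying the bearer token,
 * and a final 0x01 terminating the (otherwise empty) kvpair list. */
#include <stdio.h>
#include <string.h>

/* Writes the initial response into buf and returns its length, or -1 if
 * the buffer is too small. */
static int
build_initial_response(char *buf, size_t buflen, const char *token)
{
    int         n;

    n = snprintf(buf, buflen, "n,,\x01" "auth=Bearer %s\x01\x01", token);
    return (n < 0 || (size_t) n >= buflen) ? -1 : n;
}
```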
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 1ceadb9a830..96e5f0f6434 100644
--- a/meson.build
+++ b/meson.build
@@ -854,6 +854,101 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports threadsafe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is threadsafe')
+      elif r.returncode() == 1
+        message('curl_global_init is not threadsafe')
+      else
+        message('curl_global_init failed; assuming not threadsafe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3044,6 +3139,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3720,6 +3819,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..d910cbcb161
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,865 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(void *arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message.
+	 *
+	 * TODO: see if there's a better place to fail, earlier than this.
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
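For a hypothetical HBA line with issuer=https://issuer.example.com and scope="openid", the challenge built above would serialize as:

```json
{ "status": "invalid_token", "openid-configuration": "https://issuer.example.com/.well-known/openid-configuration", "scope": "openid" }
```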
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)) != 0)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a well-formed token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here indicates a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Free the validation result from the validator module now that we're
+	 * done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	/* Shut down the library before cleaning up its state. */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked during memory context reset.
+ */
+static void
+shutdown_validator_library(void *arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
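Putting the HBA options above together, a sketch of matching pg_hba.conf entries (the issuer, validator name, and address are placeholders, and the validator library must also be listed in oauth_validator_libraries; note that `map` and `delegate_ident_mapping` are mutually exclusive):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth  issuer="https://issuer.example.com" scope="openid" validator="my_validator"

# Pseudonymous authorization: trust the validator's decision and skip pg_ident
host    all       all   samenet   oauth  issuer="https://issuer.example.com" scope="openid" validator="my_validator" delegate_ident_mapping=1
```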
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum("delegate_ident_mapping=true");
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index ce7534d4d23..7747a09c2a9 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4832,6 +4833,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index c40b7a3121e..9184ea0f1d4 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..c04ee38d086 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..96c5096e4ca
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2850 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+	int			timerfd;		/* descriptor for signaling async timeouts */
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * TODO: in general, none of the error cases below should ever happen if
+	 * we have no bugs above. But if we do hit them, surfacing those errors
+	 * somehow might be the only way to have a chance to debug them. What's
+	 * the best way to do that? Assertions? Spraying messages on stderr?
+	 * Bubbling an error code to the top? Appending to the connection's error
+	 * message only helps if the bug caused a connection failure; otherwise
+	 * it'll be buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 */
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field '%s' before field '%s' was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field '%s' still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		/* The arrays we care about must not have arrays as values. */
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
+		 */
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field '%s'",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field '%s' would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length limited comparison and not compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
+ */
+static double
+parse_json_number(const char *s)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(s, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(cnt == 1);
+		return 0;
+	}
+
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = round(parsed);
+
+	if (INT_MAX <= parsed)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
+		 *
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	/* Enable/disable the timer itself. */
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
+		   0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#if HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "getting timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "checking kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	initPQExpBuffer(&buf);
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
+	 */
+	for (int i = 0; i < size; i++)
+	{
+		char		c = data[i];
+
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "%s ", prefix);
+			printed_prefix = true;
+		}
+
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's
+			 * unhelpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", c);
+
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
+	}
+
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * make sure the two options stay in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our clients have no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
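As a consumer-side sketch (not part of the patch), an application that wants to render the device prompt itself, e.g. as a QR code, could install its own hook. This assumes the hook-registration entry point proposed alongside this patch (a PQsetAuthDataHook() function) plus the PGpromptOAuthDevice fields used above; show_qr_code() is a hypothetical application function:

```c
/* Illustrative sketch only; assumes the hook API proposed by this patch. */
static int
my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
{
	if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
	{
		PGpromptOAuthDevice *prompt = data;

		/* Prefer the complete URI, which embeds the user code. */
		show_qr_code(prompt->verification_uri_complete
					 ? prompt->verification_uri_complete
					 : prompt->verification_uri);
		return 1;			/* handled; libpq skips its stderr message */
	}

	return 0;				/* anything else: fall back to default handling */
}

/* ... during application startup, before connecting: */
PQsetAuthDataHook(my_auth_data_hook);
```

A zero return falls through to the default behavior, matching the `!res` branch in prompt_user().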
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * threadsafe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
+								"\tCurl initialization was reported threadsafe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+		actx->timerfd = -1;
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				if (!timer_expired(actx))
+				{
+					conn->altsock = actx->timerfd;
+					return PGRES_POLLING_READING;
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer.
+				 */
+				conn->altsock = actx->timerfd;
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
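The trailing-newline cleanup in the error path above can be isolated into a small standalone sketch. trim_curl_newline() is an illustrative name; the real code operates directly on conn->errorMessage, whose last character is always the ')' just appended:

```c
#include <stddef.h>

/*
 * Sketch of the cleanup at the end of pg_fe_run_oauth_flow_impl(): libcurl
 * sometimes leaves a trailing newline in CURLOPT_ERRORBUFFER, which would
 * otherwise render as "(message\n)". Rewrite the tail so it ends "...)".
 */
static void
trim_curl_newline(char *msg, size_t *len)
{
	if (*len >= 2 && msg[*len - 2] == '\n' && msg[*len - 1] == ')')
	{
		msg[*len - 2] = ')';
		msg[*len - 1] = '\0';
		(*len)--;
	}
}
```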
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..8beae9604c7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1153 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
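The resulting wire format (RFC 7628, Sec. 3.1) is easier to see in isolation. The following standalone sketch mirrors resp_format above; build_initial_response() is an illustrative name, and an empty token corresponds to the discovery request:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define KVSEP "\x01"

/* Sketch of the message built by client_initial_response(). */
static char *
build_initial_response(const char *token)
{
	/* "n,," is the GS2 header: no channel binding, no authzid. */
	const char *scheme = (token[0] != '\0') ? "Bearer " : "";
	size_t		len = strlen("n,," KVSEP "auth=" KVSEP KVSEP)
		+ strlen(scheme) + strlen(token) + 1;
	char	   *buf = malloc(len);

	if (!buf)
		return NULL;
	snprintf(buf, len, "n,," KVSEP "auth=%s%s" KVSEP KVSEP, scheme, token);
	return buf;
}
```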
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	/* Only top-level keys are considered. */
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert here, and in production builds fail gracefully rather
+			 * than continuing.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,	/* don't bother translating */
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
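The final memmove step above amounts to deleting the `[wk_start, wk_end)` span from the URI. A minimal standalone sketch (strip_span is an illustrative name; the offsets correspond to the wk_start/wk_end positions computed by the preceding checks):

```c
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of the last step of issuer_from_well_known_uri(): the issuer is the
 * .well-known URI with the [start, end) span removed.
 */
static char *
strip_span(const char *uri, size_t start, size_t end)
{
	size_t		n = strlen(uri);
	char	   *out = malloc(n + 1);

	if (!out)
		return NULL;
	memcpy(out, uri, n + 1);
	/* Shift the tail (including the NUL terminator) left over the span. */
	memmove(out + start, out + end, (n - end) + 1);
	return out;
}
```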
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
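For reference, a server error result in the format parsed above (RFC 7628, Sec. 3.2.2) might look like the following; the values are illustrative:

```json
{
  "status": "invalid_token",
  "scope": "openid",
  "openid-configuration": "https://issuer.example.org/.well-known/openid-configuration"
}
```

With this status, the client proceeds to fetch the discovery document, run a flow to obtain a token, and retry; any other status fails the connection immediately.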
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,86 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index c0d3cf0e14b..bdfd5f1f8de 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index 4f544a042d4..0c2ccc75a63 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks; its
+ *	  validation callback always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..fc003030ff8
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,293 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static bool stress_async = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
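For context on what the hook's token actually becomes on the wire: libpq sends it in the OAUTHBEARER initial client response defined by RFC 7628. A simplified sketch of that message framing (an assumption-laden illustration, not libpq's code; it omits the optional host/port fields):

```python
# OAUTHBEARER initial client response per RFC 7628: a GS2 header,
# then key/value pairs separated by 0x01 bytes, terminated by a
# double separator. The bearer token from the hook goes in "auth=".
KVSEP = "\x01"

def oauthbearer_initial_response(token, authzid=None):
    # GS2 header: "n,," for no channel binding and no authzid.
    gs2 = f"n,a={authzid}," if authzid else "n,,"
    return f"{gs2}{KVSEP}auth=Bearer {token}{KVSEP}{KVSEP}"

msg = oauthbearer_initial_response("my-token")
```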
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..f0b918390fd
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,566 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike($stderr, qr/connection to database failed/, "stress-async: stderr matches");
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
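To make the connstr() trick above easier to follow: the test parameters are JSON-encoded, then base64-encoded into oauth_client_id, and the mock server reverses the process. A round-trip sketch in Python (decode_magic_client_id is a hypothetical name, not the mock server's actual function):

```python
# Round trip of the magic oauth_client_id encoding used by connstr()
# in t/001_server.pl: params -> JSON -> base64, and back.
import base64
import json

def encode_magic_client_id(**params):
    """Mirror of the Perl connstr() helper's encoding."""
    return base64.b64encode(json.dumps(params).encode()).decode()

def decode_magic_client_id(client_id):
    """What the mock authorization server would do to recover them."""
    return json.loads(base64.b64decode(client_id))

client_id = encode_magic_client_id(stage="token", retries=2)
params = decode_magic_client_id(client_id)
```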
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..95cccf90dd8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is a glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
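(Aside, not part of the patch: the stdout handshake used by run() above — the daemon prints its port and then closes stdout, so the parent can slurp to EOF rather than guessing a byte count — can be illustrated with a self-contained Python sketch. The inline child script here is a stand-in for t/oauth_server.py:

```python
import subprocess
import sys

# Stand-in for t/oauth_server.py: advertise a port, then close stdout so the
# parent can simply read to EOF instead of counting bytes.
child_src = """
import os, sys, time
print(43210)                 # pretend this is the ephemeral port
fd = sys.stdout.fileno()
sys.stdout.close()           # standard streams leave the fd open (closefd=False)...
os.close(fd)                 # ...so close it explicitly to signal EOF
time.sleep(0.2)              # keep "serving" after stdout is gone
"""

proc = subprocess.Popen([sys.executable, "-c", child_src],
                        stdout=subprocess.PIPE, text=True)
# Returns as soon as the child closes stdout, even though it is still running.
port = proc.stdout.read().strip()
assert port.isdigit(), f"server did not advertise a valid port: {port!r}"
print(f"daemon is listening on port {port}")
proc.wait()
```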
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..4faf3323d38
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
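(Aside, not part of the patch: the Basic-auth check in _check_authn() above follows RFC 6749 Section 2.3.1, which requires each credential to be form-urlencoded before the pair is joined and Base64-encoded. A standalone sketch of both directions, for illustration only:

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, secret: str) -> str:
    # RFC 6749 sec. 2.3.1: urlencode each credential first, so characters
    # like ':' and '+' survive the round trip through "user:pass".
    user = urllib.parse.quote_plus(client_id)
    pw = urllib.parse.quote_plus(secret)
    creds = base64.b64encode(f"{user}:{pw}".encode()).decode()
    return f"Basic {creds}"

hdr = basic_auth_header("some client", "s3cret:with+chars")
print(hdr)

# The server side (as in _check_authn/client_id above) reverses the steps.
method, creds = hdr.split()
decoded = base64.b64decode(creds).decode()
username, password = decoded.split(":", 1)
print(urllib.parse.unquote_plus(username))
```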
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..ef9bbb2866f
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 9a3bee93dec..f3e3592eb77 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1951,6 +1958,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3089,6 +3097,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3485,6 +3495,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1
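(Aside, not part of the patch: the retries/interval behavior exercised by the mock server's token() endpoint corresponds, on the client side, to the RFC 8628 Section 3.5 polling loop — keep retrying while the server answers "authorization_pending", waiting the advertised interval between attempts. A rough standalone sketch with a stubbed endpoint; the real libpq flow speaks HTTP:

```python
import itertools

def stub_token_endpoint():
    # Simulate a server that answers "authorization_pending" twice before
    # issuing a token, like the mock server's "retries" test parameter.
    responses = itertools.chain(
        [{"error": "authorization_pending"}] * 2,
        [{"access_token": "9243959234", "token_type": "bearer"}],
    )
    return lambda: next(responses)

def poll_for_token(endpoint, interval, max_attempts=10):
    for _ in range(max_attempts):
        resp = endpoint()
        if "access_token" in resp:
            return resp["access_token"]
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(f"token exchange failed: {resp}")
        # A real client must sleep `interval` seconds here (the mock server
        # asserts this); omitted so the sketch runs instantly.
    raise RuntimeError("device code expired before authorization")

print(poll_for_token(stub_token_endpoint(), interval=0))
```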

Attachment: v48-0002-fixup-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)
From 9171989a75e364b3977e42c33e37bc983c2d000c Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 7 Feb 2025 14:23:40 -0800
Subject: [PATCH v48 2/8] fixup! Add OAUTHBEARER SASL mechanism

---
 doc/src/sgml/libpq.sgml                   |  6 +++---
 src/backend/libpq/auth-oauth.c            |  5 ++---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 26 +++++++++++++++--------
 3 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ddfc2a27c50..3ee0a31e6b7 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10255,9 +10255,9 @@ typedef struct _PGpromptOAuthDevice
          example, by displaying a QR code). The URL and user code should still
          be displayed to the end user in this case, because the code will be
          manually confirmed by the provider, and the URL lets users continue
-         even if they can't use the non-textual method. Review the RFC's
-         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">notes
-         on non-textual verification</ulink>.
+         even if they can't use the non-textual method. For more information,
+         see section 3.3.1 in
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">RFC 8628</ulink>.
         </para>
        </listitem>
       </varlistentry>
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index d910cbcb161..aa16977c643 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -490,9 +490,8 @@ generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
 	/*
 	 * The admin needs to set an issuer and scope for OAuth to work. There's
 	 * not really a way to hide this from the user, either, because we can't
-	 * choose a "default" issuer, so be honest in the failure message.
-	 *
-	 * TODO: see if there's a better place to fail, earlier than this.
+	 * choose a "default" issuer, so be honest in the failure message. (In
+	 * practice such configurations are rejected during HBA parsing.)
 	 */
 	if (!ctx->issuer || !ctx->scope)
 		ereport(FATAL,
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 96c5096e4ca..2179bb89800 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -225,13 +225,14 @@ static void
 free_async_ctx(PGconn *conn, struct async_ctx *actx)
 {
 	/*
-	 * TODO: in general, none of the error cases below should ever happen if
-	 * we have no bugs above. But if we do hit them, surfacing those errors
-	 * somehow might be the only way to have a chance to debug them. What's
-	 * the best way to do that? Assertions? Spraying messages on stderr?
-	 * Bubbling an error code to the top? Appending to the connection's error
-	 * message only helps if the bug caused a connection failure; otherwise
-	 * it'll be buried...
+	 * In general, none of the error cases below should ever happen if we have
+	 * no bugs above. But if we do hit them, surfacing those errors somehow
+	 * might be the only way to have a chance to debug them.
+	 *
+	 * TODO: At some point it'd be nice to have a standard way to warn about
+	 * teardown failures. Appending to the connection's error message only
+	 * helps if the bug caused a connection failure; otherwise it'll be
+	 * buried...
 	 */
 
 	if (actx->curlm && actx->curl)
@@ -876,7 +877,7 @@ parse_json_number(const char *s)
 		 * Either the lexer screwed up or our assumption above isn't true, and
 		 * either way a developer needs to take a look.
 		 */
-		Assert(cnt == 1);
+		Assert(false);
 		return 0;
 	}
 
@@ -1121,6 +1122,7 @@ setup_multiplexer(struct async_ctx *actx)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
+		/*- translator: the term "kqueue" (kernel queue) should not be translated */
 		actx_error(actx, "failed to create kqueue: %m");
 		return false;
 	}
@@ -1486,7 +1488,7 @@ debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
 
 		if (!printed_prefix)
 		{
-			appendPQExpBuffer(&buf, "%s ", prefix);
+			appendPQExpBuffer(&buf, "[libcurl] %s ", prefix);
 			printed_prefix = true;
 		}
 
@@ -1570,6 +1572,12 @@ setup_curl_handles(struct async_ctx *actx)
 	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
 	 * threaded), setting this option prevents DNS lookups from timing out
 	 * correctly. We warn about this situation at configure time.
+	 *
+	 * TODO: Perhaps there's a clever way to warn the user about synchronous
+	 * DNS at runtime too? It's not immediately clear how to do that in a
+	 * helpful way: for many standard single-threaded use cases, the user
+	 * might not care at all, so spraying warnings to stderr would probably do
+	 * more harm than good.
 	 */
 	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
 
-- 
2.34.1

Attachment: v48-0003-fixup-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)
From 1bd03e1de1010d1f99f490776a63e53f75e34ec4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 7 Feb 2025 14:23:40 -0800
Subject: [PATCH v48 3/8] fixup! Add OAUTHBEARER SASL mechanism

---
 doc/src/sgml/libpq.sgml            | 40 +++++++++++++++++++++++++++++-
 doc/src/sgml/oauth-validators.sgml |  9 ++++++-
 2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3ee0a31e6b7..e655ee20890 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10133,8 +10133,46 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   TODO
+   libpq implements support for the OAuth v2 Device Authorization client flow,
+   documented in
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
+   which it will attempt to use by default if the server
+   <link linkend="auth-oauth">requests a bearer token</link> during
+   authentication. This flow can be utilized even if the system running the
+   client application does not have a usable web browser, for example when
+   running a client via SSH. Client applications may implement their own flows
+   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
   </para>
+  <para>
+   The builtin flow will, by default, print a URL to visit and a user code to
+   enter there:
+<programlisting>
+$ psql 'dbname=postgres oauth_issuer=https://example.com oauth_client_id=...'
+Visit https://example.com/device and enter the code: ABCD-EFGH
+</programlisting>
+   (This prompt may be
+   <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
+   You will then log into your OAuth provider, which will ask whether you want
+   to allow libpq and the server to perform actions on your behalf. It is always
+   a good idea to carefully review the URL and permissions displayed, to ensure
+   they match your expectations, before continuing. Do not give permissions to
+   untrusted third parties.
+  </para>
+  <para>
+   For an OAuth client flow to be usable, the connection string must at minimum
+   contain <xref linkend="libpq-connect-oauth-issuer"/> and
+   <xref linkend="libpq-connect-oauth-client-id"/>. (These settings are
+   determined by your organization's OAuth provider.) The builtin flow
+   additionally requires the OAuth authorization server to publish a device
+   authorization endpoint.
+  </para>
+
+  <note>
+   <para>
+    The builtin Device Authorization flow is not currently supported on Windows.
+    Custom client flows may still be implemented.
+   </para>
+  </note>
 
   <sect2 id="libpq-oauth-authdata-hooks">
    <title>Authdata Hooks</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index d0bca9196d9..c8bbac7b462 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -41,7 +41,9 @@
   <sect2 id="oauth-validator-design-responsibilities">
    <title>Validator Responsibilities</title>
    <para>
-    TODO
+    Although different modules may take very different approaches to token
+    validation, implementations generally need to perform three separate
+    actions:
    </para>
    <variablelist>
     <varlistentry>
@@ -121,6 +123,11 @@
        </footnote>
        if users are not prompted for additional scopes.
       </para>
+      <para>
+       Even if authorization fails, a module may still extract
+       authentication information from the token for use in auditing and
+       debugging.
+      </para>
      </listitem>
     </varlistentry>
     <varlistentry>
-- 
2.34.1
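As an aside, the Device Authorization polling behavior that the libpq documentation above refers to can be sketched in Python. This is purely an illustrative model of the RFC 8628 token-endpoint loop, not libpq's implementation: the HTTP transport and the clock are injected as callables so the `authorization_pending` and `slow_down` handling can be exercised offline, and all names and token values here are made up.

```python
# Model of the RFC 8628 polling loop a device-flow client performs after
# showing the user a verification URL and code. `request` stands in for an
# HTTP POST to the token endpoint; `sleep` stands in for the clock.

def poll_for_token(request, sleep, device_code, interval=5, max_attempts=20):
    """Poll the token endpoint until the user approves, or give up."""
    for _ in range(max_attempts):
        resp = request({
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": device_code,
        })
        if "access_token" in resp:
            return resp["access_token"]
        err = resp.get("error")
        if err == "authorization_pending":
            sleep(interval)        # user hasn't finished logging in yet
        elif err == "slow_down":
            interval += 5          # RFC 8628 requires increasing the interval
            sleep(interval)
        else:
            raise RuntimeError(f"token endpoint failed: {err}")
    raise TimeoutError("user never completed the device login")


# Simulated authorization server: pending once, throttled once, then success.
responses = iter([
    {"error": "authorization_pending"},
    {"error": "slow_down"},
    {"access_token": "tok-123", "token_type": "Bearer"},
])
waits = []
token = poll_for_token(lambda body: next(responses), waits.append, "dev-42")
print(token)  # tok-123
print(waits)  # [5, 10]
```

The simulation shows why the interval bump on `slow_down` matters: a client that keeps hammering the endpoint at the original cadence would be rejected by compliant servers.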

v48-0004-fixup-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)
From 0929bfbc5fc371e8f0a8ca25c74fadcd0bd9be50 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 7 Feb 2025 15:39:35 -0800
Subject: [PATCH v48 4/8] fixup! Add OAUTHBEARER SASL mechanism

---
 src/test/modules/oauth_validator/t/001_server.pl | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index f0b918390fd..d2dda62a2d4 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -14,12 +14,18 @@ use MIME::Base64 qw(encode_base64);
 use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
+use Config;
 
 use FindBin;
 use lib $FindBin::RealBin;
 
 use OAuth::Server;
 
+if ($Config{osname} eq 'MSWin32')
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
 if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
 {
 	plan skip_all =>
@@ -402,7 +408,10 @@ note "running '" . join("' '", @cmd) . "'";
 my ($stdout, $stderr) = run_command(\@cmd);
 
 like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
-unlike($stderr, qr/connection to database failed/, "stress-async: stderr matches");
+unlike(
+	$stderr,
+	qr/connection to database failed/,
+	"stress-async: stderr matches");
 
 #
 # This section of tests reconfigures the validator module itself, rather than
-- 
2.34.1

v48-0005-fixup-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)
From be882ef6eaeba0a9ea536e65a87147e40281e5a4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 7 Feb 2025 16:20:25 -0800
Subject: [PATCH v48 5/8] fixup! Add OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 2179bb89800..74323de309a 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -32,7 +32,19 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
-#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+/*
+ * It's generally prudent to set a maximum response size to buffer in memory,
+ * but it's less clear what size to choose. The biggest of our expected
+ * responses is the server metadata JSON, which will only continue to grow in
+ * size; the number of IANA-registered parameters in that document is up to 78
+ * as of February 2025.
+ *
+ * Even if every single parameter were to take up 2k on average (a previously
+ * common limit on the size of a URL), 256k gives us 128 parameter values before
+ * we give up. (That's almost certainly complete overkill in practice; 2-4k
+ * appears to be common among popular providers at the moment.)
+ */
+#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)
 
 /*
  * Parsed JSON Representations
-- 
2.34.1
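The arithmetic in the comment above is easy to sanity-check. The numbers below simply mirror the patch (78 IANA-registered metadata parameters as of February 2025, a 2 kB per-value budget, a 256 kB cap); nothing here is normative.

```python
# Sanity check for the MAX_OAUTH_RESPONSE_SIZE reasoning: how many 2 kB
# parameter values fit under the 256 kB cap, versus how many parameters are
# currently registered?

MAX_OAUTH_RESPONSE_SIZE = 256 * 1024
PER_PARAM_BUDGET = 2 * 1024    # a historically common cap on URL length
REGISTERED_PARAMS = 78         # IANA registry size cited in the patch

capacity = MAX_OAUTH_RESPONSE_SIZE // PER_PARAM_BUDGET
print(capacity)                          # 128
print(capacity - REGISTERED_PARAMS)      # 50 parameters of headroom
```

So even under the pessimistic 2 kB-per-value assumption, the registry can grow by roughly 60% before a metadata document would hit the cap.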

v48-0006-fixup-Add-OAUTHBEARER-SASL-mechanism.patch (application/x-patch)
From 954341052b44ab0f89e679926c9b66a085d2f064 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 6 Feb 2025 20:53:33 -0800
Subject: [PATCH v48 6/8] fixup! Add OAUTHBEARER SASL mechanism

---
 doc/src/sgml/oauth-validators.sgml            | 22 +++++----
 src/backend/libpq/auth-oauth.c                | 20 ++++++--
 src/include/libpq/oauth.h                     | 48 ++++++++++++++++++-
 .../modules/oauth_validator/fail_validator.c  | 15 ++++--
 src/test/modules/oauth_validator/validator.c  | 28 +++++++----
 5 files changed, 105 insertions(+), 28 deletions(-)

diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index c8bbac7b462..eb8c4431c2d 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -297,13 +297,15 @@
    validator module a function named
    <function>_PG_oauth_validator_module_init</function> must be provided. The
    return value of the function must be a pointer to a struct of type
-   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
-   the module's token validation functions. The returned
+   <structname>OAuthValidatorCallbacks</structname>, which contains a magic
+   number and pointers to the module's token validation functions. The returned
    pointer must be of server lifetime, which is typically achieved by defining
    it as a <literal>static const</literal> variable in global scope.
 <programlisting>
 typedef struct OAuthValidatorCallbacks
 {
+    uint32        magic;            /* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
     ValidatorStartupCB startup_cb;
     ValidatorShutdownCB shutdown_cb;
     ValidatorValidateCB validate_cb;
@@ -348,14 +350,16 @@ typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
     previous calls will be available in <structfield>state->private_data</structfield>.
 
 <programlisting>
-typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+                                     const char *token, const char *role,
+                                     ValidatorModuleResult *result);
 </programlisting>
 
     <replaceable>token</replaceable> will contain the bearer token to validate.
     The server has ensured that the token is well-formed syntactically, but no
     other validation has been performed.  <replaceable>role</replaceable> will
     contain the role the user has requested to log in as.  The callback must
-    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    set output parameters in the <literal>result</literal> struct, which is
     defined as below:
 
 <programlisting>
@@ -375,17 +379,17 @@ typedef struct ValidatorModuleResult
     determined.
    </para>
    <para>
-    The caller assumes ownership of the returned memory allocation, the
-    validator module should not in any way access the memory after it has been
-    returned.  A validator may instead return NULL to signal an internal
-    error.
+    A validator may return <literal>false</literal> to signal an internal error,
+    in which case any result parameters are ignored and the connection fails.
+    Otherwise the validator should return <literal>true</literal> to indicate
+    that it has processed the token and made an authorization decision.
    </para>
    <para>
     The behavior after <function>validate_cb</function> returns depends on the
     specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
     name must exactly match the role that the user is logging in as.  (This
     behavior may be modified with a usermap.)  But when authenticating against
-    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
     server will not perform any checks on the value of
     <structfield>authn_id</structfield> at all; in this case it is up to the
     validator to ensure that the token carries enough privileges for the user to
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index aa16977c643..e2b5d1ed913 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -656,9 +656,9 @@ validate(Port *port, const char *auth)
 				errmsg("validation of OAuth token requested without a validator loaded"));
 
 	/* Call the validation function from the validator module */
-	ret = ValidatorCallbacks->validate_cb(validator_module_state,
-										  token, port->user_name);
-	if (ret == NULL)
+	ret = palloc0(sizeof(ValidatorModuleResult));
+	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
+										 port->user_name, ret))
 	{
 		ereport(LOG, errmsg("internal error in OAuth validator module"));
 		return false;
@@ -756,8 +756,22 @@ load_validator_library(const char *libname)
 	ValidatorCallbacks = (*validator_init) ();
 	Assert(ValidatorCallbacks);
 
+	/*
+	 * Check the magic number, to protect against break-glass scenarios where
+	 * the ABI must change within a major version. load_external_function()
+	 * already checks for compatibility across major versions.
+	 */
+	if (ValidatorCallbacks->magic != PG_OAUTH_VALIDATOR_MAGIC)
+		ereport(ERROR,
+				errmsg("%s module \"%s\": magic number mismatch",
+					   "OAuth validator", libname),
+				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
+						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
+
 	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	validator_module_state->sversion = PG_VERSION_NUM;
+
 	if (ValidatorCallbacks->startup_cb != NULL)
 		ValidatorCallbacks->startup_cb(validator_module_state);
 
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 4fcdda74305..7e249613e10 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -20,26 +20,72 @@ extern PGDLLIMPORT char *oauth_validator_libraries_string;
 
 typedef struct ValidatorModuleState
 {
+	/* Holds the server's PG_VERSION_NUM. Reserved for future extensibility. */
+	int			sversion;
+
+	/*
+	 * Private data pointer for use by a validator module. This can be used to
+	 * store state for the module that will be passed to each of its
+	 * callbacks.
+	 */
 	void	   *private_data;
 } ValidatorModuleState;
 
 typedef struct ValidatorModuleResult
 {
+	/*
+	 * Should be set to true if the token carries sufficient permissions for
+	 * the bearer to connect.
+	 */
 	bool		authorized;
+
+	/*
+	 * If the token authenticates the user, this should be set to a palloc'd
+	 * string containing the SYSTEM_USER to use for HBA mapping. Consider
+	 * setting this even if result->authorized is false so that DBAs may use
+	 * the logs to match end users to token failures.
+	 *
+	 * This is required if the module is not configured for ident mapping
+	 * delegation. See the validator module documentation for details.
+	 */
 	char	   *authn_id;
 } ValidatorModuleResult;
 
+/*
+ * Validator module callbacks
+ *
+ * These callback functions should be defined by validator modules and returned
+ * via _PG_oauth_validator_module_init().  ValidatorValidateCB is the only
+ * required callback. For more information about the purpose of each callback,
+ * refer to the OAuth validator modules documentation.
+ */
 typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
 typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
-typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+									 const char *token, const char *role,
+									 ValidatorModuleResult *result);
+
+/*
+ * Identifies the compiled ABI version of the validator module. Since the server
+ * already enforces the PG_MODULE_MAGIC number for modules across major
+ * versions, this is reserved for emergency use within a stable release line.
+ * May it never need to change.
+ */
+#define PG_OAUTH_VALIDATOR_MAGIC 0x20250207
 
 typedef struct OAuthValidatorCallbacks
 {
+	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
 	ValidatorStartupCB startup_cb;
 	ValidatorShutdownCB shutdown_cb;
 	ValidatorValidateCB validate_cb;
 } OAuthValidatorCallbacks;
 
+/*
+ * Type of the shared library symbol _PG_oauth_validator_module_init that is
+ * looked up when loading a validator module.
+ */
 typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
 extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
index f77a3e115c6..7b1e69518d9 100644
--- a/src/test/modules/oauth_validator/fail_validator.c
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -19,12 +19,15 @@
 
 PG_MODULE_MAGIC;
 
-static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
-										 const char *token,
-										 const char *role);
+static bool fail_token(const ValidatorModuleState *state,
+					   const char *token,
+					   const char *role,
+					   ValidatorModuleResult *result);
 
 /* Callback implementations (we only need the main one) */
 static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
 	.validate_cb = fail_token,
 };
 
@@ -34,8 +37,10 @@ _PG_oauth_validator_module_init(void)
 	return &validator_callbacks;
 }
 
-static ValidatorModuleResult *
-fail_token(ValidatorModuleState *state, const char *token, const char *role)
+static bool
+fail_token(const ValidatorModuleState *state,
+		   const char *token, const char *role,
+		   ValidatorModuleResult *res)
 {
 	elog(FATAL, "fail_validator: sentinel error");
 	pg_unreachable();
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index ef9bbb2866f..e218f5c8902 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -23,12 +23,15 @@ PG_MODULE_MAGIC;
 
 static void validator_startup(ValidatorModuleState *state);
 static void validator_shutdown(ValidatorModuleState *state);
-static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
-											 const char *token,
-											 const char *role);
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
 
 /* Callback implementations (exercise all three) */
 static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
 	.startup_cb = validator_startup,
 	.shutdown_cb = validator_shutdown,
 	.validate_cb = validate_token
@@ -89,6 +92,13 @@ _PG_oauth_validator_module_init(void)
 static void
 validator_startup(ValidatorModuleState *state)
 {
+	/*
+	 * Make sure the server is correctly setting sversion. (Real modules
+	 * should not do this; it would defeat upgrade compatibility.)
+	 */
+	if (state->sversion != PG_VERSION_NUM)
+		elog(ERROR, "oauth_validator: sversion set to %d", state->sversion);
+
 	state->private_data = PRIVATE_COOKIE;
 }
 
@@ -108,18 +118,16 @@ validator_shutdown(ValidatorModuleState *state)
  * Validator implementation. Logs the incoming data and authorizes the token by
  * default; the behavior can be modified via the module's GUC settings.
  */
-static ValidatorModuleResult *
-validate_token(ValidatorModuleState *state, const char *token, const char *role)
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
 {
-	ValidatorModuleResult *res;
-
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
 		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
 			 state->private_data);
 
-	res = palloc(sizeof(ValidatorModuleResult));
-
 	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
 	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
 		 MyProcPort->hba->oauth_issuer,
@@ -131,5 +139,5 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 	else
 		res->authn_id = pstrdup(role);
 
-	return res;
+	return true;
 }
-- 
2.34.1
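The server-side decision the patch above documents — honor the validator's answer outright under `delegate_ident_mapping`, otherwise require `authn_id` to map to the requested role — can be modeled in a few lines of Python. This is a toy model for illustration only: the dict stands in for pg_ident, and the field names mirror `ValidatorModuleResult`, but the real checks live in the server's C code.

```python
# Toy model of the post-validate_cb authorization decision. `result` mirrors
# ValidatorModuleResult: an "authorized" flag plus an optional "authn_id".

def check_login(result, role, delegate_ident_mapping, ident_map):
    if not result["authorized"]:
        return False               # token lacks sufficient privileges
    if delegate_ident_mapping:
        return True                # validator made the entire decision
    authn_id = result.get("authn_id")
    if authn_id is None:
        return False               # can't apply a usermap without an identity
    return role in ident_map.get(authn_id, ())

ident_map = {"alice@example.com": {"alice"}}
ok = check_login({"authorized": True, "authn_id": "alice@example.com"},
                 "alice", False, ident_map)
print(ok)  # True
```

Note the asymmetry the docs warn about: with delegation enabled, `authn_id` is never checked, so the validator alone must ensure the token carries enough privileges for the requested role.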

v48-0007-XXX-fix-libcurl-link-error.patch (application/x-patch)
From d88a2938e7e6a7299ca50e98dc04c7b3dd88fa8a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v48 7/8] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index c192a077701..3afea832bc9 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

v48-0008-DO-NOT-MERGE-Add-pytest-suite-for-OAuth.patch (application/x-patch)
From 44e5cbc8ad1f3e9fcb8af3e69c807b7d31d0e8e9 Mon Sep 17 00:00:00 2001
From: Jacob Champion <pchampion@vmware.com>
Date: Fri, 4 Jun 2021 09:06:38 -0700
Subject: [PATCH v48 8/8] DO NOT MERGE: Add pytest suite for OAuth

Requires Python 3. On the first run of `make installcheck` or `meson
test` the dependencies will be installed into a local virtualenv for
you. See the README for more details.

Cirrus has been updated to build OAuth support on Debian and FreeBSD.

The suite contains a --temp-instance option, analogous to pg_regress's
option of the same name, which allows an ephemeral server to be spun up
during a test run.

TODOs:
- The --tap-stream option to pytest-tap is slightly broken during test
  failures (it suppresses error information), which impedes debugging.
- pyca/cryptography is pinned at an old version. Since we use it for
  testing and not security, this isn't a critical problem yet, but it's
  not ideal. Newer versions require a Rust compiler to build, and while
  many platforms have precompiled wheels, some (FreeBSD) do not. Even
  with the Rust pieces bypassed, compilation on FreeBSD takes a while.
- The with_oauth test skip logic should probably be integrated into the
  Makefile side as well...
- See if 32-bit tests can be enabled with a 32-bit Python.
---
 .cirrus.tasks.yml                     |    6 +-
 meson.build                           |  103 +
 src/test/meson.build                  |    1 +
 src/test/python/.gitignore            |    2 +
 src/test/python/Makefile              |   38 +
 src/test/python/README                |   66 +
 src/test/python/client/__init__.py    |    0
 src/test/python/client/conftest.py    |  196 ++
 src/test/python/client/test_client.py |  186 ++
 src/test/python/client/test_oauth.py  | 2663 +++++++++++++++++++++++++
 src/test/python/conftest.py           |   34 +
 src/test/python/meson.build           |   47 +
 src/test/python/pq3.py                |  740 +++++++
 src/test/python/pytest.ini            |    4 +
 src/test/python/requirements.txt      |   11 +
 src/test/python/server/__init__.py    |    0
 src/test/python/server/conftest.py    |  141 ++
 src/test/python/server/meson.build    |   18 +
 src/test/python/server/oauthtest.c    |  119 ++
 src/test/python/server/test_oauth.py  | 1080 ++++++++++
 src/test/python/server/test_server.py |   21 +
 src/test/python/test_internals.py     |  138 ++
 src/test/python/test_pq3.py           |  574 ++++++
 src/test/python/tls.py                |  195 ++
 src/tools/make_venv                   |   56 +
 src/tools/testwrap                    |    7 +
 26 files changed, 6445 insertions(+), 1 deletion(-)
 create mode 100644 src/test/python/.gitignore
 create mode 100644 src/test/python/Makefile
 create mode 100644 src/test/python/README
 create mode 100644 src/test/python/client/__init__.py
 create mode 100644 src/test/python/client/conftest.py
 create mode 100644 src/test/python/client/test_client.py
 create mode 100644 src/test/python/client/test_oauth.py
 create mode 100644 src/test/python/conftest.py
 create mode 100644 src/test/python/meson.build
 create mode 100644 src/test/python/pq3.py
 create mode 100644 src/test/python/pytest.ini
 create mode 100644 src/test/python/requirements.txt
 create mode 100644 src/test/python/server/__init__.py
 create mode 100644 src/test/python/server/conftest.py
 create mode 100644 src/test/python/server/meson.build
 create mode 100644 src/test/python/server/oauthtest.c
 create mode 100644 src/test/python/server/test_oauth.py
 create mode 100644 src/test/python/server/test_server.py
 create mode 100644 src/test/python/test_internals.py
 create mode 100644 src/test/python/test_pq3.py
 create mode 100644 src/test/python/tls.py
 create mode 100755 src/tools/make_venv

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 3afea832bc9..06efe5f9b0a 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth python
 
 
 # What files to preserve in case tests fail
@@ -321,6 +321,7 @@ task:
     DEBIAN_FRONTEND=noninteractive apt-get -y install \
       libcurl4-openssl-dev \
       libcurl4-openssl-dev:i386 \
+      python3-venv \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -405,8 +406,11 @@ task:
       # can easily provide some here by running one of the sets of tests that
       # way. Newer versions of python insist on changing the LC_CTYPE away
       # from C, prevent that with PYTHONCOERCECLOCALE.
+      # XXX 32-bit Python tests are currently disabled, as the system's 64-bit
+      # Python modules can't link against libpq.
       test_world_32_script: |
         su postgres <<-EOF
+          export PG_TEST_EXTRA="${PG_TEST_EXTRA//python}"
           ulimit -c unlimited
           PYTHONCOERCECLOCALE=0 LANG=C meson test $MTEST_ARGS -C build-32 --num-processes ${TEST_JOBS}
         EOF
diff --git a/meson.build b/meson.build
index 96e5f0f6434..6e60c8d3dae 100644
--- a/meson.build
+++ b/meson.build
@@ -3458,6 +3458,9 @@ else
 endif
 
 testwrap = files('src/tools/testwrap')
+make_venv = files('src/tools/make_venv')
+
+checked_working_venv = false
 
 foreach test_dir : tests
   testwrap_base = [
@@ -3626,6 +3629,106 @@ foreach test_dir : tests
         )
       endforeach
       install_suites += test_group
+    elif kind == 'pytest'
+      venv_name = test_dir['name'] + '_venv'
+      venv_path = meson.build_root() / venv_name
+
+      # The Python tests require a working venv module. This is part of the
+      # standard library, but some platforms disable it until a separate package
+      # is installed. Those same platforms don't provide an easy way to check
+      # whether the venv command will work until the first time you try it, so
+      # we decide whether or not to enable these tests on the fly.
+      if not checked_working_venv
+        cmd = run_command(python, '-m', 'venv', venv_path, check: false)
+
+        have_working_venv = (cmd.returncode() == 0)
+        if not have_working_venv
+          warning('A working Python venv module is required to run Python tests.')
+        endif
+
+        checked_working_venv = true
+      endif
+
+      if not have_working_venv
+        continue
+      endif
+
+      # Make sure the temporary installation is in PATH (necessary both for
+      # --temp-instance and for any pip modules compiling against libpq, like
+      # psycopg2).
+      env = test_env
+      env.prepend('PATH', temp_install_bindir, test_dir['bd'])
+
+      foreach name, value : t.get('env', {})
+        env.set(name, value)
+      endforeach
+
+      reqs = files(t['requirements'])
+      test('install_' + venv_name,
+        python,
+        args: [ make_venv, '--requirements', reqs, venv_path ],
+        env: env,
+        priority: setup_tests_priority - 1,  # must run after tmp_install
+        is_parallel: false,
+        suite: ['setup'],
+        timeout: 60,  # 30s is too short for the cryptography package compile
+      )
+
+      test_group = test_dir['name']
+      test_output = test_result_dir / test_group / kind
+      test_kwargs = {
+        #'protocol': 'tap',
+        'suite': test_group,
+        'timeout': 1000,
+        'depends': test_deps,
+        'env': env,
+      } + t.get('test_kwargs', {})
+
+      if fs.is_dir(venv_path / 'Scripts')
+        # Windows virtualenv layout
+        pytest = venv_path / 'Scripts' / 'py.test'
+      else
+        pytest = venv_path / 'bin' / 'py.test'
+      endif
+
+      test_command = [
+        pytest,
+        # Avoid running these tests against an existing database.
+        '--temp-instance', test_output / 'data',
+
+        # FIXME pytest-tap's stream feature accidentally suppresses errors that
+        # are critical for debugging:
+        #     https://github.com/python-tap/pytest-tap/issues/30
+        # Don't use the meson TAP protocol for now...
+        #'--tap-stream',
+      ]
+
+      foreach pyt : t['tests']
+        # Similarly to TAP, strip ./ and .py to make the names prettier
+        pyt_p = pyt
+        if pyt_p.startswith('./')
+          pyt_p = pyt_p.split('./')[1]
+        endif
+        if pyt_p.endswith('.py')
+          pyt_p = fs.stem(pyt_p)
+        endif
+
+        testwrap_pytest = testwrap_base + [
+          '--testgroup', test_group,
+          '--testname', pyt_p,
+          '--skip-without-extra', 'python',
+        ]
+
+        test(test_group / pyt_p,
+          python,
+          kwargs: test_kwargs,
+          args: testwrap_pytest + [
+            '--', test_command,
+            test_dir['sd'] / pyt,
+          ],
+        )
+      endforeach
+      install_suites += test_group
     else
       error('unknown kind @0@ of test in @1@'.format(kind, test_dir['sd']))
     endif
diff --git a/src/test/meson.build b/src/test/meson.build
index ccc31d6a86a..236057cd99e 100644
--- a/src/test/meson.build
+++ b/src/test/meson.build
@@ -8,6 +8,7 @@ subdir('postmaster')
 subdir('recovery')
 subdir('subscription')
 subdir('modules')
+subdir('python')
 
 if ssl.found()
   subdir('ssl')
diff --git a/src/test/python/.gitignore b/src/test/python/.gitignore
new file mode 100644
index 00000000000..0e8f027b2ec
--- /dev/null
+++ b/src/test/python/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+/venv/
diff --git a/src/test/python/Makefile b/src/test/python/Makefile
new file mode 100644
index 00000000000..b0695b6287e
--- /dev/null
+++ b/src/test/python/Makefile
@@ -0,0 +1,38 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+# Only Python 3 is supported, but if it's named something different on your
+# system you can override it with the PYTHON3 variable.
+PYTHON3 := python3
+
+# All dependencies are placed into this directory. The default is .gitignored
+# for you, but you can override it if you'd like.
+VENV := ./venv
+
+override VBIN   := $(VENV)/bin
+override PIP    := $(VBIN)/pip
+override PYTEST := $(VBIN)/py.test
+override ISORT  := $(VBIN)/isort
+override BLACK  := $(VBIN)/black
+
+.PHONY: installcheck indent
+
+installcheck: $(PYTEST)
+	$(PYTEST) -v -rs
+
+indent: $(ISORT) $(BLACK)
+	$(ISORT) --profile black *.py client/*.py server/*.py
+	$(BLACK) *.py client/*.py server/*.py
+
+$(PYTEST) $(ISORT) $(BLACK) &: requirements.txt | $(PIP)
+	$(PIP) install --force-reinstall -r $<
+
+$(PIP):
+	$(PYTHON3) -m venv $(VENV)
+
+# A convenience recipe to rebuild psycopg2 against the local libpq.
+.PHONY: rebuild-psycopg2
+rebuild-psycopg2: | $(PIP)
+	$(PIP) install --force-reinstall --no-binary :all: $(shell grep psycopg2 requirements.txt)
diff --git a/src/test/python/README b/src/test/python/README
new file mode 100644
index 00000000000..acf339a5899
--- /dev/null
+++ b/src/test/python/README
@@ -0,0 +1,66 @@
+A test suite for exercising both the libpq client and the server backend at the
+protocol level, based on pytest and Construct.
+
+WARNING! This suite takes superuser-level control of the cluster under test,
+writing to the server config, creating and destroying databases, etc. It also
+spins up various ephemeral TCP services. This is not safe for production servers
+and therefore must be explicitly opted into by setting PG_TEST_EXTRA=python in
+the environment.
+
+The test suite currently assumes that the standard PG* environment variables
+point to the database under test and are sufficient to log in a superuser on
+that system. In other words, a bare `psql` needs to Just Work before the test
+suite can do its thing. For a newly built dev cluster, typically all that I need
+to do is a
+
+    export PGDATABASE=postgres
+
+but you can adjust as needed for your setup. See also 'Advanced Usage' below.
+
+## Requirements
+
+A supported version (3.6+) of Python.
+
+The first run of
+
+    make installcheck PG_TEST_EXTRA=python
+
+will install a local virtual environment and all needed dependencies. During
+development, if libpq changes incompatibly, you can issue
+
+    $ make rebuild-psycopg2
+
+to force a rebuild of the client library.
+
+## Hacking
+
+The code style is enforced by a _very_ opinionated autoformatter. Running the
+
+    make indent
+
+recipe will invoke it for you automatically. Don't fight the tool; part of the
+zen is in knowing that if the formatter makes your code ugly, there's probably a
+cleaner way to write your code.
+
+## Advanced Usage
+
+The Makefile is there for convenience, but you don't have to use it. Activate
+the virtualenv to be able to use pytest directly:
+
+    $ export PG_TEST_EXTRA=python
+    $ source venv/bin/activate
+    $ py.test -k oauth
+    ...
+    $ py.test ./server/test_server.py
+    ...
+    $ deactivate  # puts the PATH et al back the way it was before
+
+To make quick smoke tests possible, slow tests have been marked explicitly. You
+can skip them by saying e.g.
+
+    $ py.test -m 'not slow'
+
+If you'd rather not test against an existing server, you can have the suite spin
+up a temporary one using whatever pg_ctl it finds in PATH:
+
+    $ py.test --temp-instance=./tmp_check
diff --git a/src/test/python/client/__init__.py b/src/test/python/client/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/client/conftest.py b/src/test/python/client/conftest.py
new file mode 100644
index 00000000000..20e72a404aa
--- /dev/null
+++ b/src/test/python/client/conftest.py
@@ -0,0 +1,196 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import datetime
+import functools
+import ipaddress
+import os
+import socket
+import sys
+import threading
+
+import psycopg2
+import psycopg2.extras
+import pytest
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes, serialization
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+@pytest.fixture
+def server_socket(unused_tcp_port_factory):
+    """
+    Returns a listening socket bound to an ephemeral port.
+    """
+    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+        s.bind(("127.0.0.1", unused_tcp_port_factory()))
+        s.listen(1)
+        s.settimeout(BLOCKING_TIMEOUT)
+        yield s
+
+
+class ClientHandshake(threading.Thread):
+    """
+    A thread that connects to a local Postgres server using psycopg2. Once the
+    opening handshake completes, the connection will be immediately closed.
+    """
+
+    def __init__(self, *, port, **kwargs):
+        super().__init__()
+
+        kwargs["port"] = port
+        self._kwargs = kwargs
+
+        self.exception = None
+
+    def run(self):
+        try:
+            conn = psycopg2.connect(host="127.0.0.1", **self._kwargs)
+            with contextlib.closing(conn):
+                self._pump_async(conn)
+        except Exception as e:
+            self.exception = e
+
+    def check_completed(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Joins the client thread. Raises an exception if the thread could not be
+        joined, or if it threw an exception itself. (The exception will be
+        cleared, so future calls to check_completed will succeed.)
+        """
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("client thread did not handshake within the timeout")
+        elif self.exception:
+            e = self.exception
+            self.exception = None
+            raise e
+
+    def _pump_async(self, conn):
+        """
+        Polls a psycopg2 connection until it's completed. (Synchronous
+        connections will work here too; they'll just immediately return OK.)
+        """
+        psycopg2.extras.wait_select(conn)
+
+
+@pytest.fixture
+def accept(server_socket):
+    """
+    Returns a factory function that, when called, returns a pair (sock, client)
+    where sock is a server socket that has accepted a connection from client,
+    and client is an instance of ClientHandshake. Clients will complete their
+    handshakes and cleanly disconnect.
+
+    The default connstring options may be extended or overridden by passing
+    arbitrary keyword arguments. Keep in mind that you generally should not
+    override the host or port, since they point to the local test server.
+
+    For situations where a client needs to connect more than once to complete a
+    handshake, the accept function may be called more than once. (The client
+    returned for subsequent calls will always be the same client that was
+    returned for the first call.)
+
+    Tests must either complete the handshake so that the client thread can be
+    automatically joined during teardown, or else call client.check_completed()
+    and manually handle any expected errors.
+    """
+    _, port = server_socket.getsockname()
+
+    client = None
+    default_opts = dict(
+        port=port,
+        user=pq3.pguser(),
+        sslmode="disable",
+    )
+
+    def factory(**kwargs):
+        nonlocal client
+
+        if client is None:
+            opts = dict(default_opts)
+            opts.update(kwargs)
+
+            # The server_socket is already listening, so the client thread can
+            # be safely started; it'll block on the connection until we accept.
+            client = ClientHandshake(**opts)
+            client.start()
+
+        sock, _ = server_socket.accept()
+        sock.settimeout(BLOCKING_TIMEOUT)
+        return sock, client
+
+    yield factory
+
+    if client is not None:
+        client.check_completed()
+
+
+@pytest.fixture
+def conn(accept):
+    """
+    Returns an accepted, wrapped pq3 connection to a psycopg2 client. The socket
+    will be closed when the test finishes, and the client will be checked for a
+    cleanly completed handshake.
+    """
+    sock, client = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+@pytest.fixture(scope="session")
+def certpair(tmp_path_factory):
+    """
+    Yields a (cert, key) pair of file paths that can be used by a TLS server.
+    The certificate is issued for "localhost" and its standard IPv4/6 addresses.
+    """
+
+    tmpdir = tmp_path_factory.mktemp("certs")
+    now = datetime.datetime.now(datetime.timezone.utc)
+
+    # https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate
+    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+
+    subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
+    altNames = [
+        x509.DNSName("localhost"),
+        x509.IPAddress(ipaddress.IPv4Address("127.0.0.1")),
+        x509.IPAddress(ipaddress.IPv6Address("::1")),
+    ]
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(subject)
+        .issuer_name(issuer)
+        .public_key(key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(minutes=10))
+        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
+        .add_extension(x509.SubjectAlternativeName(altNames), critical=False)
+    ).sign(key, hashes.SHA256())
+
+    # Writing the key with mode 0600 lets us use this from the server side, too.
+    keypath = str(tmpdir / "key.pem")
+    with open(keypath, "wb", opener=functools.partial(os.open, mode=0o600)) as f:
+        f.write(
+            key.private_bytes(
+                encoding=serialization.Encoding.PEM,
+                format=serialization.PrivateFormat.PKCS8,
+                encryption_algorithm=serialization.NoEncryption(),
+            )
+        )
+
+    certpath = str(tmpdir / "cert.pem")
+    with open(certpath, "wb") as f:
+        f.write(cert.public_bytes(serialization.Encoding.PEM))
+
+    return certpath, keypath
diff --git a/src/test/python/client/test_client.py b/src/test/python/client/test_client.py
new file mode 100644
index 00000000000..8372376ede4
--- /dev/null
+++ b/src/test/python/client/test_client.py
@@ -0,0 +1,186 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import sys
+
+import psycopg2
+import pytest
+from cryptography.hazmat.primitives import hashes, hmac
+
+import pq3
+
+from .test_oauth import alt_patterns
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+def test_handshake(conn):
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    finish_handshake(conn)
+
+
+def test_aborted_connection(accept):
+    """
+    Make sure the client correctly reports an early close during handshakes.
+    """
+    sock, client = accept()
+    sock.close()
+
+    expected = alt_patterns(
+        "server closed the connection unexpectedly",
+        # On some platforms, ECONNABORTED gets set instead.
+        "Software caused connection abort",
+    )
+    with pytest.raises(psycopg2.OperationalError, match=expected):
+        client.check_completed()
+
+
+#
+# SCRAM-SHA-256 (see RFC 5802: https://tools.ietf.org/html/rfc5802)
+#
+
+
+@pytest.fixture
+def password():
+    """
+    Returns a password for use by both client and server.
+    """
+    # TODO: parameterize this with passwords that require SASLprep.
+    return "secret"
+
+
+@pytest.fixture
+def pwconn(accept, password):
+    """
+    Like the conn fixture, but uses a password in the connection.
+    """
+    sock, client = accept(password=password)
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            yield conn
+
+
+def sha256(data):
+    """The H(str) function from Section 2.2."""
+    digest = hashes.Hash(hashes.SHA256())
+    digest.update(data)
+    return digest.finalize()
+
+
+def hmac_256(key, data):
+    """The HMAC(key, str) function from Section 2.2."""
+    h = hmac.HMAC(key, hashes.SHA256())
+    h.update(data)
+    return h.finalize()
+
+
+def xor(a, b):
+    """The XOR operation from Section 2.2."""
+    res = bytearray(a)
+    for i, byte in enumerate(b):
+        res[i] ^= byte
+    return bytes(res)
+
+
+def h_i(data, salt, i):
+    """The Hi(str, salt, i) function from Section 2.2."""
+    assert i > 0
+
+    acc = hmac_256(data, salt + b"\x00\x00\x00\x01")
+    last = acc
+    i -= 1
+
+    while i:
+        u = hmac_256(data, last)
+        acc = xor(acc, u)
+
+        last = u
+        i -= 1
+
+    return acc
+
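Incidentally, the hand-rolled Hi() above can be cross-checked against the standard library: RFC 5802 defines Hi(str, salt, i) as PBKDF2 [RFC 2898] with HMAC-SHA-256 as the PRF and a derived-key length of one hash output (32 bytes). A sketch for verification only, not used by the tests:

```python
import hashlib


def h_i_pbkdf2(data, salt, i):
    """Hi(str, salt, i) from RFC 5802 Section 2.2, via the stdlib."""
    # PBKDF2 with HMAC-SHA-256 and a 32-byte derived key is exactly Hi();
    # hashlib implements it natively.
    return hashlib.pbkdf2_hmac("sha256", data, salt, i)
```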
+
+def test_scram(pwconn, password):
+    startup = pq3.recv1(pwconn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        pwconn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASL,
+        body=[b"SCRAM-SHA-256", b""],
+    )
+
+    # Get the client-first-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"SCRAM-SHA-256"
+
+    c_bind, authzid, c_name, c_nonce = initial.data.split(b",")
+    assert c_bind == b"n"  # no channel bindings on a plaintext connection
+    assert authzid == b""  # we don't support authzid currently
+    assert c_name == b"n="  # libpq doesn't honor the GS2 username
+    assert c_nonce.startswith(b"r=")
+
+    # Send the server-first-message.
+    salt = b"12345"
+    iterations = 2
+
+    s_nonce = c_nonce + b"somenonce"
+    s_salt = b"s=" + base64.b64encode(salt)
+    s_iterations = b"i=%d" % iterations
+
+    msg = b",".join([s_nonce, s_salt, s_iterations])
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=msg)
+
+    # Get the client-final-message.
+    pkt = pq3.recv1(pwconn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    c_bind_final, c_nonce_final, c_proof = pkt.payload.split(b",")
+    assert c_bind_final == b"c=" + base64.b64encode(c_bind + b"," + authzid + b",")
+    assert c_nonce_final == s_nonce
+
+    # Calculate what the client proof should be.
+    salted_password = h_i(password.encode("ascii"), salt, iterations)
+    client_key = hmac_256(salted_password, b"Client Key")
+    stored_key = sha256(client_key)
+
+    auth_message = b",".join(
+        [c_name, c_nonce, s_nonce, s_salt, s_iterations, c_bind_final, c_nonce_final]
+    )
+    client_signature = hmac_256(stored_key, auth_message)
+    client_proof = xor(client_key, client_signature)
+
+    expected = b"p=" + base64.b64encode(client_proof)
+    assert c_proof == expected
+
+    # Send the correct server signature.
+    server_key = hmac_256(salted_password, b"Server Key")
+    server_signature = hmac_256(server_key, auth_message)
+
+    s_verify = b"v=" + base64.b64encode(server_signature)
+    pq3.send(pwconn, pq3.types.AuthnRequest, type=pq3.authn.SASLFinal, body=s_verify)
+
+    # Done!
+    finish_handshake(pwconn)
diff --git a/src/test/python/client/test_oauth.py b/src/test/python/client/test_oauth.py
new file mode 100644
index 00000000000..e0117bfc894
--- /dev/null
+++ b/src/test/python/client/test_oauth.py
@@ -0,0 +1,2663 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# Portions Copyright 2024 PostgreSQL Global Development Group
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import collections
+import contextlib
+import ctypes
+import http.server
+import json
+import logging
+import os
+import platform
+import secrets
+import socket
+import ssl
+import sys
+import threading
+import time
+import traceback
+import types
+import urllib.parse
+from numbers import Number
+
+import psycopg2
+import pytest
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+# The client tests need libpq to have been compiled with OAuth support; skip
+# them otherwise.
+pytestmark = pytest.mark.skipif(
+    os.getenv("with_libcurl") != "yes",
+    reason="OAuth client tests require --with-libcurl support",
+)
+
+if platform.system() == "Darwin":
+    libpq = ctypes.cdll.LoadLibrary("libpq.5.dylib")
+elif platform.system() == "Windows":
+    pass  # TODO
+else:
+    libpq = ctypes.cdll.LoadLibrary("libpq.so.5")
+
+
+def finish_handshake(conn):
+    """
+    Sends the AuthenticationOK message and the standard opening salvo of server
+    messages, then asserts that the client immediately sends a Terminate message
+    to close the connection cleanly.
+    """
+    pq3.send(conn, pq3.types.AuthnRequest, type=pq3.authn.OK)
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"client_encoding", value=b"UTF-8")
+    pq3.send(conn, pq3.types.ParameterStatus, name=b"DateStyle", value=b"ISO, MDY")
+    pq3.send(conn, pq3.types.ReadyForQuery, status=b"I")
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.Terminate
+
+
+#
+# OAUTHBEARER (see RFC 7628: https://tools.ietf.org/html/rfc7628)
+#
+
+
+def start_oauth_handshake(conn):
+    """
+    Negotiates an OAUTHBEARER SASL challenge. Returns the client's initial
+    response data.
+    """
+    startup = pq3.recv1(conn, cls=pq3.Startup)
+    assert startup.proto == pq3.protocol(3, 0)
+
+    pq3.send(
+        conn, pq3.types.AuthnRequest, type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]
+    )
+
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+
+    initial = pq3.SASLInitialResponse.parse(pkt.payload)
+    assert initial.name == b"OAUTHBEARER"
+
+    return initial.data
+
+
+def get_auth_value(initial):
+    """
+    Finds the auth value (e.g. "Bearer somedata...") in the client's initial
+    SASL response.
+    """
+    kvpairs = initial.split(b"\x01")
+    assert kvpairs[0] == b"n,,"  # no channel binding or authzid
+    assert kvpairs[2] == b""  # ends with an empty kvpair
+    assert kvpairs[3] == b""  # ...and there's nothing after it
+    assert len(kvpairs) == 4
+
+    key, value = kvpairs[1].split(b"=", 1)
+    assert key == b"auth"
+
+    return value
+
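To make the framing concrete, here is a hypothetical client initial response in the shape this helper expects (RFC 7628 Sec. 3.1: a gs2 header, then ^A-separated key=value pairs, ending in a ^A^A terminator). The token value is made up:

```python
# A hypothetical OAUTHBEARER client-initial-response; the token is invented.
initial = b"n,,\x01auth=Bearer some.token==\x01\x01"

gs2, auth_kv, *trailer = initial.split(b"\x01")
assert gs2 == b"n,,"          # no channel binding or authzid
assert trailer == [b"", b""]  # the terminating ^A^A

# maxsplit=1 keeps any '=' padding inside the token value intact.
key, value = auth_kv.split(b"=", 1)
assert key == b"auth"
assert value == b"Bearer some.token=="
```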
+
+def fail_oauth_handshake(conn, sasl_resp, *, errmsg="doesn't matter"):
+    """
+    Sends a failure response via the OAUTHBEARER mechanism, consumes the
+    client's dummy response, and issues a FATAL error to end the exchange.
+
+    sasl_resp is a dictionary which will be serialized as the OAUTHBEARER JSON
+    response. If provided, errmsg is used in the FATAL ErrorResponse.
+    """
+    resp = json.dumps(sasl_resp)
+    pq3.send(
+        conn,
+        pq3.types.AuthnRequest,
+        type=pq3.authn.SASLContinue,
+        body=resp.encode("utf-8"),
+    )
+
+    # Per RFC, the client is required to send a dummy ^A response.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.PasswordMessage
+    assert pkt.payload == b"\x01"
+
+    # Now fail the SASL exchange.
+    pq3.send(
+        conn,
+        pq3.types.ErrorResponse,
+        fields=[
+            b"SFATAL",
+            b"C28000",
+            b"M" + errmsg.encode("utf-8"),
+            b"",
+        ],
+    )
+
+
+def handle_discovery_connection(sock, discovery=None, *, response=None):
+    """
+    Helper for all tests that expect an initial discovery connection from the
+    client. The provided discovery URI will be used in a standard error response
+    from the server (or response may be set, to provide a custom dictionary),
+    and the SASL exchange will be failed.
+
+    The client is expected to complete the entire handshake; the exchange is
+    then failed with a standard FATAL error.
+    """
+    if response is None:
+        response = {"status": "invalid_token"}
+        if discovery is not None:
+            response["openid-configuration"] = discovery
+
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        initial = start_oauth_handshake(conn)
+
+        # For discovery, the client should send an empty auth header. See RFC
+        # 7628, Sec. 4.3.
+        auth = get_auth_value(initial)
+        assert auth == b""
+
+        # The discovery handshake is doomed to fail.
+        fail_oauth_handshake(conn, response)
+
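The error payload used above follows RFC 7628 Sec. 3.2.2: the server's SASLContinue challenge carries a JSON object describing the failure and, optionally, where to find the provider configuration. As a rough sketch (the discovery URL here is illustrative only):

```python
import json

# The JSON challenge carried in the server's SASLContinue message. The
# discovery URL is illustrative, not a real endpoint.
sasl_resp = {
    "status": "invalid_token",
    "openid-configuration": "https://example.org/.well-known/openid-configuration",
}
body = json.dumps(sasl_resp).encode("utf-8")

# Per the RFC, the client then answers with the dummy b"\x01" response,
# after which the server ends the exchange (here, via a FATAL ErrorResponse).
```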
+
+class RawResponse(str):
+    """
+    Returned by registered endpoint callbacks to take full control of the
+    response. Usually, return values are converted to JSON; a RawResponse body
+    will be passed to the client as-is, allowing endpoint implementations to
+    issue invalid JSON.
+    """
+
+    pass
+
+
+class RawBytes(bytes):
+    """
+    Like RawResponse, but bypasses the UTF-8 encoding step as well, allowing
+    implementations to issue invalid encodings.
+    """
+
+    pass
+
+
+class OpenIDProvider(threading.Thread):
+    """
+    A thread that runs a mock OpenID provider server on an SSL-enabled socket.
+    """
+
+    def __init__(self, ssl_socket):
+        super().__init__()
+
+        self.exception = None
+
+        _, port = ssl_socket.getsockname()
+
+        oauth = self._OAuthState()
+        oauth.host = f"localhost:{port}"
+        oauth.issuer = f"https://localhost:{port}"
+
+        # The following endpoints are required to be advertised by providers,
+        # even though our chosen client implementation does not actually make
+        # use of them.
+        oauth.register_endpoint(
+            "authorization_endpoint", "POST", "/authorize", self._authorization_handler
+        )
+        oauth.register_endpoint("jwks_uri", "GET", "/keys", self._jwks_handler)
+
+        self.server = self._HTTPSServer(ssl_socket, self._Handler)
+        self.server.oauth = oauth
+
+    def run(self):
+        try:
+            # XXX socketserver.serve_forever() has a serious architectural
+            # issue: its select loop wakes up every `poll_interval` seconds to
+            # see if the server is shutting down. The default, 500 ms, only lets
+            # us run two tests every second. But the faster we go, the more CPU
+            # we burn unnecessarily...
+            self.server.serve_forever(poll_interval=0.01)
+        except Exception as e:
+            self.exception = e
+
+    def stop(self, timeout=BLOCKING_TIMEOUT):
+        """
+        Shuts down the server and joins its thread. Raises an exception if the
+        thread could not be joined, or if it threw an exception itself. Must
+        only be called once, after start().
+        """
+        self.server.shutdown()
+        self.join(timeout)
+
+        if self.is_alive():
+            raise TimeoutError("server thread did not shut down within the timeout")
+        elif self.exception:
+            e = self.exception
+            raise e
+
+    class _OAuthState(object):
+        def __init__(self):
+            self.endpoint_paths = {}
+            self._endpoints = {}
+
+            # Provide a standard discovery document by default; tests can
+            # override it.
+            self.register_endpoint(
+                None,
+                "GET",
+                "/.well-known/openid-configuration",
+                self._default_discovery_handler,
+            )
+
+            # Default content type unless overridden.
+            self.content_type = "application/json"
+
+        @property
+        def discovery_uri(self):
+            return f"{self.issuer}/.well-known/openid-configuration"
+
+        def register_endpoint(self, name, method, path, func):
+            if method not in self._endpoints:
+                self._endpoints[method] = {}
+
+            self._endpoints[method][path] = func
+
+            if name is not None:
+                self.endpoint_paths[name] = path
+
+        def endpoint(self, method, path):
+            if method not in self._endpoints:
+                return None
+
+            return self._endpoints[method].get(path)
+
+        def _default_discovery_handler(self, headers, params):
+            doc = {
+                "issuer": self.issuer,
+                "response_types_supported": ["token"],
+                "subject_types_supported": ["public"],
+                "id_token_signing_alg_values_supported": ["RS256"],
+                "grant_types_supported": [
+                    "authorization_code",
+                    "urn:ietf:params:oauth:grant-type:device_code",
+                ],
+            }
+
+            for name, path in self.endpoint_paths.items():
+                doc[name] = self.issuer + path
+
+            return 200, doc
+
+    class _HTTPSServer(http.server.HTTPServer):
+        def __init__(self, ssl_socket, handler_cls):
+            # Attach the SSL socket to the server. We don't bind/activate since
+            # the socket is already listening.
+            super().__init__(None, handler_cls, bind_and_activate=False)
+            self.socket = ssl_socket
+            self.server_address = self.socket.getsockname()
+
+        def shutdown_request(self, request):
+            # Cleanly unwrap the SSL socket before shutting down the connection;
+            # otherwise careful clients will complain about truncation.
+            try:
+                request = request.unwrap()
+            except (ssl.SSLEOFError, ConnectionResetError, BrokenPipeError):
+                # The client already closed (or aborted) the connection without
+                # a clean shutdown. This is seen on some platforms during tests
+                # that break the HTTP protocol. Just return and have the server
+                # close the socket.
+                return
+            except ssl.SSLError as err:
+                # FIXME OpenSSL 3.4 introduced an incompatibility with Python's
+                # TLS error handling, resulting in a bogus "[SYS] unknown error"
+                # on some platforms. Hopefully this is fixed in 2025's set of
+                # maintenance releases and this case can be removed.
+                #
+                #     https://github.com/python/cpython/issues/127257
+                #
+                if "[SYS] unknown error" in str(err):
+                    return
+                raise
+
+            super().shutdown_request(request)
+
+        def handle_error(self, request, addr):
+            self.shutdown_request(request)
+            raise
+
+    @staticmethod
+    def _jwks_handler(headers, params):
+        return 200, {"keys": []}
+
+    @staticmethod
+    def _authorization_handler(headers, params):
+        # We don't actually want this to be called during these tests -- we
+        # should be using the device authorization endpoint instead.
+        assert (
+            False
+        ), "authorization handler called instead of device authorization handler"
+
+    class _Handler(http.server.BaseHTTPRequestHandler):
+        timeout = BLOCKING_TIMEOUT
+
+        def _handle(self, *, params=None, handler=None):
+            oauth = self.server.oauth
+            assert self.headers["Host"] == oauth.host
+
+            # XXX: BaseHTTPRequestHandler collapses leading slashes in the path
+            # to work around an open redirection vuln (gh-87389) in
+            # SimpleHTTPServer. But we're not using SimpleHTTPServer, and we
+            # want to test repeating leading slashes, so that's not very
+            # helpful. Put them back.
+            orig_path = self.raw_requestline.split()[1]
+            orig_path = str(orig_path, "iso-8859-1")
+            assert orig_path.endswith(self.path)  # sanity check
+            self.path = orig_path
+
+            if handler is None:
+                handler = oauth.endpoint(self.command, self.path)
+                assert (
+                    handler is not None
+                ), f"no registered endpoint for {self.command} {self.path}"
+
+            result = handler(self.headers, params)
+
+            if len(result) == 2:
+                headers = {"Content-Type": oauth.content_type}
+                code, resp = result
+            else:
+                code, headers, resp = result
+
+            self.send_response(code)
+            for h, v in headers.items():
+                self.send_header(h, v)
+            self.end_headers()
+
+            if resp is not None:
+                if not isinstance(resp, RawBytes):
+                    if not isinstance(resp, RawResponse):
+                        resp = json.dumps(resp)
+                    resp = resp.encode("utf-8")
+                self.wfile.write(resp)
+
+            self.close_connection = True
+
+        def do_GET(self):
+            self._handle()
+
+        def _request_body(self):
+            length = self.headers["Content-Length"]
+
+            # Handle only an explicit content-length.
+            assert length is not None
+            length = int(length)
+
+            return self.rfile.read(length).decode("utf-8")
+
+        def do_POST(self):
+            assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+
+            body = self._request_body()
+            if body:
+                # parse_qs() is understandably fairly lax when it comes to
+                # acceptable characters, but we're stricter. Spaces must be
+                # encoded, and they must use the '+' encoding rather than "%20".
+                assert " " not in body
+                assert "%20" not in body
+
+                params = urllib.parse.parse_qs(
+                    body,
+                    keep_blank_values=True,
+                    strict_parsing=True,
+                    encoding="utf-8",
+                    errors="strict",
+                )
+            else:
+                params = {}
+
+            self._handle(params=params)
+
+
+@pytest.fixture(autouse=True)
+def enable_client_oauth_debugging(monkeypatch):
+    """
+    HTTP providers aren't allowed by default; enable them via envvar.
+    """
+    monkeypatch.setenv("PGOAUTHDEBUG", "UNSAFE")
+
+
+@pytest.fixture(autouse=True)
+def trust_certpair_in_client(monkeypatch, certpair):
+    """
+    Set a trusted CA file for OAuth client connections.
+    """
+    monkeypatch.setenv("PGOAUTHCAFILE", certpair[0])
+
+
+@pytest.fixture(scope="session")
+def ssl_socket(certpair):
+    """
+    A listening server-side socket for SSL connections, using the certpair
+    fixture.
+    """
+    sock = socket.create_server(("", 0))
+
+    # The TLS connections we're making are incredibly sensitive to delayed ACKs
+    # from the client. (Without TCP_NODELAY, test performance degrades 4-5x.)
+    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+    with contextlib.closing(sock):
+        # Wrap the server socket for TLS.
+        ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
+        ctx.load_cert_chain(*certpair)
+
+        yield ctx.wrap_socket(sock, server_side=True)
+
+
+@pytest.fixture
+def openid_provider(ssl_socket):
+    """
+    A fixture that returns the OAuth state of a running OpenID provider
+    server. The server will be stopped when the fixture is torn down.
+    """
+    thread = OpenIDProvider(ssl_socket)
+    thread.start()
+
+    try:
+        yield thread.server.oauth
+    finally:
+        thread.stop()
+
+
+#
+# PQAuthDataHook implementation, matching libpq.h
+#
+
+
+PQAUTHDATA_PROMPT_OAUTH_DEVICE = 0
+PQAUTHDATA_OAUTH_BEARER_TOKEN = 1
+
+PGRES_POLLING_FAILED = 0
+PGRES_POLLING_READING = 1
+PGRES_POLLING_WRITING = 2
+PGRES_POLLING_OK = 3
+
+
+class PGPromptOAuthDevice(ctypes.Structure):
+    _fields_ = [
+        ("verification_uri", ctypes.c_char_p),
+        ("user_code", ctypes.c_char_p),
+        ("verification_uri_complete", ctypes.c_char_p),
+        ("expires_in", ctypes.c_int),
+    ]
+
+
+class PGOAuthBearerRequest(ctypes.Structure):
+    pass
+
+
+PGOAuthBearerRequest._fields_ = [
+    ("openid_configuration", ctypes.c_char_p),
+    ("scope", ctypes.c_char_p),
+    (
+        "async_",
+        ctypes.CFUNCTYPE(
+            ctypes.c_int,
+            ctypes.c_void_p,
+            ctypes.POINTER(PGOAuthBearerRequest),
+            ctypes.POINTER(ctypes.c_int),
+        ),
+    ),
+    (
+        "cleanup",
+        ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest)),
+    ),
+    ("token", ctypes.c_char_p),
+    ("user", ctypes.c_void_p),
+]
+
+
+@pytest.fixture
+def auth_data_cb():
+    """
+    Tracks calls to the libpq authdata hook. The yielded object contains a calls
+    member that records the data sent to the hook. If a test needs to perform
+    custom actions during a call, it can set the yielded object's impl callback;
+    beware that the callback takes place on a different thread.
+
+    This is done differently from the other callback implementations on purpose.
+    For the others, we can declare test-specific callbacks and have them perform
+    direct assertions on the data they receive. But that won't work for a C
+    callback, because there's no way for us to bubble up the assertion through
+    libpq. Instead, this mock-style approach is taken, where we just record the
+    calls and let the test examine them later.
+    """
+
+    class _Call:
+        pass
+
+    class _cb(object):
+        def __init__(self):
+            self.calls = []
+
+    cb = _cb()
+    cb.impl = None
+
+    # The callback will occur on a different thread, so protect the cb object.
+    cb_lock = threading.Lock()
+
+    @ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_byte, ctypes.c_void_p, ctypes.c_void_p)
+    def auth_data_cb(typ, pgconn, data):
+        handle_by_default = 0  # the return value used when no test impl is set
+
+        if typ == PQAUTHDATA_PROMPT_OAUTH_DEVICE:
+            cls = PGPromptOAuthDevice
+            handle_by_default = 1
+        elif typ == PQAUTHDATA_OAUTH_BEARER_TOKEN:
+            cls = PGOAuthBearerRequest
+        else:
+            return 0
+
+        call = _Call()
+        call.type = typ
+
+        # The lifetime of the underlying data being pointed to doesn't
+        # necessarily match the lifetime of the Python object, so we can't
+        # reference a Structure's fields after returning. Explicitly copy the
+        # contents over, field by field.
+        data = ctypes.cast(data, ctypes.POINTER(cls))
+        for name, _ in cls._fields_:
+            setattr(call, name, getattr(data.contents, name))
+
+        with cb_lock:
+            cb.calls.append(call)
+
+        if cb.impl:
+            # Pass control back to the test.
+            try:
+                return cb.impl(typ, pgconn, data.contents)
+            except Exception:
+                # This can't escape into the C stack, but we can fail the flow
+                # and hope the traceback gives us enough detail.
+                logging.error(
+                    "Exception during authdata hook callback:\n"
+                    + traceback.format_exc()
+                )
+                return -1
+
+        return handle_by_default
+
+    libpq.PQsetAuthDataHook(auth_data_cb)
+    try:
+        yield cb
+    finally:
+        # The callback is about to go out of scope, so make sure libpq is
+        # disconnected from it. (We wouldn't want to accidentally influence
+        # later tests anyway.)
+        libpq.PQsetAuthDataHook(None)
+
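The record-now, assert-later approach described in the fixture's docstring can be reduced to this standalone sketch (the names here are illustrative, not part of the patch):

```python
# The hook only appends to a list: an assertion raised inside a C callback
# cannot propagate through libpq, so all checking happens afterward on the
# test's own thread.
calls = []

def hook(typ, data):
    calls.append((typ, data))  # no assertions on this thread
    return 0  # let the default behavior run

# Simulate two invocations, then examine the recorded calls.
hook(1, "device prompt")
hook(2, "bearer request")
assert [c[0] for c in calls] == [1, 2]
```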
+
+@pytest.mark.parametrize(
+    "success, abnormal_failure",
+    [
+        pytest.param(True, False, id="success"),
+        pytest.param(False, False, id="normal failure"),
+        pytest.param(False, True, id="abnormal failure"),
+    ],
+)
+@pytest.mark.parametrize("secret", [None, "", "hunter2"])
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize("retries", [0, 1])
+@pytest.mark.parametrize(
+    "content_type",
+    [
+        pytest.param("application/json", id="standard"),
+        pytest.param("application/json;charset=utf-8", id="charset"),
+        pytest.param("application/json \t;\t charset=utf-8", id="charset (whitespace)"),
+    ],
+)
+@pytest.mark.parametrize("uri_spelling", ["verification_url", "verification_uri"])
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_oauth_with_explicit_discovery_uri(
+    accept,
+    openid_provider,
+    asynchronous,
+    uri_spelling,
+    content_type,
+    retries,
+    scope,
+    secret,
+    auth_data_cb,
+    success,
+    abnormal_failure,
+):
+    client_id = secrets.token_hex()
+    openid_provider.content_type = content_type
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        expected = f"{client_id}:{secret}"
+        assert base64.b64decode(creds) == expected.encode("ascii")
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            uri_spelling: verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    retry_lock = threading.Lock()
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        with retry_lock:
+            nonlocal attempts
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+                return 400, {"error": "authorization_pending"}
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Client should reconnect.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            elif abnormal_failure:
+                # Send an empty error response, which should result in a
+                # mechanism-level failure in the client. This test ensures that
+                # the client doesn't try a third connection for this case.
+                expected_error = "server sent error response without a status"
+                fail_oauth_handshake(conn, {})
+
+            else:
+                # Simulate token validation failure.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": openid_provider.discovery_uri,
+                }
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, resp, errmsg=expected_error)
+
+    if retries:
+        # Finally, make sure that the client prompted the user once with the
+        # expected authorization URL and user code.
+        assert len(auth_data_cb.calls) == 2
+
+        # First call should have been for a custom flow, which we ignored.
+        assert auth_data_cb.calls[0].type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+
+        # Second call is for our user prompt.
+        call = auth_data_cb.calls[1]
+        assert call.type == PQAUTHDATA_PROMPT_OAUTH_DEVICE
+        assert call.verification_uri.decode() == verification_url
+        assert call.user_code.decode() == user_code
+        assert call.verification_uri_complete is None
+        assert call.expires_in == 5
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server",
+            id="oauth",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/oauth-authorization-server/alt",
+            id="oauth with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/oauth-authorization-server",
+            id="oauth with path, broken OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/alt/.well-known/openid-configuration",
+            id="openid with path, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/alt",
+            "/.well-known/openid-configuration/alt",
+            id="openid with path, IETF style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "//.well-known/openid-configuration",
+            id="empty path segment, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/",
+            "/.well-known/openid-configuration/",
+            id="empty path segment, IETF style",
+        ),
+    ],
+)
+def test_alternate_well_known_paths(
+    accept, openid_provider, issuer, path, server_discovery
+):
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = openid_provider.issuer + path
+
+    client_id = secrets.token_hex()
+    access_token = secrets.token_urlsafe()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "12345",
+            "user_code": "ABCDE",
+            "interval": 0,
+            "verification_url": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+
+    with sock:
+        handle_discovery_connection(sock, discovery_uri)
+
+    # Expect the client to connect again.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+
+@pytest.mark.parametrize(
+    "server_discovery",
+    [
+        pytest.param(True, id="server discovery"),
+        pytest.param(False, id="direct discovery"),
+    ],
+)
+@pytest.mark.parametrize(
+    "issuer, path, expected_error",
+    [
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-authorization-server/",
+            None,
+            id="extra empty segment (no path)",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server/path/",
+            None,
+            id="extra empty segment (with path)",
+        ),
+        pytest.param(
+            "{issuer}",
+            "?/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="query",
+        ),
+        pytest.param(
+            "{issuer}",
+            "#/.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must not contain query or fragment components',
+            id="fragment",
+        ),
+        pytest.param(
+            "{issuer}/sub/path",
+            "/sub/.well-known/oauth-authorization-server/path",
+            r'OAuth discovery URI ".*" uses an invalid format',
+            id="sandwiched prefix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/openid-configuration",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id="not .well-known",
+        ),
+        pytest.param(
+            "{issuer}",
+            "https://.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" is not a .well-known URI',
+            id=".well-known prefix buried in the authority",
+        ),
+        pytest.param(
+            "{issuer}",
+            "/.well-known/oauth-protected-resource",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/path/.well-known/openid-configuration-2",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, OIDC style",
+        ),
+        pytest.param(
+            "{issuer}/path",
+            "/.well-known/oauth-authorization-server-2/path",
+            r'OAuth discovery URI ".*" uses an unsupported .well-known suffix',
+            id="unknown well-known suffix, IETF style",
+        ),
+        pytest.param(
+            "{issuer}",
+            "file:///.well-known/oauth-authorization-server",
+            r'OAuth discovery URI ".*" must use HTTPS',
+            id="unsupported scheme",
+        ),
+    ],
+)
+def test_bad_well_known_paths(
+    accept, openid_provider, issuer, path, expected_error, server_discovery
+):
+    if not server_discovery and "/.well-known/" not in path:
+        # An oauth_issuer without a /.well-known/ path segment is just a normal
+        # issuer identifier, so this isn't an interesting test.
+        pytest.skip("not interesting: direct discovery requires .well-known")
+
+    issuer = issuer.format(issuer=openid_provider.issuer)
+    discovery_uri = urllib.parse.urljoin(openid_provider.issuer, path)
+
+    client_id = secrets.token_hex()
+
+    def discovery_handler(*args):
+        """
+        Pass-through implementation of the discovery handler. Modifies the
+        default document to contain this test's issuer identifier.
+        """
+        code, doc = openid_provider._default_discovery_handler(*args)
+        doc["issuer"] = issuer
+        return code, doc
+
+    openid_provider.register_endpoint(None, "GET", path, discovery_handler)
+
+    def fail(*args):
+        """
+        No other endpoints should be contacted; fail if the client tries.
+        """
+        assert False, "endpoint unexpectedly called"
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", fail
+    )
+    openid_provider.register_endpoint("token_endpoint", "POST", "/token", fail)
+
+    kwargs = dict(oauth_client_id=client_id)
+    if server_discovery:
+        kwargs.update(oauth_issuer=issuer)
+    else:
+        kwargs.update(oauth_issuer=discovery_uri)
+
+    sock, client = accept(**kwargs)
+    with sock:
+        if expected_error and not server_discovery:
+            # If the client already knows the URL, it should disconnect as soon
+            # as it realizes it's not valid.
+            expect_disconnected_handshake(sock)
+        else:
+            # Otherwise, it should complete the connection.
+            handle_discovery_connection(sock, discovery_uri)
+
+    # The client should not reconnect.
+
+    if expected_error is None:
+        if server_discovery:
+            expected_error = rf"server's discovery document at {discovery_uri} \(issuer \".*\"\) is incompatible with oauth_issuer \({issuer}\)"
+        else:
+            expected_error = rf"the issuer identifier \({issuer}\) does not match oauth_issuer \(.*\)"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def expect_disconnected_handshake(sock):
+    """
+    Helper for any tests that expect the client to disconnect immediately after
+    being sent the OAUTHBEARER SASL method. Generally speaking, this requires
+    the client to have an oauth_issuer set so that it doesn't try to go through
+    discovery.
+    """
+    with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+        # Initiate a handshake.
+        startup = pq3.recv1(conn, cls=pq3.Startup)
+        assert startup.proto == pq3.protocol(3, 0)
+
+        pq3.send(
+            conn,
+            pq3.types.AuthnRequest,
+            type=pq3.authn.SASL,
+            body=[b"OAUTHBEARER", b""],
+        )
+
+        # The client should disconnect at this point.
+        assert not conn.read(1), "client sent unexpected data"
+
+
+@pytest.mark.parametrize(
+    "missing",
+    [
+        pytest.param(["oauth_issuer"], id="missing oauth_issuer"),
+        pytest.param(["oauth_client_id"], id="missing oauth_client_id"),
+        pytest.param(["oauth_client_id", "oauth_issuer"], id="missing both"),
+    ],
+)
+def test_oauth_requires_issuer_and_client_id(accept, openid_provider, missing):
+    params = dict(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id="some-id",
+    )
+
+    # Remove required parameters. This should cause a client error after the
+    # server asks for OAUTHBEARER and the client tries to contact the issuer.
+    for k in missing:
+        del params[k]
+
+    sock, client = accept(**params)
+    with sock:
+        expect_disconnected_handshake(sock)
+
+    expected_error = "oauth_issuer and oauth_client_id are not both set"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# See https://datatracker.ietf.org/doc/html/rfc6749#appendix-A for the
+# character class definitions: VSCHAR is %x20-7E, and NQCHAR is
+# %x21 / %x23-5B / %x5D-7E (no space, double quote, or backslash).
+all_vschars = "".join([chr(c) for c in range(0x20, 0x7F)])
+all_nqchars = "".join([chr(c) for c in range(0x21, 0x7F) if c not in (0x22, 0x5C)])
+
+
+@pytest.mark.parametrize("client_id", ["", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("secret", [None, "", ":", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("device_code", ["", " + ", r'+=&"\/~', all_vschars])
+@pytest.mark.parametrize("scope", ["&", r"+=&/", all_nqchars])
+def test_url_encoding(accept, openid_provider, client_id, secret, device_code, scope):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+        oauth_client_secret=secret,
+        oauth_scope=scope,
+    )
+
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    def check_client_authn(headers, params):
+        if secret is None:
+            assert "Authorization" not in headers
+            assert params["client_id"] == [client_id]
+            return
+
+        # Require the client to use Basic authn; request-body credentials are
+        # NOT RECOMMENDED (RFC 6749, Sec. 2.3.1).
+        assert "Authorization" in headers
+        assert "client_id" not in params
+
+        method, creds = headers["Authorization"].split()
+        assert method == "Basic"
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, password = decoded.split(":", 1)
+
+        expected_username = urllib.parse.quote_plus(client_id)
+        expected_password = urllib.parse.quote_plus(secret)
+
+        assert [username, password] == [expected_username, expected_password]
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_url": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        check_client_authn(headers, params)
+
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Second connection sends the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
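The encoding that check_client_authn expects above can be sketched from the client's side as follows (a minimal illustration; the credentials are placeholders):

```python
import base64
import urllib.parse

def basic_auth_header(client_id, secret):
    # RFC 6749, Sec. 2.3.1: form-urlencode the client id and secret, then
    # send them via HTTP Basic authentication; request-body credentials are
    # NOT RECOMMENDED.
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(secret)
    creds = f"{user}:{password}".encode("ascii")
    return "Basic " + base64.b64encode(creds).decode("ascii")

header = basic_auth_header("my client", "hunter:2")
```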
+
+@pytest.mark.slow
+@pytest.mark.parametrize("error_code", ["authorization_pending", "slow_down"])
+@pytest.mark.parametrize("retries", [1, 2])
+@pytest.mark.parametrize("omit_interval", [True, False])
+def test_oauth_retry_interval(
+    accept, openid_provider, omit_interval, retries, error_code
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    expected_retry_interval = 5 if omit_interval else 1  # RFC 8628's default is 5s
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        if not omit_interval:
+            resp["interval"] = expected_retry_interval
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    attempts = 0
+    last_retry = None
+    retry_lock = threading.Lock()
+    token_sent = threading.Event()
+
+    def token_endpoint(headers, params):
+        now = time.monotonic()
+
+        with retry_lock:
+            nonlocal attempts, last_retry, expected_retry_interval
+
+            # Make sure the retry interval is being respected by the client.
+            if last_retry is not None:
+                interval = now - last_retry
+                assert interval >= expected_retry_interval
+
+            last_retry = now
+
+            # If the test wants to force the client to retry, return the desired
+            # error response and decrement the retry count.
+            if attempts < retries:
+                attempts += 1
+
+                # A slow_down code requires the client to additionally increase
+                # its interval by five seconds.
+                if error_code == "slow_down":
+                    expected_retry_interval += 5
+
+                return 400, {"error": error_code}
+
+        # Successfully finish the request by sending the access bearer token,
+        # and signal the main thread to continue.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+        token_sent.set()
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # First connection is a discovery request, which should result in the above
+    # endpoints being called.
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # At this point the client is talking to the authorization server. Wait for
+    # that to succeed so we don't run into the accept() timeout.
+    token_sent.wait()
+
+    # Client should reconnect and send the token.
+    sock, _ = accept()
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
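The client-side behavior this endpoint verifies can be sketched as follows, per RFC 8628 (Secs. 3.2 and 3.5):

```python
def next_interval(interval, error):
    # "slow_down": the client must add 5 seconds to its polling interval;
    # "authorization_pending": keep polling at the current interval.
    if error == "slow_down":
        return interval + 5
    return interval

interval = 5  # default when the device authorization response omits "interval"
interval = next_interval(interval, "slow_down")
interval = next_interval(interval, "authorization_pending")
assert interval == 10
```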
+
+@pytest.fixture
+def self_pipe():
+    """
+    Yields a pipe fd pair.
+    """
+
+    class _Pipe:
+        pass
+
+    p = _Pipe()
+    p.readfd, p.writefd = os.pipe()
+
+    try:
+        yield p
+    finally:
+        os.close(p.readfd)
+        os.close(p.writefd)
+
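In isolation, the self-pipe wakeup pattern this fixture enables for test_user_defined_flow looks like this on POSIX (timings here are arbitrary):

```python
import os
import select
import threading

readfd, writefd = os.pipe()

# A background timer writes one byte; the "main loop" blocks in select()
# on the read end until that byte arrives.
threading.Timer(0.05, os.write, (writefd, b"\0")).start()

ready, _, _ = select.select([readfd], [], [], 5)
woke = readfd in ready

os.read(readfd, 1)  # drain the wakeup byte before waiting again
os.close(readfd)
os.close(writefd)
```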
+
+@pytest.mark.parametrize("scope", [None, "", "openid email"])
+@pytest.mark.parametrize(
+    "retries",
+    [
+        -1,  # no async callback
+        0,  # async callback immediately returns token
+        1,  # async callback waits on altsock once
+        2,  # async callback waits on altsock twice
+    ],
+)
+@pytest.mark.parametrize(
+    "asynchronous",
+    [
+        pytest.param(False, id="synchronous"),
+        pytest.param(True, id="asynchronous"),
+    ],
+)
+def test_user_defined_flow(
+    accept, auth_data_cb, self_pipe, scope, retries, asynchronous
+):
+    issuer = "http://localhost"
+    discovery_uri = issuer + "/.well-known/openid-configuration"
+    access_token = secrets.token_urlsafe()
+
+    sock, client = accept(
+        oauth_issuer=discovery_uri,
+        oauth_client_id="some-id",
+        oauth_scope=scope,
+        async_=asynchronous,
+    )
+
+    # Track callbacks.
+    attempts = 0
+    wakeup_called = False
+    cleanup_calls = 0
+    lock = threading.Lock()
+
+    def wakeup():
+        """Writes a byte to the wakeup pipe."""
+        nonlocal wakeup_called
+        with lock:
+            wakeup_called = True
+            os.write(self_pipe.writefd, b"\0")
+
+    def get_token(pgconn, request, p_altsock):
+        """
+        Async token callback. While attempts < retries, libpq will be instructed
+        to wait on the self_pipe. When attempts == retries, the token will be
+        set.
+
+        Note that assertions and exceptions raised here are allowed but not very
+        helpful, since they can't bubble through the libpq stack to be collected
+        by the test suite. Try not to rely too heavily on them.
+        """
+        # Make sure libpq passed our user data through.
+        assert request.user == 42
+
+        with lock:
+            nonlocal attempts, wakeup_called
+
+            if attempts:
+                # If we've already started the timer, we shouldn't get a
+                # call back before it trips.
+                assert wakeup_called, "authdata hook was called before the timer"
+
+                # Drain the wakeup byte.
+                os.read(self_pipe.readfd, 1)
+
+            if attempts < retries:
+                attempts += 1
+
+                # Wake up the client in a little bit of time.
+                wakeup_called = False
+                threading.Timer(0.1, wakeup).start()
+
+                # Tell libpq to wait on the other end of the wakeup pipe.
+                p_altsock[0] = self_pipe.readfd
+                return PGRES_POLLING_READING
+
+        # Done!
+        request.token = access_token.encode()
+        return PGRES_POLLING_OK
+
+    @ctypes.CFUNCTYPE(
+        ctypes.c_int,
+        ctypes.c_void_p,
+        ctypes.POINTER(PGOAuthBearerRequest),
+        ctypes.POINTER(ctypes.c_int),
+    )
+    def get_token_wrapper(pgconn, p_request, p_altsock):
+        """
+        Translation layer between C and Python for the async callback.
+        Assertions and exceptions will be swallowed at the boundary, so make
+        sure they don't escape here.
+        """
+        try:
+            return get_token(pgconn, p_request.contents, p_altsock)
+        except Exception:
+            logging.error("Exception during async callback:\n" + traceback.format_exc())
+            return PGRES_POLLING_FAILED
+
+    @ctypes.CFUNCTYPE(None, ctypes.c_void_p, ctypes.POINTER(PGOAuthBearerRequest))
+    def cleanup(pgconn, p_request):
+        """
+        Should be called exactly once per connection.
+        """
+        nonlocal cleanup_calls
+        with lock:
+            cleanup_calls += 1
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which either sets up an async
+        callback or returns the token directly, depending on the value of
+        retries.
+
+        As above, try not to rely too much on assertions/exceptions here.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.cleanup = cleanup
+
+        if retries < 0:
+            # Special case: return a token immediately without a callback.
+            request.token = access_token.encode()
+            return 1
+
+        # Tell libpq to call us back.
+        request.async_ = get_token_wrapper
+        request.user = ctypes.c_void_p(42)  # will be checked in the callback
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    # Now drive the server side.
+    if retries >= 0:
+        # First connection is a discovery request, which should result in the
+        # hook being invoked.
+        with sock:
+            handle_discovery_connection(sock, discovery_uri)
+
+        # Client should reconnect to send the token.
+        sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            # Initiate a handshake, which should result in our custom callback
+            # being invoked to fetch the token.
+            initial = start_oauth_handshake(conn)
+
+            # Validate and accept the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            finish_handshake(conn)
+
+    # Check the data provided to the hook.
+    assert len(auth_data_cb.calls) == 1
+
+    call = auth_data_cb.calls[0]
+    assert call.type == PQAUTHDATA_OAUTH_BEARER_TOKEN
+    assert call.openid_configuration.decode() == discovery_uri
+    assert call.scope == (None if scope is None else scope.encode())
+
+    # Make sure we clean up after ourselves when the connection is finished.
+    client.check_completed()
+    assert cleanup_calls == 1
+
+
+def alt_patterns(*patterns):
+    """
+    Just combines multiple alternative regexes into one. It's not very efficient
+    but IMO it's easier to read and maintain.
+    """
+    pat = ""
+
+    for p in patterns:
+        if pat:
+            pat += "|"
+        pat += f"({p})"
+
+    return pat
+
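For instance, the helper produces a single alternation (re-implemented here equivalently so the sketch is self-contained):

```python
import re

def alt_patterns(*patterns):
    # Equivalent to the helper above: ("a", "b") -> "(a)|(b)"
    return "|".join(f"({p})" for p in patterns)

pat = alt_patterns("connection refused", "timeout expired")
assert pat == "(connection refused)|(timeout expired)"
assert re.search(pat, "error: timeout expired")
```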
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                401,
+                {
+                    "error": "invalid_client",
+                    "error_description": "client authentication failed",
+                },
+            ),
+            r"failed to obtain device authorization: client authentication failed \(invalid_client\)",
+            id="authentication failure with description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request"}),
+            r"failed to obtain device authorization: \(invalid_request\)",
+            id="invalid request without description",
+        ),
+        pytest.param(
+            (400, {"error": "invalid_request", "padding": "x" * 256 * 1024}),
+            r"failed to obtain device authorization: response is too large",
+            id="gigantic authz response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="broken error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain device authorization: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="failed authentication without description",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 3.5.8 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="non-numeric interval",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "interval": 08 }')),
+            r"failed to parse device authorization: Token .* is invalid",
+            id="invalid numeric interval",
+        ),
+    ],
+)
+def test_oauth_device_authorization_failures(
+    accept, openid_provider, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+Missing = object()  # sentinel for test_oauth_device_authorization_bad_json()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("device_code", str, True),
+        ("user_code", str, True),
+        ("verification_uri", str, True),
+        ("interval", int, False),
+    ],
+)
+def test_oauth_device_authorization_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert False, "token endpoint was invoked unexpectedly"
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    if bad_value is Missing:
+        error_pattern = f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern = f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern = f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "failure_mode, error_pattern",
+    [
+        pytest.param(
+            (
+                400,
+                {
+                    "error": "expired_token",
+                    "error_description": "the device code has expired",
+                },
+            ),
+            r"failed to obtain access token: the device code has expired \(expired_token\)",
+            id="expired token with description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied"}),
+            r"failed to obtain access token: \(access_denied\)",
+            id="access denied without description",
+        ),
+        pytest.param(
+            (400, {"error": "access_denied", "padding": "x" * 256 * 1024}),
+            r"failed to obtain access token: response is too large",
+            id="gigantic token response",
+        ),
+        pytest.param(
+            (400, {}),
+            r'failed to parse token error response: field "error" is missing',
+            id="empty error response",
+        ),
+        pytest.param(
+            (401, {"error": "invalid_client"}),
+            r"failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)",
+            id="authentication failure without description",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse access token response: no content type was provided",
+            id="missing content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type",
+        ),
+        pytest.param(
+            (200, {"Content-Type": "application/jsonx"}, {}),
+            r"failed to parse access token response: unexpected content type",
+            id="wrong content type (correct prefix)",
+        ),
+    ],
+)
+@pytest.mark.parametrize("retries", [0, 1])
+def test_oauth_token_failures(
+    accept, openid_provider, retries, failure_mode, error_pattern
+):
+    client_id = secrets.token_hex()
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=client_id,
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        assert params["client_id"] == [client_id]
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": "https://example.com/device",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    retry_lock = threading.Lock()
+    final_sent = False
+
+    def token_endpoint(headers, params):
+        with retry_lock:
+            nonlocal retries, final_sent
+
+            # If the test wants to force the client to retry, return an
+            # authorization_pending response and decrement the retry count.
+            if retries > 0:
+                retries -= 1
+                return 400, {"error": "authorization_pending"}
+
+            # We should only return our failure_mode response once; any further
+            # requests indicate that the client isn't correctly bailing out.
+            assert not final_sent, "client continued after token error"
+
+            final_sent = True
+
+        return failure_mode
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "bad_value",
+    [
+        pytest.param({"device_code": 3}, id="object"),
+        pytest.param([1, 2, 3], id="array"),
+        pytest.param("some string", id="string"),
+        pytest.param(4, id="numeric"),
+        pytest.param(False, id="boolean"),
+        pytest.param(None, id="null"),
+        pytest.param(Missing, id="missing"),
+    ],
+)
+@pytest.mark.parametrize(
+    "field_name,ok_type,required",
+    [
+        ("access_token", str, True),
+        ("token_type", str, True),
+    ],
+)
+def test_oauth_token_bad_json_schema(
+    accept, openid_provider, field_name, ok_type, required, bad_value
+):
+    # To make the test matrix easy, just skip the tests that aren't actually
+    # interesting (field of the correct type, missing optional field).
+    if bad_value is Missing and not required:
+        pytest.skip("not interesting: optional field")
+    elif type(bad_value) == ok_type:  # not isinstance(), because bool is an int
+        pytest.skip("not interesting: correct type")
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "interval": 0,
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        # Begin with an acceptable base response...
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        # ...then tweak it so the client fails.
+        if bad_value is Missing:
+            del resp[field_name]
+        else:
+            resp[field_name] = bad_value
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    # Now make sure the client correctly failed.
+    error_pattern = "failed to parse access token response: "
+    if bad_value is Missing:
+        error_pattern += f'field "{field_name}" is missing'
+    elif ok_type == str:
+        error_pattern += f'field "{field_name}" must be a string'
+    elif ok_type == int:
+        error_pattern += f'field "{field_name}" must be a number'
+    else:
+        assert False, "update error_pattern for new failure mode"
+
+    with pytest.raises(psycopg2.OperationalError, match=error_pattern):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("success", [True, False])
+@pytest.mark.parametrize("scope", [None, "openid email"])
+@pytest.mark.parametrize(
+    "base_response",
+    [
+        {"status": "invalid_token"},
+        {"extra_object": {"key": "value"}, "status": "invalid_token"},
+        {"extra_object": {"status": 1}, "status": "invalid_token"},
+    ],
+)
+def test_oauth_discovery(accept, openid_provider, base_response, scope, success):
+    sock, client = accept(
+        oauth_issuer=openid_provider.issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    device_code = secrets.token_hex()
+    user_code = f"{secrets.token_hex(2)}-{secrets.token_hex(2)}"
+    verification_url = "https://example.com/device"
+
+    access_token = secrets.token_urlsafe()
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        if scope:
+            assert params["scope"] == [scope]
+        else:
+            assert "scope" not in params
+
+        resp = {
+            "device_code": device_code,
+            "user_code": user_code,
+            "interval": 0,
+            "verification_uri": verification_url,
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        assert params["grant_type"] == ["urn:ietf:params:oauth:grant-type:device_code"]
+        assert params["device_code"] == [device_code]
+
+        # Successfully finish the request by sending the access bearer token.
+        resp = {
+            "access_token": access_token,
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Construct the response to use when failing the SASL exchange. Return a
+    # link to the discovery document, pointing to the test provider server.
+    fail_resp = {
+        **base_response,
+        "openid-configuration": openid_provider.discovery_uri,
+    }
+
+    if scope:
+        fail_resp["scope"] = scope
+
+    with sock:
+        handle_discovery_connection(sock, response=fail_resp)
+
+    # The client will connect to us a second time, using the parameters we sent
+    # it.
+    sock, _ = accept()
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            # Validate the token.
+            auth = get_auth_value(initial)
+            assert auth == f"Bearer {access_token}".encode("ascii")
+
+            if success:
+                finish_handshake(conn)
+
+            else:
+                # Simulate token validation failure.
+                expected_error = "test token validation failure"
+                fail_oauth_handshake(conn, fail_resp, errmsg=expected_error)
+
+    if not success:
+        # The client should not try to connect again.
+        with pytest.raises(psycopg2.OperationalError, match=expected_error):
+            client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "response,expected_error",
+    [
+        pytest.param(
+            "abcde",
+            'Token "abcde" is invalid',
+            id="bad JSON: invalid syntax",
+        ),
+        pytest.param(
+            b"\xFF\xFF\xFF\xFF",
+            "server's error response is not valid UTF-8",
+            id="bad JSON: invalid encoding",
+        ),
+        pytest.param(
+            '"abcde"',
+            "top-level element must be an object",
+            id="bad JSON: top-level element is a string",
+        ),
+        pytest.param(
+            "[]",
+            "top-level element must be an object",
+            id="bad JSON: top-level element is an array",
+        ),
+        pytest.param(
+            "{}",
+            "server sent error response without a status",
+            id="bad JSON: no status member",
+        ),
+        pytest.param(
+            '{ "status": null }',
+            'field "status" must be a string',
+            id="bad JSON: null status member",
+        ),
+        pytest.param(
+            '{ "status": 0 }',
+            'field "status" must be a string',
+            id="bad JSON: int status member",
+        ),
+        pytest.param(
+            '{ "status": [ "bad" ] }',
+            'field "status" must be a string',
+            id="bad JSON: array status member",
+        ),
+        pytest.param(
+            '{ "status": { "bad": "bad" } }',
+            'field "status" must be a string',
+            id="bad JSON: object status member",
+        ),
+        pytest.param(
+            '{ "nested": { "status": "bad" } }',
+            "server sent error response without a status",
+            id="bad JSON: nested status",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" ',
+            "The input string ended unexpectedly",
+            id="bad JSON: unterminated object",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token" } { }',
+            'Expected end of input, but found "{"',
+            id="bad JSON: trailing data",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": 1 }',
+            'field "openid-configuration" must be a string',
+            id="bad JSON: int openid-configuration member",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "openid-configuration": "", "openid-configuration": "" }',
+            'field "openid-configuration" is duplicated',
+            id="bad JSON: duplicated field",
+        ),
+        pytest.param(
+            '{ "status": "invalid_token", "scope": 1 }',
+            'field "scope" must be a string',
+            id="bad JSON: int scope member",
+        ),
+    ],
+)
+def test_oauth_discovery_server_error(accept, response, expected_error):
+    sock, client = accept(
+        oauth_issuer="https://example.com",
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            if isinstance(response, str):
+                response = response.encode("utf-8")
+
+            # Fail the SASL exchange with an invalid JSON response.
+            pq3.send(
+                conn,
+                pq3.types.AuthnRequest,
+                type=pq3.authn.SASLContinue,
+                body=response,
+            )
+
+            # The client should disconnect, so the socket is closed here. (If
+            # the client doesn't disconnect, it will report a different error
+            # below and the test will fail.)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+# All of these tests are expected to fail before libpq actually attempts a
+# connection to any endpoint. To avoid hitting the network in the event that a
+# test fails, an invalid IPv4 address (256.256.256.256) is used as a hostname.
+@pytest.mark.parametrize(
+    "bad_response,expected_error",
+    [
+        pytest.param(
+            (200, {"Content-Type": "text/plain"}, {}),
+            r'failed to parse OpenID discovery document: unexpected content type: "text/plain"',
+            id="not JSON",
+        ),
+        pytest.param(
+            (200, {}, {}),
+            r"failed to parse OpenID discovery document: no content type was provided",
+            id="no Content-Type",
+        ),
+        pytest.param(
+            (204, {}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 204",
+            id="no content",
+        ),
+        pytest.param(
+            (301, {"Location": "https://localhost/"}, None),
+            r"failed to fetch OpenID discovery document: unexpected response code 301",
+            id="redirection",
+        ),
+        pytest.param(
+            (404, {}),
+            r"failed to fetch OpenID discovery document: unexpected response code 404",
+            id="not found",
+        ),
+        pytest.param(
+            (200, RawResponse("blah\x00blah")),
+            r"failed to parse OpenID discovery document: response contains embedded NULLs",
+            id="NULL bytes in document",
+        ),
+        pytest.param(
+            (200, RawBytes(b"blah\xFFblah")),
+            r"failed to parse OpenID discovery document: response is not valid UTF-8",
+            id="document is not UTF-8",
+        ),
+        pytest.param(
+            (200, 123),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="scalar at top level",
+        ),
+        pytest.param(
+            (200, []),
+            r"failed to parse OpenID discovery document: top-level element must be an object",
+            id="array at top level",
+        ),
+        pytest.param(
+            (200, RawResponse("{")),
+            r"failed to parse OpenID discovery document.* input string ended unexpectedly",
+            id="unclosed object",
+        ),
+        pytest.param(
+            (200, RawResponse(r'{ "hello": ] }')),
+            r"failed to parse OpenID discovery document.* Expected JSON value",
+            id="bad array",
+        ),
+        pytest.param(
+            (200, {"issuer": 123}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": ["something"]}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer array",
+        ),
+        pytest.param(
+            (200, {"issuer": {}}),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="issuer object",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": 123}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="numeric grant types field",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": "urn:ietf:params:oauth:grant-type:device_code"
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="string grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": {}}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types field",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": [123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", 123]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="non-string grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", {}]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="object grant types later in the list",
+        ),
+        pytest.param(
+            (200, {"grant_types_supported": ["something", ["something"]]}),
+            r'failed to parse OpenID discovery document: field "grant_types_supported" must be an array of strings',
+            id="embedded array grant types later in the list",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "grant_types_supported": ["something"],
+                    "token_endpoint": "https://256.256.256.256/",
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other valid fields",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "ignored": {"grant_types_supported": 123, "token_endpoint": 123},
+                    "issuer": 123,
+                },
+            ),
+            r'failed to parse OpenID discovery document: field "issuer" must be a string',
+            id="non-string issuer after other ignored fields",
+        ),
+        pytest.param(
+            (200, {"token_endpoint": "https://256.256.256.256/"}),
+            r'failed to parse OpenID discovery document: field "issuer" is missing',
+            id="missing issuer",
+        ),
+        pytest.param(
+            (200, {"issuer": "{issuer}"}),
+            r'failed to parse OpenID discovery document: field "token_endpoint" is missing',
+            id="missing token endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                },
+            ),
+            r'cannot run OAuth device authorization: issuer "https://.*" does not provide a device authorization endpoint',
+            id="missing device_authorization_endpoint",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                    "filler": "x" * 256 * 1024,
+                },
+            ),
+            r"failed to fetch OpenID discovery document: response is too large",
+            id="gigantic discovery response",
+        ),
+        pytest.param(
+            (
+                200,
+                {
+                    "issuer": "{issuer}/path",
+                    "token_endpoint": "https://256.256.256.256/token",
+                    "grant_types_supported": [
+                        "urn:ietf:params:oauth:grant-type:device_code"
+                    ],
+                    "device_authorization_endpoint": "https://256.256.256.256/dev",
+                },
+            ),
+            r"failed to parse OpenID discovery document: the issuer identifier \(https://.*/path\) does not match oauth_issuer \(https://.*\)",
+            id="mismatched issuer identifier",
+        ),
+        pytest.param(
+            (
+                200,
+                RawResponse(
+                    """{
+                        "issuer": "https://256.256.256.256/path",
+                        "token_endpoint": "https://256.256.256.256/token",
+                        "grant_types_supported": [
+                            "urn:ietf:params:oauth:grant-type:device_code"
+                        ],
+                        "device_authorization_endpoint": "https://256.256.256.256/dev",
+                        "device_authorization_endpoint": "https://256.256.256.256/dev"
+                    }"""
+                ),
+            ),
+            r'failed to parse OpenID discovery document: field "device_authorization_endpoint" is duplicated',
+            id="duplicated field",
+        ),
+        #
+        # Exercise HTTP-level failures by breaking the protocol. Note that the
+        # error messages here are implementation-dependent.
+        #
+        pytest.param(
+            (1000, {}),
+            r"failed to fetch OpenID discovery document: Unsupported protocol \(.*\)",
+            id="invalid HTTP response code",
+        ),
+        pytest.param(
+            (200, {"Content-Length": -1}, {}),
+            r"failed to fetch OpenID discovery document: Weird server reply \(.*Content-Length.*\)",
+            id="bad HTTP Content-Length",
+        ),
+    ],
+)
+def test_oauth_discovery_provider_failure(
+    accept, openid_provider, bad_response, expected_error
+):
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    def failing_discovery_handler(headers, params):
+        try:
+            # Insert the correct issuer value if the test wants to.
+            resp = bad_response[1]
+            iss = resp["issuer"]
+            resp["issuer"] = iss.format(issuer=openid_provider.issuer)
+        except (AttributeError, KeyError, TypeError):
+            pass
+
+        return bad_response
+
+    openid_provider.register_endpoint(
+        None,
+        "GET",
+        "/.well-known/openid-configuration",
+        failing_discovery_handler,
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize(
+    "sasl_err,resp_type,resp_payload,expected_error",
+    [
+        pytest.param(
+            {"status": "invalid_request"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "server rejected OAuth bearer token: invalid_request",
+            id="standard server error: invalid_request",
+        ),
+        pytest.param(
+            {"status": "invalid_token"},
+            pq3.types.ErrorResponse,
+            dict(
+                fields=[b"SFATAL", b"C28000", b"Mexpected error message", b""],
+            ),
+            "expected error message",
+            id="standard server error: invalid_token without discovery URI",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLContinue, body=b""),
+            "server sent additional OAuth data",
+            id="broken server: additional challenge after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASLFinal),
+            "server sent additional OAuth data",
+            id="broken server: SASL success after error",
+        ),
+        pytest.param(
+            {"status": "invalid_token", "openid-configuration": ""},
+            pq3.types.AuthnRequest,
+            dict(type=pq3.authn.SASL, body=[b"OAUTHBEARER", b""]),
+            "duplicate SASL authentication request",
+            id="broken server: SASL reinitialization after error",
+        ),
+    ],
+)
+def test_oauth_server_error(
+    accept, auth_data_cb, sasl_err, resp_type, resp_payload, expected_error
+):
+    wkuri = "https://256.256.256.256/.well-known/openid-configuration"
+    sock, client = accept(
+        oauth_issuer=wkuri,
+        oauth_client_id="some-id",
+    )
+
+    def bearer_hook(typ, pgconn, request):
+        """
+        Implementation of the PQAuthDataHook, which returns a token directly so
+        we don't need an openid_provider instance.
+        """
+        assert typ == PQAUTHDATA_OAUTH_BEARER_TOKEN
+        request.token = secrets.token_urlsafe().encode()
+        return 1
+
+    auth_data_cb.impl = bearer_hook
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            start_oauth_handshake(conn)
+
+            # Ignore the client data. Return an error "challenge".
+            if "openid-configuration" in sasl_err:
+                sasl_err["openid-configuration"] = wkuri
+
+            resp = json.dumps(sasl_err)
+            resp = resp.encode("utf-8")
+
+            pq3.send(
+                conn, pq3.types.AuthnRequest, type=pq3.authn.SASLContinue, body=resp
+            )
+
+            # Per RFC 7628, the client is required to send a dummy ^A response.
+            pkt = pq3.recv1(conn)
+            assert pkt.type == pq3.types.PasswordMessage
+            assert pkt.payload == b"\x01"
+
+            # Now fail the SASL exchange (in either a valid way, or an
+            # invalid one, depending on the test).
+            pq3.send(conn, resp_type, **resp_payload)
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_oauth_interval_overflow(accept, openid_provider):
+    """
+    A really badly behaved server could send a huge interval and then
+    immediately tell us to slow_down; ensure we handle this without breaking.
+    """
+    # Equivalent to INT_MAX in limits.h (assuming a 32-bit int).
+    int_max = ctypes.c_uint(-1).value // 2
+
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "my-device-code",
+            "user_code": "my-user-code",
+            "verification_uri": "https://example.com",
+            "expires_in": 5,
+            "interval": int_max,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        return 400, {"error": "slow_down"}
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    with sock:
+        handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+    expected_error = "slow_down interval overflow"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
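(Reviewer aside, not part of the patch: RFC 8628's device flow requires the client to add five seconds to its polling interval on every slow_down error, which is exactly the overflow the test below provokes. A minimal sketch of a guarded increment, assuming a 32-bit C int on the client side:)

```python
# Overflow-safe handling of RFC 8628's slow_down error (illustrative only).
# INT_MAX mirrors the C client's limits.h value, assuming a 32-bit int.
INT_MAX = 2**31 - 1


def next_poll_interval(current: int) -> int:
    """Add the mandatory 5 seconds, refusing to overflow a C int."""
    if current > INT_MAX - 5:
        raise OverflowError("slow_down interval overflow")
    return current + 5


assert next_poll_interval(5) == 10
```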
+def test_oauth_refuses_http(accept, openid_provider, monkeypatch):
+    """
+    HTTP must be refused without PGOAUTHDEBUG.
+    """
+    monkeypatch.delenv("PGOAUTHDEBUG", raising=False)
+
+    def to_http(uri):
+        """Swaps out a URI's scheme for http."""
+        parts = urllib.parse.urlparse(uri)
+        parts = parts._replace(scheme="http")
+        return urllib.parse.urlunparse(parts)
+
+    sock, client = accept(
+        oauth_issuer=to_http(openid_provider.issuer),
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    # No provider callbacks necessary; we should fail immediately.
+
+    with sock:
+        handle_discovery_connection(sock, to_http(openid_provider.discovery_uri))
+
+    expected_error = r'OAuth discovery URI ".*" must use HTTPS'
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("auth_type", [pq3.authn.OK, pq3.authn.SASLFinal])
+def test_discovery_incorrectly_permits_connection(accept, auth_type):
+    """
+    Incorrectly responds to a client's discovery request with AuthenticationOK
+    or AuthenticationSASLFinal. require_auth=oauth should catch the former, and
+    the mechanism itself should catch the latter.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+        require_auth="oauth",
+    )
+
+    with sock:
+        with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+            initial = start_oauth_handshake(conn)
+
+            auth = get_auth_value(initial)
+            assert auth == b""
+
+            # Incorrectly log the client in. It should immediately disconnect.
+            pq3.send(conn, pq3.types.AuthnRequest, type=auth_type)
+            assert not conn.read(1), "client sent unexpected data"
+
+    if auth_type == pq3.authn.OK:
+        expected_error = "server did not complete authentication"
+    else:
+        expected_error = "server sent unexpected additional OAuth data"
+
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+def test_no_discovery_url_provided(accept):
+    """
+    Tests what happens when the client doesn't know who to contact and the
+    server doesn't tell it.
+    """
+    issuer = "https://256.256.256.256"
+    sock, client = accept(
+        oauth_issuer=issuer,
+        oauth_client_id=secrets.token_hex(),
+    )
+
+    with sock:
+        handle_discovery_connection(sock, discovery=None)
+
+    expected_error = "no discovery metadata was provided"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
+
+
+@pytest.mark.parametrize("change_between_connections", [False, True])
+def test_discovery_url_changes(accept, openid_provider, change_between_connections):
+    """
+    Ensures that the client complains if the server agrees on the issuer, but
+    disagrees on the discovery URL to be used.
+    """
+
+    # Set up our provider callbacks.
+    # NOTE that these callbacks will be called on a background thread. Don't do
+    # any unprotected state mutation here.
+
+    def authorization_endpoint(headers, params):
+        resp = {
+            "device_code": "DEV",
+            "user_code": "USER",
+            "interval": 0,
+            "verification_uri": "https://example.org",
+            "expires_in": 5,
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "device_authorization_endpoint", "POST", "/device", authorization_endpoint
+    )
+
+    def token_endpoint(headers, params):
+        resp = {
+            "access_token": secrets.token_urlsafe(),
+            "token_type": "bearer",
+        }
+
+        return 200, resp
+
+    openid_provider.register_endpoint(
+        "token_endpoint", "POST", "/token", token_endpoint
+    )
+
+    # Have the client connect.
+    sock, client = accept(
+        oauth_issuer=openid_provider.discovery_uri,
+        oauth_client_id="some-id",
+    )
+
+    other_wkuri = f"{openid_provider.issuer}/.well-known/oauth-authorization-server"
+
+    if not change_between_connections:
+        # Immediately respond with the wrong URL.
+        with sock:
+            handle_discovery_connection(sock, other_wkuri)
+
+    else:
+        # First connection; use the right URL to begin with.
+        with sock:
+            handle_discovery_connection(sock, openid_provider.discovery_uri)
+
+        # Second connection. Reject the token and switch the URL.
+        sock, _ = accept()
+        with sock:
+            with pq3.wrap(sock, debug_stream=sys.stdout) as conn:
+                initial = start_oauth_handshake(conn)
+                get_auth_value(initial)
+
+                # Ignore the token; fail with a different discovery URL.
+                resp = {
+                    "status": "invalid_token",
+                    "openid-configuration": other_wkuri,
+                }
+                fail_oauth_handshake(conn, resp)
+
+    expected_error = rf"server's discovery document has moved to {other_wkuri} \(previous location was {openid_provider.discovery_uri}\)"
+    with pytest.raises(psycopg2.OperationalError, match=expected_error):
+        client.check_completed()
diff --git a/src/test/python/conftest.py b/src/test/python/conftest.py
new file mode 100644
index 00000000000..1a73865ee47
--- /dev/null
+++ b/src/test/python/conftest.py
@@ -0,0 +1,34 @@
+#
+# Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import os
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """
+    Adds custom command line options to py.test. We add one to signal temporary
+    Postgres instance creation for the server tests.
+
+    Per pytest documentation, this must live in the top level test directory.
+    """
+    parser.addoption(
+        "--temp-instance",
+        metavar="DIR",
+        help="create a temporary Postgres instance in DIR",
+    )
+
+
+@pytest.fixture(scope="session", autouse=True)
+def _check_PG_TEST_EXTRA(request):
+    """
+    Automatically skips the whole suite if PG_TEST_EXTRA doesn't contain
+    'python'. pytestmark doesn't seem to work in a top-level conftest.py, so
+    I've made this an autouse fixture instead.
+    """
+    extra_tests = os.getenv("PG_TEST_EXTRA", "").split()
+    if "python" not in extra_tests:
+        pytest.skip("Potentially unsafe test 'python' not enabled in PG_TEST_EXTRA")
diff --git a/src/test/python/meson.build b/src/test/python/meson.build
new file mode 100644
index 00000000000..e137df852ef
--- /dev/null
+++ b/src/test/python/meson.build
@@ -0,0 +1,47 @@
+# Copyright (c) 2023, PostgreSQL Global Development Group
+
+subdir('server')
+
+pytest_env = {
+  'with_libcurl': libcurl.found() ? 'yes' : 'no',
+
+  # Point to the default database; the tests will create their own databases as
+  # needed.
+  'PGDATABASE': 'postgres',
+
+  # Avoid the need for a Rust compiler on platforms without prebuilt wheels for
+  # pyca/cryptography.
+  'CRYPTOGRAPHY_DONT_BUILD_RUST': '1',
+}
+
+# Some modules (psycopg2) need OpenSSL at compile time; for platforms where we
+# might have multiple implementations installed (macOS+brew), try to use the
+# same one that libpq is using.
+if ssl.found()
+  pytest_incdir = ssl.get_variable(pkgconfig: 'includedir', default_value: '')
+  if pytest_incdir != ''
+    pytest_env += { 'CPPFLAGS': '-I@0@'.format(pytest_incdir) }
+  endif
+
+  pytest_libdir = ssl.get_variable(pkgconfig: 'libdir', default_value: '')
+  if pytest_libdir != ''
+    pytest_env += { 'LDFLAGS': '-L@0@'.format(pytest_libdir) }
+  endif
+endif
+
+tests += {
+  'name': 'python',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'pytest': {
+    'requirements': meson.current_source_dir() / 'requirements.txt',
+    'tests': [
+      './client',
+      './server',
+      './test_internals.py',
+      './test_pq3.py',
+    ],
+    'env': pytest_env,
+    'test_kwargs': {'priority': 50}, # python tests are slow, start early
+  },
+}
diff --git a/src/test/python/pq3.py b/src/test/python/pq3.py
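(Reviewer aside, not part of the patch: in the pq3 module below, the wire protocol version is just `major << 16 | minor`, and the pseudo-versions such as SSLRequest fall out of the same formula. A quick sanity check:)

```python
def protocol(major, minor):
    """Packs a frontend/backend protocol version into its wire integer."""
    return (major << 16) | minor


assert protocol(3, 0) == 0x00030000   # v3 startup
assert protocol(1234, 5679) == 80877103  # SSLRequest magic number
```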
new file mode 100644
index 00000000000..ef809e288af
--- /dev/null
+++ b/src/test/python/pq3.py
@@ -0,0 +1,740 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import ssl
+import sys
+import textwrap
+
+from construct import *
+
+import tls
+
+
+def protocol(major, minor):
+    """
+    Returns the protocol version, in integer format, corresponding to the given
+    major and minor version numbers.
+    """
+    return (major << 16) | minor
+
+
+# Startup
+
+StringList = GreedyRange(NullTerminated(GreedyBytes))
+
+
+class KeyValueAdapter(Adapter):
+    """
+    Turns a key-value store into a null-terminated list of null-terminated
+    strings, as presented on the wire in the startup packet.
+    """
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, list):
+            return obj
+
+        l = []
+
+        for k, v in obj.items():
+            if isinstance(k, str):
+                k = k.encode("utf-8")
+            l.append(k)
+
+            if isinstance(v, str):
+                v = v.encode("utf-8")
+            l.append(v)
+
+        l.append(b"")
+        return l
+
+    def _decode(self, obj, context, path):
+        # TODO: turn a list back into a dict
+        return obj
+
+
+KeyValues = KeyValueAdapter(StringList)
+
+_startup_payload = Switch(
+    this.proto,
+    {
+        protocol(3, 0): KeyValues,
+    },
+    default=GreedyBytes,
+)
+
+
+def _default_protocol(this):
+    try:
+        if isinstance(this.payload, (list, dict)):
+            return protocol(3, 0)
+    except AttributeError:
+        pass  # no payload passed during build
+
+    return 0
+
+
+def _startup_payload_len(this):
+    """
+    The payload field has a fixed size based on the length of the packet. But
+    if the caller hasn't supplied an explicit length at build time, we have to
+    build the payload to figure out how long it is, which requires us to know
+    the length first... This function exists solely to break the cycle.
+    """
+    assert this._building, "_startup_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    try:
+        proto = this.proto
+    except AttributeError:
+        proto = _default_protocol(this)
+
+    data = _startup_payload.build(payload, proto=proto)
+    return len(data)
+
+
+Startup = Struct(
+    "len" / Default(Int32sb, lambda this: _startup_payload_len(this) + 8),
+    "proto" / Default(Hex(Int32sb), _default_protocol),
+    "payload" / FixedSized(this.len - 8, Default(_startup_payload, b"")),
+)
+
+# Pq3
+
+
+# Adapted from construct.core.EnumIntegerString
+class EnumNamedByte:
+    def __init__(self, val, name):
+        self._val = val
+        self._name = name
+
+    def __int__(self):
+        return ord(self._val)
+
+    def __str__(self):
+        return "(enum) %s %r" % (self._name, self._val)
+
+    def __repr__(self):
+        return "EnumNamedByte(%r)" % self._val
+
+    def __eq__(self, other):
+        if isinstance(other, EnumNamedByte):
+            other = other._val
+        if not isinstance(other, bytes):
+            return NotImplemented
+
+        return self._val == other
+
+    def __hash__(self):
+        return hash(self._val)
+
+
+# Adapted from construct.core.Enum
+class ByteEnum(Adapter):
+    def __init__(self, **mapping):
+        super(ByteEnum, self).__init__(Byte)
+        self.namemapping = {k: EnumNamedByte(v, k) for k, v in mapping.items()}
+        self.decmapping = {v: EnumNamedByte(v, k) for k, v in mapping.items()}
+
+    def __getattr__(self, name):
+        if name in self.namemapping:
+            return self.decmapping[self.namemapping[name]]
+        raise AttributeError(name)
+
+    def _decode(self, obj, context, path):
+        b = bytes([obj])
+        try:
+            return self.decmapping[b]
+        except KeyError:
+            return EnumNamedByte(b, "(unknown)")
+
+    def _encode(self, obj, context, path):
+        if isinstance(obj, int):
+            return obj
+        elif isinstance(obj, bytes):
+            return ord(obj)
+        return int(obj)
+
+
+types = ByteEnum(
+    ErrorResponse=b"E",
+    ReadyForQuery=b"Z",
+    Query=b"Q",
+    EmptyQueryResponse=b"I",
+    AuthnRequest=b"R",
+    PasswordMessage=b"p",
+    BackendKeyData=b"K",
+    CommandComplete=b"C",
+    ParameterStatus=b"S",
+    DataRow=b"D",
+    Terminate=b"X",
+)
+
+
+authn = Enum(
+    Int32ub,
+    OK=0,
+    SASL=10,
+    SASLContinue=11,
+    SASLFinal=12,
+)
+
+
+_authn_body = Switch(
+    this.type,
+    {
+        authn.OK: Terminated,
+        authn.SASL: StringList,
+    },
+    default=GreedyBytes,
+)
+
+
+def _data_len(this):
+    assert this._building, "_data_len() cannot be called during parsing"
+
+    if not hasattr(this, "data") or this.data is None:
+        return -1
+
+    return len(this.data)
+
+
+# The protocol reuses the PasswordMessage for several authentication response
+# types, and there's no good way to figure out which is which without keeping
+# state for the entire stream. So this is a separate Construct that can be
+# explicitly parsed/built by code that knows it's needed.
+SASLInitialResponse = Struct(
+    "name" / NullTerminated(GreedyBytes),
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(GreedyBytes),
+        If(this.len != -1, Default(FixedSized(this.len, GreedyBytes), b"")),
+    ),
+    Terminated,  # make sure the entire response is consumed
+)
+
+
+_column = FocusedSeq(
+    "data",
+    "len" / Default(Int32sb, lambda this: _data_len(this)),
+    "data" / If(this.len != -1, FixedSized(this.len, GreedyBytes)),
+)
+
+
+_payload_map = {
+    types.ErrorResponse: Struct("fields" / StringList),
+    types.ReadyForQuery: Struct("status" / Bytes(1)),
+    types.Query: Struct("query" / NullTerminated(GreedyBytes)),
+    types.EmptyQueryResponse: Terminated,
+    types.AuthnRequest: Struct("type" / authn, "body" / Default(_authn_body, b"")),
+    types.BackendKeyData: Struct("pid" / Int32ub, "key" / Hex(Int32ub)),
+    types.CommandComplete: Struct("tag" / NullTerminated(GreedyBytes)),
+    types.ParameterStatus: Struct(
+        "name" / NullTerminated(GreedyBytes), "value" / NullTerminated(GreedyBytes)
+    ),
+    types.DataRow: Struct("columns" / Default(PrefixedArray(Int16sb, _column), b"")),
+    types.Terminate: Terminated,
+}
+
+
+_payload = FocusedSeq(
+    "_payload",
+    "_payload"
+    / Switch(
+        this._.type,
+        _payload_map,
+        default=GreedyBytes,
+    ),
+    Terminated,  # make sure every payload consumes the entire packet
+)
+
+
+def _payload_len(this):
+    """
+    See _startup_payload_len() for an explanation.
+    """
+    assert this._building, "_payload_len() cannot be called during parsing"
+
+    try:
+        payload = this.payload
+    except AttributeError:
+        return 0  # no payload
+
+    if isinstance(payload, bytes):
+        # already serialized; just use the given length
+        return len(payload)
+
+    data = _payload.build(payload, type=this.type)
+    return len(data)
+
+
+Pq3 = Struct(
+    "type" / types,
+    "len" / Default(Int32ub, lambda this: _payload_len(this) + 4),
+    "payload"
+    / IfThenElse(
+        # Allow tests to explicitly pass an incorrect length during testing, by
+        # not enforcing a FixedSized during build. (The len calculation above
+        # defaults to the correct size.)
+        this._building,
+        Optional(_payload),
+        FixedSized(this.len - 4, Default(_payload, b"")),
+    ),
+)
+
+
+# Environment
+
+
+def pghost():
+    return os.environ.get("PGHOST", default="localhost")
+
+
+def pgport():
+    return int(os.environ.get("PGPORT", default=5432))
+
+
+def pguser():
+    try:
+        return os.environ["PGUSER"]
+    except KeyError:
+        if platform.system() == "Windows":
+            # libpq defaults to GetUserName() on Windows.
+            return os.getlogin()
+        return getpass.getuser()
+
+
+def pgdatabase():
+    return os.environ.get("PGDATABASE", default="postgres")
+
+
+# Connections
+
+
+def _hexdump_translation_map():
+    """
+    For hexdumps. Translates any unprintable or non-ASCII bytes into '.'.
+    """
+    input = bytearray()
+
+    for i in range(128):
+        c = chr(i)
+
+        if not c.isprintable():
+            input += bytes([i])
+
+    input += bytes(range(128, 256))
+
+    return bytes.maketrans(input, b"." * len(input))
+
+
+class _DebugStream(object):
+    """
+    Wraps a file-like object and adds hexdumps of the read and write data. Call
+    end_packet() on a _DebugStream to write the accumulated hexdumps to the
+    output stream, along with the packet that was sent.
+    """
+
+    _translation_map = _hexdump_translation_map()
+
+    def __init__(self, stream, out=sys.stdout):
+        """
+        Creates a new _DebugStream wrapping the given stream (which must have
+        been created by wrap()). All attributes not provided by the _DebugStream
+        are delegated to the wrapped stream. out is the text stream to which
+        hexdumps are written.
+        """
+        self.raw = stream
+        self._out = out
+        self._rbuf = io.BytesIO()
+        self._wbuf = io.BytesIO()
+
+    def __getattr__(self, name):
+        return getattr(self.raw, name)
+
+    def __setattr__(self, name, value):
+        if name in ("raw", "_out", "_rbuf", "_wbuf"):
+            return object.__setattr__(self, name, value)
+
+        setattr(self.raw, name, value)
+
+    def read(self, *args, **kwargs):
+        buf = self.raw.read(*args, **kwargs)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def write(self, b):
+        self._wbuf.write(b)
+        return self.raw.write(b)
+
+    def recv(self, *args):
+        buf = self.raw.recv(*args)
+
+        self._rbuf.write(buf)
+        return buf
+
+    def _flush(self, buf, prefix):
+        width = 16
+        hexwidth = width * 3 - 1
+
+        count = 0
+        buf.seek(0)
+
+        while True:
+            line = buf.read(width)
+
+            if not line:
+                if count:
+                    self._out.write("\n")  # separate the output block with a newline
+                return
+
+            self._out.write("%s %04X:\t" % (prefix, count))
+            self._out.write("%*s\t" % (-hexwidth, line.hex(" ")))
+            self._out.write(line.translate(self._translation_map).decode("ascii"))
+            self._out.write("\n")
+
+            count += width
+
+    def print_debug(self, obj, *, prefix=""):
+        contents = ""
+        if obj is not None:
+            contents = str(obj)
+
+        for line in contents.splitlines():
+            self._out.write("%s%s\n" % (prefix, line))
+
+        self._out.write("\n")
+
+    def flush_debug(self, *, prefix=""):
+        self._flush(self._rbuf, prefix + "<")
+        self._rbuf = io.BytesIO()
+
+        self._flush(self._wbuf, prefix + ">")
+        self._wbuf = io.BytesIO()
+
+    def end_packet(self, pkt, *, read=False, prefix="", indent="  "):
+        """
+        Marks the end of a logical "packet" of data. A string representation of
+        pkt will be printed, and the debug buffers will be flushed with an
+        indent. All lines can be optionally prefixed.
+
+        If read is True, the packet representation is written after the debug
+        buffers; otherwise the default of False (meaning write) causes the
+        packet representation to be dumped first. This is meant to capture the
+        logical flow of layer translation.
+        """
+        write = not read
+
+        if write:
+            self.print_debug(pkt, prefix=prefix + "> ")
+
+        self.flush_debug(prefix=prefix + indent)
+
+        if read:
+            self.print_debug(pkt, prefix=prefix + "< ")
+
+
+@contextlib.contextmanager
+def wrap(socket, *, debug_stream=None):
+    """
+    Transforms a raw socket into a connection that can be used for Construct
+    building and parsing. The return value is a context manager and can be used
+    in a with statement.
+    """
+    # It is critical that buffering be disabled here, so that we can still
+    # manipulate the raw socket without desyncing the stream.
+    with socket.makefile("rwb", buffering=0) as sfile:
+        # Expose the original socket's recv() on the SocketIO object we return.
+        def recv(self, *args):
+            return socket.recv(*args)
+
+        sfile.recv = recv.__get__(sfile)
+
+        conn = sfile
+        if debug_stream:
+            conn = _DebugStream(conn, debug_stream)
+
+        try:
+            yield conn
+        finally:
+            if debug_stream:
+                conn.flush_debug(prefix="? ")
+
+
+def _send(stream, cls, obj):
+    debugging = hasattr(stream, "flush_debug")
+    out = io.BytesIO()
+
+    # Ideally we would build directly to the passed stream, but because we need
+    # to reparse the generated output for the debugging case, build to an
+    # intermediate BytesIO and send it instead.
+    cls.build_stream(obj, out)
+    buf = out.getvalue()
+
+    stream.write(buf)
+    if debugging:
+        pkt = cls.parse(buf)
+        stream.end_packet(pkt)
+
+    stream.flush()
+
+
+def send(stream, packet_type, payload_data=None, **payloadkw):
+    """
+    Sends a packet on the given pq3 connection. type is the pq3.types member
+    that should be assigned to the packet. If payload_data is given, it will be
+    used as the packet payload; otherwise the key/value pairs in payloadkw will
+    be the payload contents.
+    """
+    data = payloadkw
+
+    if payload_data is not None:
+        if payloadkw:
+            raise ValueError(
+                "payload_data and payload keywords may not be used simultaneously"
+            )
+
+        data = payload_data
+
+    _send(stream, Pq3, dict(type=packet_type, payload=data))
+
+
+def send_startup(stream, proto=None, **kwargs):
+    """
+    Sends a startup packet on the given pq3 connection. In most cases you should
+    use the handshake functions instead, which will do this for you.
+
+    By default, a protocol version 3 packet will be sent. This can be overridden
+    with the proto parameter.
+    """
+    pkt = {}
+
+    if proto is not None:
+        pkt["proto"] = proto
+    if kwargs:
+        pkt["payload"] = kwargs
+
+    _send(stream, Startup, pkt)
+
+
+def recv1(stream, *, cls=Pq3):
+    """
+    Receives a single pq3 packet from the given stream and returns it.
+    """
+    resp = cls.parse_stream(stream)
+
+    debugging = hasattr(stream, "flush_debug")
+    if debugging:
+        stream.end_packet(resp, read=True)
+
+    return resp
+
+
+def handshake(stream, **kwargs):
+    """
+    Performs a libpq v3 startup handshake. kwargs should contain the key/value
+    parameters to send to the server in the startup packet.
+    """
+    # Send our startup parameters.
+    send_startup(stream, **kwargs)
+
+    # Receive and dump packets until the server indicates it's ready for our
+    # first query.
+    while True:
+        resp = recv1(stream)
+        if resp is None:
+            raise RuntimeError("server closed connection during handshake")
+
+        if resp.type == types.ReadyForQuery:
+            return
+        elif resp.type == types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {resp.payload.fields!r}"
+            )
+
+
+# TLS
+
+
+class _TLSStream(object):
+    """
+    A file-like object that performs TLS encryption/decryption on a wrapped
+    stream. Differs from ssl.SSLSocket in that we have full visibility and
+    control over the TLS layer.
+    """
+
+    def __init__(self, stream, context):
+        self._stream = stream
+        self._debugging = hasattr(stream, "flush_debug")
+
+        self._in = ssl.MemoryBIO()
+        self._out = ssl.MemoryBIO()
+        self._ssl = context.wrap_bio(self._in, self._out)
+
+    def handshake(self):
+        try:
+            self._pump(lambda: self._ssl.do_handshake())
+        finally:
+            self._flush_debug(prefix="? ")
+
+    def read(self, *args):
+        return self._pump(lambda: self._ssl.read(*args))
+
+    def write(self, *args):
+        return self._pump(lambda: self._ssl.write(*args))
+
+    def _decode(self, buf):
+        """
+        Attempts to decode a buffer of TLS data into a packet representation
+        that can be printed.
+
+        TODO: handle buffers (and record fragments) that don't align with packet
+        boundaries.
+        """
+        end = len(buf)
+        bio = io.BytesIO(buf)
+
+        ret = io.StringIO()
+
+        while bio.tell() < end:
+            record = tls.Plaintext.parse_stream(bio)
+
+            if ret.tell() > 0:
+                ret.write("\n")
+            ret.write("[Record] ")
+            ret.write(str(record))
+            ret.write("\n")
+
+            if record.type == tls.ContentType.handshake:
+                record_cls = tls.Handshake
+            else:
+                continue
+
+            innerlen = len(record.fragment)
+            inner = io.BytesIO(record.fragment)
+
+            while inner.tell() < innerlen:
+                msg = record_cls.parse_stream(inner)
+
+                indented = "[Message] " + str(msg)
+                indented = textwrap.indent(indented, "    ")
+
+                ret.write("\n")
+                ret.write(indented)
+                ret.write("\n")
+
+        return ret.getvalue()
+
+    def flush(self):
+        if not self._out.pending:
+            self._stream.flush()
+            return
+
+        buf = self._out.read()
+        self._stream.write(buf)
+
+        if self._debugging:
+            pkt = self._decode(buf)
+            self._stream.end_packet(pkt, prefix="  ")
+
+        self._stream.flush()
+
+    def _pump(self, operation):
+        while True:
+            try:
+                return operation()
+            except (ssl.SSLWantReadError, ssl.SSLWantWriteError) as e:
+                want = e
+            self._read_write(want)
+
+    def _recv(self, maxsize):
+        buf = self._stream.recv(maxsize)
+        if not buf:
+            self._in.write_eof()
+            return
+
+        self._in.write(buf)
+
+        if not self._debugging:
+            return
+
+        pkt = self._decode(buf)
+        self._stream.end_packet(pkt, read=True, prefix="  ")
+
+    def _read_write(self, want):
+        # XXX This needs work. So many corner cases yet to handle. For one,
+        # doing blocking writes in flush may lead to distributed deadlock if the
+        # peer is already blocking on its writes.
+
+        if isinstance(want, ssl.SSLWantWriteError):
+            assert self._out.pending, "SSL backend wants write without data"
+
+        self.flush()
+
+        if isinstance(want, ssl.SSLWantReadError):
+            self._recv(4096)
+
+    def _flush_debug(self, prefix):
+        if not self._debugging:
+            return
+
+        self._stream.flush_debug(prefix=prefix)
+
+
+@contextlib.contextmanager
+def tls_handshake(stream, context):
+    """
+    Performs a TLS handshake over the given stream (which must have been created
+    via a call to wrap()), and returns a new stream which transparently tunnels
+    data over the TLS connection.
+
+    If the passed stream has debugging enabled, the returned stream will also
+    have debugging, using the same output IO.
+    """
+    debugging = hasattr(stream, "flush_debug")
+
+    # Send our startup parameters.
+    send_startup(stream, proto=protocol(1234, 5679))
+
+    # Look at the SSL response.
+    resp = stream.read(1)
+    if debugging:
+        stream.flush_debug(prefix="  ")
+
+    if resp == b"N":
+        raise RuntimeError("server does not support SSLRequest")
+    if resp != b"S":
+        raise RuntimeError(f"unexpected response of type {resp!r} during TLS startup")
+
+    tls = _TLSStream(stream, context)
+    tls.handshake()
+
+    if debugging:
+        tls = _DebugStream(tls, stream._out)
+
+    try:
+        yield tls
+        # TODO: teardown/unwrap the connection?
+    finally:
+        if debugging:
+            tls.flush_debug(prefix="? ")
diff --git a/src/test/python/pytest.ini b/src/test/python/pytest.ini
new file mode 100644
index 00000000000..ab7a6e7fb96
--- /dev/null
+++ b/src/test/python/pytest.ini
@@ -0,0 +1,4 @@
+[pytest]
+
+markers =
+    slow: mark test as slow
diff --git a/src/test/python/requirements.txt b/src/test/python/requirements.txt
new file mode 100644
index 00000000000..0dfcffb83e0
--- /dev/null
+++ b/src/test/python/requirements.txt
@@ -0,0 +1,11 @@
+black
+# cryptography 35.x and later add many platform/toolchain restrictions, beware
+cryptography~=3.4.8
+# TODO: figure out why 2.10.70 broke things
+# (probably https://github.com/construct/construct/pull/1015)
+construct==2.10.69
+isort~=5.6
+# TODO: update to psycopg[c] 3.1
+psycopg2~=2.9.7
+pytest~=7.3
+pytest-asyncio~=0.21.0
diff --git a/src/test/python/server/__init__.py b/src/test/python/server/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/test/python/server/conftest.py b/src/test/python/server/conftest.py
new file mode 100644
index 00000000000..42af80c73ee
--- /dev/null
+++ b/src/test/python/server/conftest.py
@@ -0,0 +1,141 @@
+#
+# Portions Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import collections
+import contextlib
+import os
+import shutil
+import socket
+import subprocess
+import sys
+
+import pytest
+
+import pq3
+
+BLOCKING_TIMEOUT = 2  # the number of seconds to wait for blocking calls
+
+
+def cleanup_prior_instance(datadir):
+    """
+    Clean up an existing data directory, but make sure it actually looks like a
+    data directory first. (Empty folders will remain untouched, since initdb can
+    populate them.)
+    """
+    required_entries = set(["base", "PG_VERSION", "postgresql.conf"])
+    empty = True
+
+    try:
+        with os.scandir(datadir) as entries:
+            for e in entries:
+                empty = False
+                required_entries.discard(e.name)
+
+    except FileNotFoundError:
+        return  # nothing to clean up
+
+    if empty:
+        return  # initdb can handle an empty datadir
+
+    if required_entries:
+        pytest.fail(
+            f"--temp-instance directory \"{datadir}\" is not empty and doesn't look like a data directory (missing {', '.join(required_entries)})"
+        )
+
+    # Okay, seems safe enough now.
+    shutil.rmtree(datadir)
+
+
+@pytest.fixture(scope="session")
+def postgres_instance(pytestconfig, unused_tcp_port_factory):
+    """
+    If --temp-instance has been passed to pytest, this fixture runs a temporary
+    Postgres instance on an available port. Otherwise, the fixture will attempt
+    to contact a running Postgres server on (PGHOST, PGPORT); dependent tests
+    will be skipped if the connection fails.
+
+    Yields a (host, port) tuple for connecting to the server.
+    """
+    PGInstance = collections.namedtuple("PGInstance", ["addr", "temporary"])
+
+    datadir = pytestconfig.getoption("temp_instance")
+    if datadir:
+        # We were told to create a temporary instance. Use pg_ctl to set it up
+        # on an unused port.
+        cleanup_prior_instance(datadir)
+        subprocess.run(["pg_ctl", "-D", datadir, "init"], check=True)
+
+        # The CI looks for *.log files to upload, so the file name here isn't
+        # completely arbitrary.
+        log = os.path.join(datadir, "postmaster.log")
+        port = unused_tcp_port_factory()
+
+        subprocess.run(
+            [
+                "pg_ctl",
+                "-D",
+                datadir,
+                "-l",
+                log,
+                "-o",
+                " ".join(
+                    [
+                        f"-c port={port}",
+                        "-c listen_addresses=localhost",
+                        "-c log_connections=on",
+                        "-c session_preload_libraries=oauthtest",
+                        "-c oauth_validator_libraries=oauthtest",
+                    ]
+                ),
+                "start",
+            ],
+            check=True,
+        )
+
+        yield ("localhost", port)
+
+        subprocess.run(["pg_ctl", "-D", datadir, "stop"], check=True)
+
+    else:
+        # Try to contact an already running server; skip the suite if we can't
+        # find one.
+        addr = (pq3.pghost(), pq3.pgport())
+
+        try:
+            with socket.create_connection(addr, timeout=BLOCKING_TIMEOUT):
+                pass
+        except OSError as e:
+            pytest.skip(f"unable to connect to Postgres server at {addr}: {e}")
+
+        yield addr
+
+
+@pytest.fixture
+def connect(postgres_instance):
+    """
+    A factory fixture that, when called, returns a socket connected to a
+    Postgres server, wrapped in a pq3 connection. Dependent tests will be
+    skipped if no server is available.
+    """
+    addr = postgres_instance
+
+    # Set up an ExitStack to handle safe cleanup of all of the moving pieces.
+    with contextlib.ExitStack() as stack:
+
+        def conn_factory():
+            sock = socket.create_connection(addr, timeout=BLOCKING_TIMEOUT)
+
+            # Have ExitStack close our socket.
+            stack.enter_context(sock)
+
+            # Wrap the connection in a pq3 layer and have ExitStack clean it up
+            # too.
+            wrap_ctx = pq3.wrap(sock, debug_stream=sys.stdout)
+            conn = stack.enter_context(wrap_ctx)
+
+            return conn
+
+        yield conn_factory
diff --git a/src/test/python/server/meson.build b/src/test/python/server/meson.build
new file mode 100644
index 00000000000..85534b9cc99
--- /dev/null
+++ b/src/test/python/server/meson.build
@@ -0,0 +1,18 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+oauthtest_sources = files(
+  'oauthtest.c',
+)
+
+if host_system == 'windows'
+  oauthtest_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauthtest',
+    '--FILEDESC', 'passthrough module to validate OAuth tests',
+  ])
+endif
+
+oauthtest = shared_module('oauthtest',
+  oauthtest_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += oauthtest
diff --git a/src/test/python/server/oauthtest.c b/src/test/python/server/oauthtest.c
new file mode 100644
index 00000000000..cb7f20f4022
--- /dev/null
+++ b/src/test/python/server/oauthtest.c
@@ -0,0 +1,119 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauthtest.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/python/server/oauthtest.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void test_startup(ValidatorModuleState *state);
+static void test_shutdown(ValidatorModuleState *state);
+static bool test_validate(const ValidatorModuleState *state,
+						  const char *token,
+						  const char *role,
+						  ValidatorModuleResult *result);
+
+static const OAuthValidatorCallbacks callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.startup_cb = test_startup,
+	.shutdown_cb = test_shutdown,
+	.validate_cb = test_validate,
+};
+
+static char *expected_bearer = "";
+static bool set_authn_id = false;
+static char *authn_id = "";
+static bool reflect_role = false;
+
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauthtest.expected_bearer",
+							   "Expected Bearer token for future connections",
+							   NULL,
+							   &expected_bearer,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.set_authn_id",
+							 "Whether to set an authenticated identity",
+							 NULL,
+							 &set_authn_id,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+	DefineCustomStringVariable("oauthtest.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   "",
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+
+	DefineCustomBoolVariable("oauthtest.reflect_role",
+							 "Ignore the bearer token; use the requested role as the authn_id",
+							 NULL,
+							 &reflect_role,
+							 false,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauthtest");
+}
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &callbacks;
+}
+
+static void
+test_startup(ValidatorModuleState *state)
+{
+}
+
+static void
+test_shutdown(ValidatorModuleState *state)
+{
+}
+
+static bool
+test_validate(const ValidatorModuleState *state,
+			  const char *token, const char *role,
+			  ValidatorModuleResult *res)
+{
+	if (reflect_role)
+	{
+		res->authorized = true;
+		res->authn_id = pstrdup(role);
+	}
+	else
+	{
+		if (*expected_bearer && strcmp(token, expected_bearer) == 0)
+			res->authorized = true;
+		if (set_authn_id)
+			res->authn_id = pstrdup(authn_id);
+	}
+
+	return true;
+}
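For readers following the C module above, the decision logic in `test_validate()` can be modeled in a few lines of Python (a sketch mirroring the GUC-driven branches; the tuple stands in for the `authorized`/`authn_id` fields of `ValidatorModuleResult`):

```python
def validate(token, role, *, reflect_role=False, expected_bearer="",
             set_authn_id=False, authn_id=""):
    """Python model of test_validate() above.

    Returns an (authorized, authn_id) pair mirroring ValidatorModuleResult.
    """
    if reflect_role:
        # Ignore the token entirely; authorize and echo the requested role.
        return True, role

    # Authorize only if an expected bearer is configured and matches.
    authorized = bool(expected_bearer) and token == expected_bearer

    # Optionally report an authenticated identity, independent of the
    # authorization result.
    return authorized, (authn_id if set_authn_id else None)
```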
diff --git a/src/test/python/server/test_oauth.py b/src/test/python/server/test_oauth.py
new file mode 100644
index 00000000000..2839343ffa1
--- /dev/null
+++ b/src/test/python/server/test_oauth.py
@@ -0,0 +1,1080 @@
+#
+# Copyright 2021 VMware, Inc.
+# Portions Copyright 2023 Timescale, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import base64
+import contextlib
+import json
+import os
+import pathlib
+import platform
+import secrets
+import shlex
+import shutil
+import socket
+import struct
+from multiprocessing import shared_memory
+
+import psycopg2
+import pytest
+from construct import Container
+from psycopg2 import sql
+
+import pq3
+
+from .conftest import BLOCKING_TIMEOUT
+
+MAX_SASL_MESSAGE_LENGTH = 65535
+
+INVALID_AUTHORIZATION_ERRCODE = b"28000"
+PROTOCOL_VIOLATION_ERRCODE = b"08P01"
+FEATURE_NOT_SUPPORTED_ERRCODE = b"0A000"
+
+SHARED_MEM_NAME = "oauth-pytest"
+MAX_UINT16 = 2**16 - 1
+
+
+@contextlib.contextmanager
+def prepend_file(path, lines, *, suffix=".bak"):
+    """
+    A context manager that prepends a file on disk with the desired lines of
+    text. When the context manager is exited, the file will be restored to its
+    original contents.
+    """
+    # First make a backup of the original file.
+    bak = path + suffix
+    shutil.copy2(path, bak)
+
+    try:
+        # Write the new lines, followed by the original file content.
+        with open(path, "w") as new, open(bak, "r") as orig:
+            new.writelines(lines)
+            shutil.copyfileobj(orig, new)
+
+        # Return control to the calling code.
+        yield
+
+    finally:
+        # Put the backup back into place.
+        os.replace(bak, path)
+
+
+@pytest.fixture(scope="module")
+def oauth_ctx(postgres_instance):
+    """
+    Creates a database and user that use the oauth auth method. The context
+    object contains the dbname and user attributes as strings to be used during
+    connection, as well as the issuer and scope that have been set in the HBA
+    configuration.
+
+    This fixture assumes that the standard PG* environment variables point to a
+    server running on a local machine, and that the PGUSER has rights to create
+    databases and roles.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        dbname = "oauth_test_" + id
+
+        user = "oauth_user_" + id
+        punct_user = "oauth_\"'? ;&!_user_" + id  # username w/ punctuation
+        map_user = "oauth_map_user_" + id
+        authz_user = "oauth_authz_user_" + id
+
+        issuer = "https://example.com/" + id
+        scope = "openid " + id
+
+    ctx = Context()
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.map_user}   samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" map=oauth\n',
+        f'host {ctx.dbname} {ctx.authz_user} samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}" delegate_ident_mapping=1\n',
+        f'host {ctx.dbname} all              samehost oauth issuer="{ctx.issuer}" scope="{ctx.scope}"\n',
+    ]
+    ident_lines = [r"oauth /^(.*)@example\.com$ \1"]
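The pg_ident rule above maps token identities of the form `user@example.com` to the bare role name. The same transform, written as an illustrative Python helper (the function name is ours, not part of the patch):

```python
import re


def map_identity(authn_id):
    # Mirror of the pg_ident rule above: accept only identities ending in
    # @example.com and strip the suffix; anything else fails to map.
    m = re.match(r"^(.*)@example\.com$", authn_id)
    return m.group(1) if m else None
```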
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Create our roles and database.
+        user = sql.Identifier(ctx.user)
+        punct_user = sql.Identifier(ctx.punct_user)
+        map_user = sql.Identifier(ctx.map_user)
+        authz_user = sql.Identifier(ctx.authz_user)
+        dbname = sql.Identifier(ctx.dbname)
+
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(punct_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(map_user))
+        c.execute(sql.SQL("CREATE ROLE {} LOGIN;").format(authz_user))
+        c.execute(sql.SQL("CREATE DATABASE {};").format(dbname))
+
+        # Replace pg_hba and pg_ident.
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        c.execute("SHOW ident_file;")
+        ident = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines), prepend_file(ident, ident_lines):
+            c.execute("SELECT pg_reload_conf();")
+
+            # Use the new database and user.
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+        c.execute(sql.SQL("DROP DATABASE {};").format(dbname))
+        c.execute(sql.SQL("DROP ROLE {};").format(authz_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(map_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(punct_user))
+        c.execute(sql.SQL("DROP ROLE {};").format(user))
+
+
+@pytest.fixture()
+def conn(oauth_ctx, connect):
+    """
+    A convenience wrapper for connect(). The main purpose of this fixture is to
+    make sure oauth_ctx runs its setup code before the connection is made.
+    """
+    return connect()
+
+
+def bearer_token(*, size=16):
+    """
+    Generates a Bearer token using secrets.token_urlsafe(). The generated token
+    size in ASCII characters may be specified; if unset, a small 16-character
+    token will be generated.
+    """
+
+    if size % 4:
+        raise ValueError(f"requested token size {size} is not a multiple of 4")
+
+    token = secrets.token_urlsafe(size // 4 * 3)
+    assert len(token) == size
+
+    return token
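The `size // 4 * 3` arithmetic above works because `secrets.token_urlsafe(n)` base64url-encodes `n` random bytes without padding, producing `ceil(4n/3)` characters; when `size` is a multiple of 4, requesting `size // 4 * 3` bytes yields exactly `size` characters. A self-contained sketch of the same helper:

```python
import secrets


def urlsafe_token(size=16):
    # token_urlsafe(n) emits unpadded base64url text: 3 input bytes become
    # exactly 4 output characters. So n = size // 4 * 3 bytes encode to
    # precisely `size` characters, provided size is a multiple of 4.
    if size % 4:
        raise ValueError(f"requested token size {size} is not a multiple of 4")
    return secrets.token_urlsafe(size // 4 * 3)
```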
+
+
+def begin_oauth_handshake(conn, oauth_ctx, *, user=None):
+    if user is None:
+        user = oauth_ctx.authz_user
+
+    pq3.send_startup(conn, user=user, database=oauth_ctx.dbname)
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    # The server should advertise exactly one mechanism.
+    assert resp.payload.type == pq3.authn.SASL
+    assert resp.payload.body == [b"OAUTHBEARER", b""]
+
+
+def send_initial_response(conn, *, auth=None, bearer=None):
+    """
+    Sends the OAUTHBEARER initial response on the connection, using the given
+    bearer token. As an alternative to a bearer token, the initial response's
+    auth field may be explicitly specified to test corner cases.
+    """
+    if bearer is not None and auth is not None:
+        raise ValueError("only one of the auth and bearer kwargs may be set")
+
+    if bearer is not None:
+        auth = b"Bearer " + bearer
+
+    if auth is None:
+        raise ValueError("exactly one of the auth and bearer kwargs must be set")
+
+    initial = pq3.SASLInitialResponse.build(
+        dict(
+            name=b"OAUTHBEARER",
+            data=b"n,,\x01auth=" + auth + b"\x01\x01",
+        )
+    )
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
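The byte string built above is the OAUTHBEARER initial client response from RFC 7628, section 3.1: a GS2 header (`n,,` means no channel binding), followed by key/value pairs each preceded by a 0x01 separator, and a final 0x01 terminating the list. A small builder restating the layout (the function name is ours, for illustration):

```python
def oauthbearer_client_response(token):
    # RFC 7628 §3.1 initial client response: GS2 header "n,," (no channel
    # binding, no authzid), then ^A-separated key=value pairs, then a
    # trailing ^A to end the message.
    return b"n,,\x01auth=Bearer " + token + b"\x01\x01"
```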
+
+
+def expect_handshake_success(conn):
+    """
+    Validates that the server responds with an AuthnOK message, and then drains
+    the connection until a ReadyForQuery message is received.
+    """
+    resp = pq3.recv1(conn)
+
+    assert resp.type == pq3.types.AuthnRequest
+    assert resp.payload.type == pq3.authn.OK
+    assert not resp.payload.body
+
+    receive_until(conn, pq3.types.ReadyForQuery)
+
+
+def expect_handshake_failure(conn, oauth_ctx):
+    """
+    Performs the OAUTHBEARER SASL failure "handshake" and validates the server's
+    side of the conversation, including the final ErrorResponse.
+    """
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.AuthnRequest
+
+    req = resp.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+    assert body["scope"] == oauth_ctx.scope
+
+    expected_config = oauth_ctx.issuer + "/.well-known/openid-configuration"
+    assert body["openid-configuration"] == expected_config
+
+    # Send the dummy response to complete the failed handshake.
+    pq3.send(conn, pq3.types.PasswordMessage, b"\x01")
+    resp = pq3.recv1(conn)
+
+    err = ExpectedError(INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed")
+    err.match(resp)
+
+
+def receive_until(conn, type):
+    """
+    receive_until pulls packets off the pq3 connection until a packet with the
+    desired type is found, or an error response is received.
+    """
+    while True:
+        pkt = pq3.recv1(conn)
+
+        if pkt.type == type:
+            return pkt
+        elif pkt.type == pq3.types.ErrorResponse:
+            raise RuntimeError(
+                f"received error response from peer: {pkt.payload.fields!r}"
+            )
+
+
+@pytest.fixture()
+def setup_validator(postgres_instance):
+    """
+    A per-test fixture that configures the test validator's behavior. Any
+    modified settings will be reverted during teardown.
+    """
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+        prev = dict()
+
+        def setter(**gucs):
+            for guc, val in gucs.items():
+                # Save the previous value.
+                c.execute(sql.SQL("SHOW oauthtest.{};").format(sql.Identifier(guc)))
+                prev[guc] = c.fetchone()[0]
+
+                c.execute(
+                    sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                        sql.Identifier(guc)
+                    ),
+                    (val,),
+                )
+                c.execute("SELECT pg_reload_conf();")
+
+        yield setter
+
+        # Restore the previous values.
+        for guc, val in prev.items():
+            c.execute(
+                sql.SQL("ALTER SYSTEM SET oauthtest.{} TO %s;").format(
+                    sql.Identifier(guc)
+                ),
+                (val,),
+            )
+            c.execute("SELECT pg_reload_conf();")
+
+
+@pytest.mark.parametrize("token_len", [16, 1024, 4096])
+@pytest.mark.parametrize(
+    "auth_prefix",
+    [
+        b"Bearer ",
+        b"bearer ",
+        b"Bearer    ",
+    ],
+)
+def test_oauth(setup_validator, connect, oauth_ctx, auth_prefix, token_len):
+    # Generate our bearer token with the desired length.
+    token = bearer_token(size=token_len)
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    auth = auth_prefix + token.encode("ascii")
+    send_initial_response(conn, auth=auth)
+    expect_handshake_success(conn)
+
+    # Make sure that the server has not set an authenticated ID.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    assert row.columns == [None]
+
+
+@pytest.mark.parametrize(
+    "token_value",
+    [
+        "abcdzA==",
+        "123456M=",
+        "x-._~+/x",
+    ],
+)
+def test_oauth_bearer_corner_cases(setup_validator, connect, oauth_ctx, token_value):
+    setup_validator(expected_bearer=token_value)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    send_initial_response(conn, bearer=token_value.encode("ascii"))
+
+    expect_handshake_success(conn)
+
+
+@pytest.mark.parametrize(
+    "user,authn_id,should_succeed",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.user,
+            True,
+            id="validator authn: succeeds when authn_id == username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: None,
+            False,
+            id="validator authn: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: "",
+            False,
+            id="validator authn: fails when authn_id is empty",
+        ),
+        pytest.param(
+            lambda ctx: ctx.user,
+            lambda ctx: ctx.authz_user,
+            False,
+            id="validator authn: fails when authn_id != username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.com",
+            True,
+            id="validator with map: succeeds when authn_id matches map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: None,
+            False,
+            id="validator with map: fails when authn_id is not set",
+        ),
+        pytest.param(
+            lambda ctx: ctx.map_user,
+            lambda ctx: ctx.map_user + "@example.net",
+            False,
+            id="validator with map: fails when authn_id doesn't match map",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: None,
+            True,
+            id="validator authz: succeeds with no authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "",
+            True,
+            id="validator authz: succeeds with empty authn_id",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "postgres",
+            True,
+            id="validator authz: succeeds with basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.authz_user,
+            lambda ctx: "me@example.com",
+            True,
+            id="validator authz: succeeds with email address",
+        ),
+    ],
+)
+def test_oauth_authn_id(
+    setup_validator, connect, oauth_ctx, user, authn_id, should_succeed
+):
+    token = bearer_token()
+    authn_id = authn_id(oauth_ctx)
+
+    # Set up the validator appropriately.
+    gucs = dict(expected_bearer=token)
+    if authn_id is not None:
+        gucs["set_authn_id"] = True
+        gucs["authn_id"] = authn_id
+    setup_validator(**gucs)
+
+    conn = connect()
+    username = user(oauth_ctx)
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=token.encode("ascii"))
+
+    if not should_succeed:
+        expect_handshake_failure(conn, oauth_ctx)
+        return
+
+    expect_handshake_success(conn)
+
+    # Check the reported authn_id.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    expected = authn_id
+    if expected is not None:
+        expected = b"oauth:" + expected.encode("ascii")
+
+    row = resp.payload
+    assert row.columns == [expected]
+
+
+class ExpectedError(object):
+    def __init__(self, code, msg=None, detail=None):
+        self.code = code
+        self.msg = msg
+        self.detail = detail
+
+        # Protect against the footgun of an accidental empty string, which will
+        # "match" anything. If you don't want to match message or detail, just
+        # don't pass them.
+        if self.msg == "":
+            raise ValueError("msg must be non-empty or None")
+        if self.detail == "":
+            raise ValueError("detail must be non-empty or None")
+
+    def _getfield(self, resp, type):
+        """
+        Searches an ErrorResponse for a single field of the given type (e.g.
+        "M", "C", "D") and returns its value. Asserts if it doesn't find exactly
+        one field.
+        """
+        prefix = type.encode("ascii")
+        fields = [f for f in resp.payload.fields if f.startswith(prefix)]
+
+        assert len(fields) == 1
+        return fields[0][1:]  # strip off the type byte
+
+    def match(self, resp):
+        """
+        Checks that the given response matches the expected code, message, and
+        detail (if given). The error code must match exactly. The expected
+        message and detail must be contained within the actual strings.
+        """
+        assert resp.type == pq3.types.ErrorResponse
+
+        code = self._getfield(resp, "C")
+        assert code == self.code
+
+        if self.msg:
+            msg = self._getfield(resp, "M")
+            expected = self.msg.encode("utf-8")
+            assert expected in msg
+
+        if self.detail:
+            detail = self._getfield(resp, "D")
+            expected = self.detail.encode("utf-8")
+            assert expected in detail
+
+
+def test_oauth_rejected_bearer(conn, oauth_ctx):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send a bearer token that doesn't match what the validator expects. It
+    # should fail the connection.
+    send_initial_response(conn, bearer=b"xxxxxx")
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "bad_bearer",
+    [
+        b"Bearer    ",
+        b"Bearer a===b",
+        b"Bearer hello!",
+        b"Bearer trailingspace ",
+        b"Bearer trailingtab\t",
+        b"Bearer me@example.com",
+        b"Beare abcd",
+        b" Bearer leadingspace",
+        b'OAuth realm="Example"',
+        b"",
+    ],
+)
+def test_oauth_invalid_bearer(setup_validator, connect, oauth_ctx, bad_bearer):
+    # Tell the validator to accept any token. This ensures that the invalid
+    # bearer tokens are rejected before the validation step.
+    setup_validator(reflect_role=True)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, auth=bad_bearer)
+
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "resp_type,resp,err",
+    [
+        pytest.param(
+            None,
+            None,
+            None,
+            marks=pytest.mark.slow,
+            id="no response (expect timeout)",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"hello",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="bad dummy response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            b"\x01\x01",
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not send a kvsep response",
+            ),
+            id="multiple kvseps",
+        ),
+        pytest.param(
+            pq3.types.Query,
+            dict(query=b""),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="bad response message type",
+        ),
+    ],
+)
+def test_oauth_bad_response_to_error_challenge(conn, oauth_ctx, resp_type, resp, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    # We expect a discovery "challenge" back from the server before the authn
+    # failure message.
+    pkt = pq3.recv1(conn)
+    assert pkt.type == pq3.types.AuthnRequest
+
+    req = pkt.payload
+    assert req.type == pq3.authn.SASLContinue
+
+    body = json.loads(req.body)
+    assert body["status"] == "invalid_token"
+
+    if resp_type is None:
+        # Do not send the dummy response. We should time out and not get a
+        # response from the server.
+        with pytest.raises(socket.timeout):
+            conn.read(1)
+
+        # Done with the test.
+        return
+
+    # Send the bad response.
+    pq3.send(conn, resp_type, resp)
+
+    # Make sure the server fails the connection correctly.
+    pkt = pq3.recv1(conn)
+    err.match(pkt)
+
+
+@pytest.mark.parametrize(
+    "type,payload,err",
+    [
+        pytest.param(
+            pq3.types.ErrorResponse,
+            dict(fields=[b""]),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "expected SASL response"),
+            id="error response in initial message",
+        ),
+        pytest.param(
+            None,
+            # Sending an actual 65k packet results in ECONNRESET on Windows, and
+            # it floods the tests' connection log uselessly, so just fake the
+            # length and send a smaller number of bytes.
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=MAX_SASL_MESSAGE_LENGTH + 1,
+                payload=b"x" * 512,
+            ),
+            ExpectedError(
+                INVALID_AUTHORIZATION_ERRCODE, "bearer authentication failed"
+            ),
+            id="overlong initial response data",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"SCRAM-SHA-256")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE, "invalid SASL authentication mechanism"
+            ),
+            id="bad SASL mechanism selection",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=2, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "insufficient data"),
+            id="SASL data underflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", len=0, data=b"x")),
+            ExpectedError(PROTOCOL_VIOLATION_ERRCODE, "invalid message format"),
+            id="SASL data overflow",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "message is empty",
+            ),
+            id="empty",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"n,,\x01auth=\x01\x01\0")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "length does not match input length",
+            ),
+            id="contains null byte",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",  # XXX this is a bit strange
+            ),
+            id="initial error response",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"p=tls-server-end-point,,\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "server does not support channel binding",
+            ),
+            id="uses channel binding",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"x,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected channel-binding flag",
+            ),
+            id="invalid channel binding specifier",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Comma expected",
+            ),
+            id="bad GS2 header: missing channel binding terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,a")),
+            ExpectedError(
+                FEATURE_NOT_SUPPORTED_ERRCODE,
+                "client uses authorization identity",
+            ),
+            id="bad GS2 header: authzid in use",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,b,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Unexpected attribute",
+            ),
+            id="bad GS2 header: extra attribute",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                'Unexpected attribute "0x00"',  # XXX this is a bit strange
+            ),
+            id="bad GS2 header: missing authzid terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "Key-value separator expected",
+            ),
+            id="missing initial kvsep",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: empty key-value list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "does not contain an auth value",
+            ),
+            id="missing auth value: other keys present",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01host=example.com")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "unterminated key/value pair",
+            ),
+            id="missing value terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER", data=b"y,,\x01")),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: empty list",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "did not contain a final terminator",
+            ),
+            id="missing list terminator: with auth value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01auth=Bearer 0\x01\x01blah")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "additional data after the final terminator",
+            ),
+            id="additional key after terminator",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(name=b"OAUTHBEARER", data=b"y,,\x01key\x01\x01")
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "key without a value",
+            ),
+            id="key without value",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01auth=Bearer 0\x01auth=Bearer 1\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "contains multiple auth values",
+            ),
+            id="multiple auth values",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01=\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "empty key name",
+            ),
+            id="empty key",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01my key= \x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid key name",
+            ),
+            id="whitespace in key name",
+        ),
+        pytest.param(
+            pq3.types.PasswordMessage,
+            pq3.SASLInitialResponse.build(
+                dict(
+                    name=b"OAUTHBEARER",
+                    data=b"y,,\x01key=a\x05b\x01\x01",
+                )
+            ),
+            ExpectedError(
+                PROTOCOL_VIOLATION_ERRCODE,
+                "malformed OAUTHBEARER message",
+                "invalid value",
+            ),
+            id="junk in value",
+        ),
+    ],
+)
+def test_oauth_bad_initial_response(conn, oauth_ctx, type, payload, err):
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # The server expects a SASL response; give it something else instead.
+    if type is not None:
+        # Build a new packet of the desired type.
+        if not isinstance(payload, dict):
+            payload = dict(payload_data=payload)
+        pq3.send(conn, type, **payload)
+    else:
+        # The test has a custom packet to send. (The only reason to do this is
+        # if the packet is corrupt or otherwise unbuildable/unparsable, so we
+        # don't use the standard pq3.send().)
+        conn.write(pq3.Pq3.build(payload))
+        conn.end_packet(Container(payload))
+
+    resp = pq3.recv1(conn)
+    err.match(resp)
+
+
+def test_oauth_empty_initial_response(setup_validator, connect, oauth_ctx):
+    token = bearer_token()
+    setup_validator(expected_bearer=token)
+
+    conn = connect()
+    begin_oauth_handshake(conn, oauth_ctx)
+
+    # Send an initial response without data.
+    initial = pq3.SASLInitialResponse.build(dict(name=b"OAUTHBEARER"))
+    pq3.send(conn, pq3.types.PasswordMessage, initial)
+
+    # The server should respond with an empty challenge so we can send the data
+    # it wants.
+    pkt = pq3.recv1(conn)
+
+    assert pkt.type == pq3.types.AuthnRequest
+    assert pkt.payload.type == pq3.authn.SASLContinue
+    assert not pkt.payload.body
+
+    # Now send the initial data.
+    data = b"n,,\x01auth=Bearer " + token.encode("ascii") + b"\x01\x01"
+    pq3.send(conn, pq3.types.PasswordMessage, data)
+
+    # Server should now complete the handshake.
+    expect_handshake_success(conn)
+
+
+# TODO: see if there's a way to test this easily after the API switch
+def xtest_oauth_no_validator(setup_validator, oauth_ctx, connect):
+    # Clear out our validator command, then establish a new connection.
+    set_validator("")
+    conn = connect()
+
+    begin_oauth_handshake(conn, oauth_ctx)
+    send_initial_response(conn, bearer=bearer_token())
+
+    # The server should fail the connection.
+    expect_handshake_failure(conn, oauth_ctx)
+
+
+@pytest.mark.parametrize(
+    "user",
+    [
+        pytest.param(
+            lambda ctx: ctx.user,
+            id="basic username",
+        ),
+        pytest.param(
+            lambda ctx: ctx.punct_user,
+            id="'unsafe' characters are passed through correctly",
+        ),
+    ],
+)
+def test_oauth_validator_role(setup_validator, oauth_ctx, connect, user):
+    username = user(oauth_ctx)
+
+    # Tell the validator to reflect the PGUSER as the authenticated identity.
+    setup_validator(reflect_role=True)
+    conn = connect()
+
+    # Log in. Note that reflection ignores the bearer token.
+    begin_oauth_handshake(conn, oauth_ctx, user=username)
+    send_initial_response(conn, bearer=b"dontcare")
+    expect_handshake_success(conn)
+
+    # Check the user identity.
+    pq3.send(conn, pq3.types.Query, query=b"SELECT system_user;")
+    resp = receive_until(conn, pq3.types.DataRow)
+
+    row = resp.payload
+    expected = b"oauth:" + username.encode("utf-8")
+    assert row.columns == [expected]
+
+
+@pytest.fixture
+def odd_oauth_ctx(postgres_instance, oauth_ctx):
+    """
+    Adds an HBA entry with invalid issuer/scope settings, to pin down the
+    server's behavior.
+
+    TODO: these should really be rejected in the HBA rather than passed through
+    by the server.
+    """
+    id = secrets.token_hex(4)
+
+    class Context:
+        user = oauth_ctx.user
+        dbname = oauth_ctx.dbname
+
+        # Both of these embedded double-quotes are invalid; they're prohibited
+        # in both URLs and OAuth scope identifiers.
+        issuer = oauth_ctx.issuer + '/"/'
+        scope = oauth_ctx.scope + ' quo"ted'
+
+    ctx = Context()
+    hba_issuer = ctx.issuer.replace('"', '""')
+    hba_scope = ctx.scope.replace('"', '""')
+    hba_lines = [
+        f'host {ctx.dbname} {ctx.user} samehost oauth issuer="{hba_issuer}" scope="{hba_scope}"\n',
+    ]
+
+    if platform.system() == "Windows":
+        # XXX why is 'samehost' not behaving as expected on Windows?
+        for l in list(hba_lines):
+            hba_lines.append(l.replace("samehost", "::1/128"))
+
+    host, port = postgres_instance
+    conn = psycopg2.connect(host=host, port=port)
+    conn.autocommit = True
+
+    with contextlib.closing(conn):
+        c = conn.cursor()
+
+        # Replace pg_hba. Note that it's already been replaced once by
+        # oauth_ctx, so use a different backup prefix in prepend_file().
+        c.execute("SHOW hba_file;")
+        hba = c.fetchone()[0]
+
+        with prepend_file(hba, hba_lines, suffix=".bak2"):
+            c.execute("SELECT pg_reload_conf();")
+
+            yield ctx
+
+        # Put things back the way they were.
+        c.execute("SELECT pg_reload_conf();")
+
+
+def test_odd_server_response(odd_oauth_ctx, connect):
+    """
+    Verifies that the server is correctly escaping the JSON in its failure
+    response.
+    """
+    conn = connect()
+    begin_oauth_handshake(conn, odd_oauth_ctx, user=odd_oauth_ctx.user)
+
+    # Send an empty auth initial response, which will force an authn failure.
+    send_initial_response(conn, auth=b"")
+
+    expect_handshake_failure(conn, odd_oauth_ctx)
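For reference, the well-formed message that all of these malformed-input cases are perturbing is the OAUTHBEARER initial client response from RFC 7628: a GS2 header, then \x01-separated key/value pairs, ending in a double \x01 terminator (this is the same byte layout used directly in test_oauth_empty_initial_response above). A minimal sketch of that layout; the helper name is mine, not part of the patch:

```python
def build_initial_response(token: str) -> bytes:
    """Build a syntactically valid OAUTHBEARER initial client response
    (RFC 7628): GS2 header, kvsep-delimited key/value pairs, and a
    double-kvsep terminator."""
    kvsep = b"\x01"
    gs2 = b"n,,"  # no channel binding, no authzid
    auth = b"auth=Bearer " + token.encode("ascii")
    return gs2 + kvsep + auth + kvsep + kvsep

msg = build_initial_response("abc123")
assert msg == b"n,,\x01auth=Bearer abc123\x01\x01"
```

Each negative test above drops or mangles exactly one of these pieces (the GS2 header, the auth key, a kvsep, or the final terminator) and checks the server's error wording.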
diff --git a/src/test/python/server/test_server.py b/src/test/python/server/test_server.py
new file mode 100644
index 00000000000..02126dba792
--- /dev/null
+++ b/src/test/python/server/test_server.py
@@ -0,0 +1,21 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import pq3
+
+
+def test_handshake(connect):
+    """Basic sanity check."""
+    conn = connect()
+
+    pq3.handshake(conn, user=pq3.pguser(), database=pq3.pgdatabase())
+
+    pq3.send(conn, pq3.types.Query, query=b"")
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.EmptyQueryResponse
+
+    resp = pq3.recv1(conn)
+    assert resp.type == pq3.types.ReadyForQuery
diff --git a/src/test/python/test_internals.py b/src/test/python/test_internals.py
new file mode 100644
index 00000000000..dee4855fc0b
--- /dev/null
+++ b/src/test/python/test_internals.py
@@ -0,0 +1,138 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import io
+
+from pq3 import _DebugStream
+
+
+def test_DebugStream_read():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    res = stream.read(16)
+    assert res == b"fghijklmnopqrstu"
+
+    stream.flush_debug()
+
+    res = stream.read()
+    assert res == b"vwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70\tabcdefghijklmnop\n"
+        "< 0010:\t71 72 73 74 75                                 \tqrstu\n"
+        "\n"
+        "< 0000:\t76 77 78 79 7a                                 \tvwxyz\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_write():
+    under = io.BytesIO()
+    out = io.StringIO()
+
+    stream = _DebugStream(under, out)
+
+    stream.write(b"\x00\x01\x02")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02"
+
+    stream.write(b"\xc0\xc1\xc2")
+    stream.flush()
+
+    assert under.getvalue() == b"\x00\x01\x02\xc0\xc1\xc2"
+
+    stream.flush_debug()
+
+    expected = "> 0000:\t00 01 02 c0 c1 c2                              \t......\n\n"
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_read_write():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    res = stream.read(5)
+    assert res == b"abcde"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnopqrstuvwxyz"
+
+    res = stream.read(5)
+    assert res == b"klmno"
+
+    stream.write(b"xxxxx")
+    stream.flush()
+
+    assert under.getvalue() == b"abcdexxxxxklmnoxxxxxuvwxyz"
+
+    stream.flush_debug()
+
+    expected = (
+        "< 0000:\t61 62 63 64 65 6b 6c 6d 6e 6f                  \tabcdeklmno\n"
+        "\n"
+        "> 0000:\t78 78 78 78 78 78 78 78 78 78                  \txxxxxxxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
+
+
+def test_DebugStream_end_packet():
+    under = io.BytesIO(b"abcdefghijklmnopqrstuvwxyz")
+    out = io.StringIO()
+    stream = _DebugStream(under, out)
+
+    stream.read(5)
+    stream.end_packet("read description", read=True, indent=" ")
+
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("write description", indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for read", read=True, indent=" ")
+
+    stream.read(5)
+    stream.write(b"xxxxx")
+    stream.flush()
+    stream.end_packet("read/write combo for write", indent=" ")
+
+    expected = (
+        " < 0000:\t61 62 63 64 65                                 \tabcde\n"
+        "\n"
+        "< read description\n"
+        "\n"
+        "> write description\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        " < 0000:\t6b 6c 6d 6e 6f                                 \tklmno\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+        "< read/write combo for read\n"
+        "\n"
+        "> read/write combo for write\n"
+        "\n"
+        " < 0000:\t75 76 77 78 79                                 \tuvwxy\n"
+        "\n"
+        " > 0000:\t78 78 78 78 78                                 \txxxxx\n"
+        "\n"
+    )
+    assert out.getvalue() == expected
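The framing that the pq3 tests below exercise is the v3 protocol's: a one-byte message type followed by a four-byte big-endian length that counts itself plus the payload. A minimal sketch of that framing, under the assumption stated (the `frame` helper is illustrative, not part of pq3); the expected bytes match the Query and Terminate vectors in the tests:

```python
import struct

def frame(msg_type: bytes, payload: bytes) -> bytes:
    """Frame a v3 protocol message: type byte, then a self-inclusive
    int32 length, then the payload bytes."""
    return msg_type + struct.pack("!i", len(payload) + 4) + payload

assert frame(b"X", b"") == b"X\x00\x00\x00\x04"            # Terminate
assert frame(b"Q", b"SELECT 1;\x00") == b"Q\x00\x00\x00\x0eSELECT 1;\x00"
```

The "overridden len" cases in test_Pq3_build work because pq3 lets an explicit `len` field win over the computed one, which is exactly what the overlong-initial-response test at the top of this patch relies on.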
diff --git a/src/test/python/test_pq3.py b/src/test/python/test_pq3.py
new file mode 100644
index 00000000000..7c6817de31c
--- /dev/null
+++ b/src/test/python/test_pq3.py
@@ -0,0 +1,574 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+import contextlib
+import getpass
+import io
+import os
+import platform
+import struct
+import sys
+
+import pytest
+from construct import Container, PaddingError, StreamError, TerminatedError
+
+import pq3
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00\x00\x10\x00\x04\x00\x00abcdefgh",
+            Container(len=16, proto=0x40000, payload=b"abcdefgh"),
+            b"",
+            id="8-byte payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x08\x00\x04\x00\x00",
+            Container(len=8, proto=0x40000, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x09\x00\x04\x00\x00abcde",
+            Container(len=9, proto=0x40000, payload=b"a"),
+            b"bcde",
+            id="1-byte payload and extra padding",
+        ),
+        pytest.param(
+            b"\x00\x00\x00\x0B\x00\x03\x00\x00hi\x00",
+            Container(len=11, proto=pq3.protocol(3, 0), payload=[b"hi"]),
+            b"",
+            id="implied parameter list when using proto version 3.0",
+        ),
+    ],
+)
+def test_Startup_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Startup.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "packet,expected_bytes",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="nothing set",
+        ),
+        pytest.param(
+            dict(len=10, proto=0x12345678),
+            b"\x00\x00\x00\x0A\x12\x34\x56\x78\x00\x00",
+            id="len and proto set explicitly",
+        ),
+        pytest.param(
+            dict(proto=0x12345678),
+            b"\x00\x00\x00\x08\x12\x34\x56\x78",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(proto=0x12345678, payload=b"abcd"),
+            b"\x00\x00\x00\x0C\x12\x34\x56\x78abcd",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(payload=[b""]),
+            b"\x00\x00\x00\x09\x00\x03\x00\x00\x00",
+            id="implied proto version 3 when sending parameters",
+        ),
+        pytest.param(
+            dict(payload=[b"hi", b""]),
+            b"\x00\x00\x00\x0C\x00\x03\x00\x00hi\x00\x00",
+            id="implied proto version 3 and len when sending more than one parameter",
+        ),
+        pytest.param(
+            dict(payload=dict(user="jsmith", database="postgres")),
+            b"\x00\x00\x00\x27\x00\x03\x00\x00user\x00jsmith\x00database\x00postgres\x00\x00",
+            id="auto-serialization of dict parameters",
+        ),
+    ],
+)
+def test_Startup_build(packet, expected_bytes):
+    actual = pq3.Startup.build(packet)
+    assert actual == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"*\x00\x00\x00\x08abcd",
+            dict(type=b"*", len=8, payload=b"abcd"),
+            b"",
+            id="4-byte payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x04",
+            dict(type=b"*", len=4, payload=b""),
+            b"",
+            id="no payload",
+        ),
+        pytest.param(
+            b"*\x00\x00\x00\x05xabcd",
+            dict(type=b"*", len=5, payload=b"x"),
+            b"abcd",
+            id="1-byte payload with extra padding",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=8,
+                payload=dict(type=pq3.authn.OK, body=None),
+            ),
+            b"",
+            id="AuthenticationOk",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x12\x00\x00\x00\x0AEXTERNAL\x00\x00",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=18,
+                payload=dict(type=pq3.authn.SASL, body=[b"EXTERNAL", b""]),
+            ),
+            b"",
+            id="AuthenticationSASL",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            dict(
+                type=pq3.types.AuthnRequest,
+                len=13,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"",
+            id="AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            b"p\x00\x00\x00\x0Bhunter2",
+            dict(
+                type=pq3.types.PasswordMessage,
+                len=11,
+                payload=b"hunter2",
+            ),
+            b"",
+            id="PasswordMessage",
+        ),
+        pytest.param(
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x00\x12\x34\x56\x78",
+            dict(
+                type=pq3.types.BackendKeyData,
+                len=12,
+                payload=dict(pid=0, key=0x12345678),
+            ),
+            b"",
+            id="BackendKeyData",
+        ),
+        pytest.param(
+            b"C\x00\x00\x00\x08SET\x00",
+            dict(
+                type=pq3.types.CommandComplete,
+                len=8,
+                payload=dict(tag=b"SET"),
+            ),
+            b"",
+            id="CommandComplete",
+        ),
+        pytest.param(
+            b"E\x00\x00\x00\x11Mbad!\x00Mdog!\x00\x00",
+            dict(type=b"E", len=17, payload=dict(fields=[b"Mbad!", b"Mdog!", b""])),
+            b"",
+            id="ErrorResponse",
+        ),
+        pytest.param(
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            dict(
+                type=pq3.types.ParameterStatus,
+                len=8,
+                payload=dict(name=b"a", value=b"b"),
+            ),
+            b"",
+            id="ParameterStatus",
+        ),
+        pytest.param(
+            b"Z\x00\x00\x00\x05x",
+            dict(type=b"Z", len=5, payload=dict(status=b"x")),
+            b"",
+            id="ReadyForQuery",
+        ),
+        pytest.param(
+            b"Q\x00\x00\x00\x06!\x00",
+            dict(type=pq3.types.Query, len=6, payload=dict(query=b"!")),
+            b"",
+            id="Query",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x0B\x00\x01\x00\x00\x00\x01!",
+            dict(type=pq3.types.DataRow, len=11, payload=dict(columns=[b"!"])),
+            b"",
+            id="DataRow",
+        ),
+        pytest.param(
+            b"D\x00\x00\x00\x06\x00\x00extra",
+            dict(type=pq3.types.DataRow, len=6, payload=dict(columns=[])),
+            b"extra",
+            id="DataRow with extra data",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04",
+            dict(type=pq3.types.EmptyQueryResponse, len=4, payload=None),
+            b"",
+            id="EmptyQueryResponse",
+        ),
+        pytest.param(
+            b"I\x00\x00\x00\x04\xFF",
+            dict(type=b"I", len=4, payload=None),
+            b"\xFF",
+            id="EmptyQueryResponse with extra bytes",
+        ),
+        pytest.param(
+            b"X\x00\x00\x00\x04",
+            dict(type=pq3.types.Terminate, len=4, payload=None),
+            b"",
+            id="Terminate",
+        ),
+    ],
+)
+def test_Pq3_parse(raw, expected, extra):
+    with io.BytesIO(raw) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(type=b"*", len=5),
+            b"*\x00\x00\x00\x05",
+            id="type and len set explicitly",
+        ),
+        pytest.param(
+            dict(type=b"*"),
+            b"*\x00\x00\x00\x04",
+            id="implied len with no payload",
+        ),
+        pytest.param(
+            dict(type=b"*", payload=b"1234"),
+            b"*\x00\x00\x00\x081234",
+            id="implied len with payload",
+        ),
+        pytest.param(
+            dict(type=b"*", len=12, payload=b"1234"),
+            b"*\x00\x00\x00\x0C1234",
+            id="overridden len (payload underflow)",
+        ),
+        pytest.param(
+            dict(type=b"*", len=5, payload=b"1234"),
+            b"*\x00\x00\x00\x051234",
+            id="overridden len (payload overflow)",
+        ),
+        pytest.param(
+            dict(type=pq3.types.AuthnRequest, payload=dict(type=pq3.authn.OK)),
+            b"R\x00\x00\x00\x08\x00\x00\x00\x00",
+            id="implied len/type for AuthenticationOK",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(
+                    type=pq3.authn.SASL,
+                    body=[b"SCRAM-SHA-256-PLUS", b"SCRAM-SHA-256", b""],
+                ),
+            ),
+            b"R\x00\x00\x00\x2A\x00\x00\x00\x0ASCRAM-SHA-256-PLUS\x00SCRAM-SHA-256\x00\x00",
+            id="implied len/type for AuthenticationSASL",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLContinue, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0B12345",
+            id="implied len/type for AuthenticationSASLContinue",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.AuthnRequest,
+                payload=dict(type=pq3.authn.SASLFinal, body=b"12345"),
+            ),
+            b"R\x00\x00\x00\x0D\x00\x00\x00\x0C12345",
+            id="implied len/type for AuthenticationSASLFinal",
+        ),
+        pytest.param(
+            dict(
+                type=pq3.types.PasswordMessage,
+                payload=b"hunter2",
+            ),
+            b"p\x00\x00\x00\x0Bhunter2",
+            id="implied len/type for PasswordMessage",
+        ),
+        pytest.param(
+            dict(type=pq3.types.BackendKeyData, payload=dict(pid=1, key=7)),
+            b"K\x00\x00\x00\x0C\x00\x00\x00\x01\x00\x00\x00\x07",
+            id="implied len/type for BackendKeyData",
+        ),
+        pytest.param(
+            dict(type=pq3.types.CommandComplete, payload=dict(tag=b"SET")),
+            b"C\x00\x00\x00\x08SET\x00",
+            id="implied len/type for CommandComplete",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ErrorResponse, payload=dict(fields=[b"error", b""])),
+            b"E\x00\x00\x00\x0Berror\x00\x00",
+            id="implied len/type for ErrorResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ParameterStatus, payload=dict(name=b"a", value=b"b")),
+            b"S\x00\x00\x00\x08a\x00b\x00",
+            id="implied len/type for ParameterStatus",
+        ),
+        pytest.param(
+            dict(type=pq3.types.ReadyForQuery, payload=dict(status=b"I")),
+            b"Z\x00\x00\x00\x05I",
+            id="implied len/type for ReadyForQuery",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Query, payload=dict(query=b"SELECT 1;")),
+            b"Q\x00\x00\x00\x0eSELECT 1;\x00",
+            id="implied len/type for Query",
+        ),
+        pytest.param(
+            dict(type=pq3.types.DataRow, payload=dict(columns=[b"abcd"])),
+            b"D\x00\x00\x00\x0E\x00\x01\x00\x00\x00\x04abcd",
+            id="implied len/type for DataRow",
+        ),
+        pytest.param(
+            dict(type=pq3.types.EmptyQueryResponse),
+            b"I\x00\x00\x00\x04",
+            id="implied len for EmptyQueryResponse",
+        ),
+        pytest.param(
+            dict(type=pq3.types.Terminate),
+            b"X\x00\x00\x00\x04",
+            id="implied len for Terminate",
+        ),
+    ],
+)
+def test_Pq3_build(fields, expected):
+    actual = pq3.Pq3.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,extra",
+    [
+        pytest.param(
+            b"\x00\x00",
+            dict(columns=[]),
+            b"",
+            id="no columns",
+        ),
+        pytest.param(
+            b"\x00\x01\x00\x00\x00\x04abcd",
+            dict(columns=[b"abcd"]),
+            b"",
+            id="one column",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x04abcd\x00\x00\x00\x01x",
+            dict(columns=[b"abcd", b"x"]),
+            b"",
+            id="multiple columns",
+        ),
+        pytest.param(
+            b"\x00\x02\x00\x00\x00\x00\x00\x00\x00\x01x",
+            dict(columns=[b"", b"x"]),
+            b"",
+            id="empty column value",
+        ),
+        pytest.param(
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            dict(columns=[None, None]),
+            b"",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_parse(raw, expected, extra):
+    pkt = b"D" + struct.pack("!i", len(raw) + 4) + raw
+    with io.BytesIO(pkt) as stream:
+        actual = pq3.Pq3.parse_stream(stream)
+
+        assert actual.type == pq3.types.DataRow
+        assert actual.payload == expected
+        assert stream.read() == extra
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(),
+            b"\x00\x00",
+            id="no columns",
+        ),
+        pytest.param(
+            dict(columns=[None, None]),
+            b"\x00\x02\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF",
+            id="null columns",
+        ),
+    ],
+)
+def test_DataRow_build(fields, expected):
+    actual = pq3.Pq3.build(dict(type=pq3.types.DataRow, payload=fields))
+
+    expected = b"D" + struct.pack("!i", len(expected) + 4) + expected
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "raw,expected,exception",
+    [
+        pytest.param(
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            dict(name=b"EXTERNAL", len=-1, data=None),
+            None,
+            id="no initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02me",
+            dict(name=b"EXTERNAL", len=2, data=b"me"),
+            None,
+            id="initial response",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\x02meextra",
+            None,
+            TerminatedError,
+            id="extra data",
+        ),
+        pytest.param(
+            b"EXTERNAL\x00\x00\x00\x00\xFFme",
+            None,
+            StreamError,
+            id="underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_parse(raw, expected, exception):
+    ctx = contextlib.nullcontext()
+    if exception:
+        ctx = pytest.raises(exception)
+
+    with ctx:
+        actual = pq3.SASLInitialResponse.parse(raw)
+        assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "fields,expected",
+    [
+        pytest.param(
+            dict(name=b"EXTERNAL"),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=None),
+            b"EXTERNAL\x00\xFF\xFF\xFF\xFF",
+            id="no initial response (explicit None)",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b""),
+            b"EXTERNAL\x00\x00\x00\x00\x00",
+            id="empty response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme@example.com",
+            id="initial response",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=2, data=b"me@example.com"),
+            b"EXTERNAL\x00\x00\x00\x00\x02me@example.com",
+            id="data overflow",
+        ),
+        pytest.param(
+            dict(name=b"EXTERNAL", len=14, data=b"me"),
+            b"EXTERNAL\x00\x00\x00\x00\x0Eme",
+            id="data underflow",
+        ),
+    ],
+)
+def test_SASLInitialResponse_build(fields, expected):
+    actual = pq3.SASLInitialResponse.build(fields)
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "version,expected_bytes",
+    [
+        pytest.param((3, 0), b"\x00\x03\x00\x00", id="version 3"),
+        pytest.param((1234, 5679), b"\x04\xd2\x16\x2f", id="SSLRequest"),
+    ],
+)
+def test_protocol(version, expected_bytes):
+    # Make sure the integer returned by protocol is correctly serialized on the
+    # wire.
+    assert struct.pack("!i", pq3.protocol(*version)) == expected_bytes
+
+
+@pytest.mark.parametrize(
+    "envvar,func,expected",
+    [
+        ("PGHOST", pq3.pghost, "localhost"),
+        ("PGPORT", pq3.pgport, 5432),
+        (
+            "PGUSER",
+            pq3.pguser,
+            os.getlogin() if platform.system() == "Windows" else getpass.getuser(),
+        ),
+        ("PGDATABASE", pq3.pgdatabase, "postgres"),
+    ],
+)
+def test_env_defaults(monkeypatch, envvar, func, expected):
+    monkeypatch.delenv(envvar, raising=False)
+
+    actual = func()
+    assert actual == expected
+
+
+@pytest.mark.parametrize(
+    "envvars,func,expected",
+    [
+        (dict(PGHOST="otherhost"), pq3.pghost, "otherhost"),
+        (dict(PGPORT="6789"), pq3.pgport, 6789),
+        (dict(PGUSER="postgres"), pq3.pguser, "postgres"),
+        (dict(PGDATABASE="template1"), pq3.pgdatabase, "template1"),
+    ],
+)
+def test_env(monkeypatch, envvars, func, expected):
+    for k, v in envvars.items():
+        monkeypatch.setenv(k, v)
+
+    actual = func()
+    assert actual == expected
diff --git a/src/test/python/tls.py b/src/test/python/tls.py
new file mode 100644
index 00000000000..075c02c1ca6
--- /dev/null
+++ b/src/test/python/tls.py
@@ -0,0 +1,195 @@
+#
+# Copyright 2021 VMware, Inc.
+# SPDX-License-Identifier: PostgreSQL
+#
+
+from construct import *
+
+#
+# TLS 1.3
+#
+# Most of the types below are transcribed from RFC 8446:
+#
+#     https://tools.ietf.org/html/rfc8446
+#
+
+
+def _Vector(size_field, element):
+    return Prefixed(size_field, GreedyRange(element))
+
+
+# Alerts
+
+AlertLevel = Enum(
+    Byte,
+    warning=1,
+    fatal=2,
+)
+
+AlertDescription = Enum(
+    Byte,
+    close_notify=0,
+    unexpected_message=10,
+    bad_record_mac=20,
+    decryption_failed_RESERVED=21,
+    record_overflow=22,
+    decompression_failure=30,
+    handshake_failure=40,
+    no_certificate_RESERVED=41,
+    bad_certificate=42,
+    unsupported_certificate=43,
+    certificate_revoked=44,
+    certificate_expired=45,
+    certificate_unknown=46,
+    illegal_parameter=47,
+    unknown_ca=48,
+    access_denied=49,
+    decode_error=50,
+    decrypt_error=51,
+    export_restriction_RESERVED=60,
+    protocol_version=70,
+    insufficient_security=71,
+    internal_error=80,
+    user_canceled=90,
+    no_renegotiation=100,
+    unsupported_extension=110,
+)
+
+Alert = Struct(
+    "level" / AlertLevel,
+    "description" / AlertDescription,
+)
+
+
+# Extensions
+
+ExtensionType = Enum(
+    Int16ub,
+    server_name=0,
+    max_fragment_length=1,
+    status_request=5,
+    supported_groups=10,
+    signature_algorithms=13,
+    use_srtp=14,
+    heartbeat=15,
+    application_layer_protocol_negotiation=16,
+    signed_certificate_timestamp=18,
+    client_certificate_type=19,
+    server_certificate_type=20,
+    padding=21,
+    pre_shared_key=41,
+    early_data=42,
+    supported_versions=43,
+    cookie=44,
+    psk_key_exchange_modes=45,
+    certificate_authorities=47,
+    oid_filters=48,
+    post_handshake_auth=49,
+    signature_algorithms_cert=50,
+    key_share=51,
+)
+
+Extension = Struct(
+    "extension_type" / ExtensionType,
+    "extension_data" / Prefixed(Int16ub, GreedyBytes),
+)
+
+
+# ClientHello
+
+
+class _CipherSuiteAdapter(Adapter):
+    class _hextuple(tuple):
+        def __repr__(self):
+            return f"(0x{self[0]:02X}, 0x{self[1]:02X})"
+
+    def _encode(self, obj, context, path):
+        return bytes(obj)
+
+    def _decode(self, obj, context, path):
+        assert len(obj) == 2
+        return self._hextuple(obj)
+
+
+ProtocolVersion = Hex(Int16ub)
+
+Random = Hex(Bytes(32))
+
+CipherSuite = _CipherSuiteAdapter(Byte[2])
+
+ClientHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suites" / _Vector(Int16ub, CipherSuite),
+    "legacy_compression_methods" / Prefixed(Byte, GreedyBytes),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# ServerHello
+
+ServerHello = Struct(
+    "legacy_version" / ProtocolVersion,
+    "random" / Random,
+    "legacy_session_id_echo" / Prefixed(Byte, Hex(GreedyBytes)),
+    "cipher_suite" / CipherSuite,
+    "legacy_compression_method" / Hex(Byte),
+    "extensions" / _Vector(Int16ub, Extension),
+)
+
+# Handshake
+
+HandshakeType = Enum(
+    Byte,
+    client_hello=1,
+    server_hello=2,
+    new_session_ticket=4,
+    end_of_early_data=5,
+    encrypted_extensions=8,
+    certificate=11,
+    certificate_request=13,
+    certificate_verify=15,
+    finished=20,
+    key_update=24,
+    message_hash=254,
+)
+
+Handshake = Struct(
+    "msg_type" / HandshakeType,
+    "length" / Int24ub,
+    "payload"
+    / Switch(
+        this.msg_type,
+        {
+            HandshakeType.client_hello: ClientHello,
+            HandshakeType.server_hello: ServerHello,
+            # HandshakeType.end_of_early_data: EndOfEarlyData,
+            # HandshakeType.encrypted_extensions: EncryptedExtensions,
+            # HandshakeType.certificate_request: CertificateRequest,
+            # HandshakeType.certificate: Certificate,
+            # HandshakeType.certificate_verify: CertificateVerify,
+            # HandshakeType.finished: Finished,
+            # HandshakeType.new_session_ticket: NewSessionTicket,
+            # HandshakeType.key_update: KeyUpdate,
+        },
+        default=FixedSized(this.length, GreedyBytes),
+    ),
+)
+
+# Records
+
+ContentType = Enum(
+    Byte,
+    invalid=0,
+    change_cipher_spec=20,
+    alert=21,
+    handshake=22,
+    application_data=23,
+)
+
+Plaintext = Struct(
+    "type" / ContentType,
+    "legacy_record_version" / ProtocolVersion,
+    "length" / Int16ub,
+    "fragment" / FixedSized(this.length, GreedyBytes),
+)
diff --git a/src/tools/make_venv b/src/tools/make_venv
new file mode 100755
index 00000000000..804307ee120
--- /dev/null
+++ b/src/tools/make_venv
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+
+import argparse
+import subprocess
+import os
+import platform
+import sys
+
+parser = argparse.ArgumentParser()
+
+parser.add_argument('--requirements', help='path to pip requirements file', type=str)
+parser.add_argument('--privatedir', help='private directory for target', type=str)
+parser.add_argument('venv_path', help='desired venv location')
+
+args = parser.parse_args()
+
+# Decide whether or not to capture stdout into a log file. We only do this if
+# we've been given our own private directory.
+#
+# FIXME Unfortunately this interferes with debugging on Cirrus, because the
+# private directory isn't uploaded in the sanity check's artifacts. When we
+# don't capture the log file, it gets spammed to stdout during build... Is there
+# a way to push this into the meson-log somehow? For now, the capture
+# implementation is commented out.
+logfile = None
+
+if args.privatedir:
+    if not os.path.isdir(args.privatedir):
+        os.mkdir(args.privatedir)
+
+    # FIXME see above comment
+    # logpath = os.path.join(args.privatedir, 'stdout.txt')
+    # logfile = open(logpath, 'w')
+
+def run(*args):
+    kwargs = dict(check=True)
+    if logfile:
+        kwargs.update(stdout=logfile)
+
+    subprocess.run(args, **kwargs)
+
+# Create the virtualenv first.
+run(sys.executable, '-m', 'venv', args.venv_path)
+
+# Update pip next. This helps avoid old pip bugs; the version inside system
+# Pythons tends to be pretty out of date.
+bindir = 'Scripts' if platform.system() == 'Windows' else 'bin'
+python = os.path.join(args.venv_path, bindir, 'python3')
+run(python, '-m', 'pip', 'install', '-U', 'pip')
+
+# Finally, install the test's requirements. We need pytest and pytest-tap, no
+# matter what the test needs.
+pip = os.path.join(args.venv_path, bindir, 'pip')
+run(pip, 'install', 'pytest', 'pytest-tap')
+if args.requirements:
+    run(pip, 'install', '-r', args.requirements)
diff --git a/src/tools/testwrap b/src/tools/testwrap
index 8ae8fb79ba7..ffdf760d79a 100755
--- a/src/tools/testwrap
+++ b/src/tools/testwrap
@@ -14,6 +14,7 @@ parser.add_argument('--testgroup', help='test group', type=str)
 parser.add_argument('--testname', help='test name', type=str)
 parser.add_argument('--skip', help='skip test (with reason)', type=str)
 parser.add_argument('--pg-test-extra', help='extra tests', type=str)
+parser.add_argument('--skip-without-extra', help='skip if PG_TEST_EXTRA is missing this arg', type=str)
 parser.add_argument('test_command', nargs='*')
 
 args = parser.parse_args()
@@ -29,6 +30,12 @@ if args.skip is not None:
     print('1..0 # Skipped: ' + args.skip)
     sys.exit(0)
 
+if args.skip_without_extra is not None:
+    extras = os.environ.get("PG_TEST_EXTRA", args.pg_test_extra)
+    if extras is None or args.skip_without_extra not in extras.split():
+        print(f'1..0 # Skipped: PG_TEST_EXTRA does not contain "{args.skip_without_extra}"')
+        sys.exit(0)
+
 if os.path.exists(testdir) and os.path.isdir(testdir):
     shutil.rmtree(testdir)
 os.makedirs(testdir)
-- 
2.34.1
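(Aside for readers following the pq3 DataRow test cases in the patch above: the expected byte strings can be reproduced by hand with nothing but the standard library. This is an independent sketch — the build_data_row helper is hypothetical, not part of the patch.)

```python
import struct

def build_data_row(columns):
    """Build a v3-protocol DataRow ('D') message by hand.

    Mirrors the wire layout exercised by the test_DataRow_* cases in the
    patch: int16 column count, then per column an int32 length (or -1 for
    NULL) followed by the raw value bytes.
    """
    body = struct.pack("!h", len(columns))
    for col in columns:
        if col is None:
            body += struct.pack("!i", -1)  # NULL column: length -1, no payload
        else:
            body += struct.pack("!i", len(col)) + col
    # Type byte, then an int32 message length that counts itself but not the type.
    return b"D" + struct.pack("!i", len(body) + 4) + body

# Matches the "one column" and "null columns" expectations in the tests.
assert build_data_row([b"abcd"]) == b"D\x00\x00\x00\x0e\x00\x01\x00\x00\x00\x04abcd"
assert build_data_row([None, None]) == b"D\x00\x00\x00\x0e\x00\x02" + b"\xff" * 8
```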

#202Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#201)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 08.02.25 02:56, Jacob Champion wrote:

On Fri, Feb 7, 2025 at 12:12 PM Daniel Gustafsson<daniel@yesql.se> wrote:

Is it really enough to do this at build time? A very small percentage of users
running this will also be building their own libpq so the warning is lost on
them. That being said, I'm not entirely sure what else we could do (bleeping a
warning every time is clearly not user-friendly) so maybe this is a TODO in the
code?

I've added a TODO back. At the moment, I don't have any good ideas; if
the user isn't building libpq, they're not going to be able to take
action on the warning anyway, and for many use cases they're probably
not going to care.

This just depends on how people have built their libcurl, right?

Do we have any information whether the async-dns-free build is a common
configuration?

#203Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#201)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 08.02.25 02:56, Jacob Champion wrote:

+ oauth_json_set_error(ctx, /* don't bother translating */
With the project style format for translator comments this should be:

+ /* translator: xxx */
+ oauth_json_set_error(ctx,

This comment was just meant to draw attention to the lack of
libpq_gettext(). Does it still need a translator note if we don't run
it through translation?

No, that wouldn't have any effect.

I think you can just remove that comment. It's pretty established that
internal errors don't need translation, so it would be understood from
looking at the code.

#204Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#203)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Feb 12, 2025 at 6:55 AM Peter Eisentraut <peter@eisentraut.org> wrote:

This just depends on how people have built their libcurl, right?

Do we have any information whether the async-dns-free build is a common
configuration?

I don't think the annual Curl survey covers that, unfortunately.

On Wed, Feb 12, 2025 at 6:59 AM Peter Eisentraut <peter@eisentraut.org> wrote:

I think you can just remove that comment. It's pretty established that
internal errors don't need translation, so it would be understood from
looking at the code.

Okay, will do.

Thanks,
--Jacob

#205Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#201)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 8 Feb 2025, at 02:56, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

Thanks for the new version!

- 0004 gets a missed pgperltidy and explicitly skips unsupported tests
on Windows.

+if ($Config{osname} eq 'MSWin32')
We can get away with not importing Config at all since Test::Utils already
exports a symbol for this in $windows_os. Fixed in my attached.

Daniel and I talked at FOSDEM about wanting to have additional
guardrails on the server-side validator API. Ideally, we'd wait for
major version boundaries to change APIs, as per usual. But if any bugs
come to light that affect the security of the system, we may want to
have more control over the boundary between the server and the
validator. So I've added two features to the API.

I think what you added there is quite sufficient for handling the worst case
that ideally should never happen. Even though I can't see us breaking this
given the code being trivial, I also don't want to realize only after the fact,
when we need it, that it was subtly broken, so I added a test validator which
uses the wrong ABI version.

I have now read the entire patch cover-to-cover twice to try and catch any
rough or sharp edges. Unsurprisingly given the number of revisions this patch
has gone through, and the number of hours that have been put into it, there
isn't much to be found. Most of my findings below are well and truly in the
nit-pickery category (and my favourite, paranoia-induced defensive
programming). There are no architectural flaws that I can detect, and
cross-referencing with the RFCs I don't see anything mixed up in spec compliance.

To make it easier for you to see what I mean I have implemented most of the
comments and attached as a fixup patch, from which you can cherry-pick hunks
you agree with. Those I didn't implement should be marked as such below.

As we discussed off-list I took the liberty of squashing the previous fixup
patches into a single one, and squashed your fixes for my comments against v47
into 0001. All of my proposals are in 0004.

Some comments:

+ The system which hosts the protected resources which are
The repetition of "which .. which" reads a bit off to me, I propose to
simplify as "The system hosting the.." instead.

+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
Since we define terminology here, shouldn't this be "OAuth resource servers"?
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it's obtained from the OAuth provider.
The "obtained from" part makes it sound like you need to get some server
software to run this with PostgreSQL.  How about "; it is the responsibility
of.."?
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
While not wrong in any way, I think it would be clearer to write "formatting"
here, since that's really what we are talking about, no?
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Building with this will check for the required header files
I don't think we need to document that curl need to be installed since it's
likely already on the system of anyone reading this.  I do however think we
should state the minimum required version.

+ setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
Nitpickery: I prefer to have the period outside the <link> markup.

+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
Since mix-up attacks aren't very well documented I think we should aid the
readers by linking to the OAuth WG announcement of this class of attacks.  From
there readers can find the original paper but I think linking directly to that
is less helpful than the mailing list post.

+ running a client via SSH. Client applications may implement their own flows
For consistency with the rest of the docs we should wrap SSH with an
<application> tag.

+ You will then log into your OAuth provider, which will ask whether you want
I think third person here reads more like the rest of the docs.

+ which <application>libpq</application> will call when when action is
"when when", I think this should really be "when an"?

+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
This might not be readily understandable by non-native speakers.

+ Similarly, if you are using Curl inside your application,
Should use <productname> markup.

+ more recent versions of Curl that are built to support threadsafe
s/threadsafe/thread-safe/g for documentation consistency (ditto in other places
where used as an adjective and not a code identifier).

+ itself; validator modules provide the glue between the server and the OAuth
s/glue/integration layer/ to avoid confusing readers not used to English idioms.

+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
"Don't make any bugs" isn't very helpful advice =) Expanded on it slightly.

+ An OAuth validator module is loaded by dynamically loading one of the shared
The double use of load in "loaded .. loading", rewording to try and simplify.

+ The server has ensured that the token is well-formed syntactically, but no
"server" is an overloaded nomenclature here, perhaps using libpq instead to
clearly indicate that it's postgres and not an OAuth server.

+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
In other places we use the structure name as well and not just the member,
adding that here to be consistent.

+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
Off-by-one

+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
Replacing all tools.ietf.org mentions with datatracker.ietf.org to save a few
redirects.

+ p++;
+ if (*p != ',')
In the SASL exchange, are we certain that a rogue client cannot inject a
message which trips us past the end of the string? Should we double-check when
advancing p across the message?
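To sketch the kind of guard being asked about (in Python rather than the backend's C, with a hypothetical next_field helper standing in for the pointer arithmetic):

```python
def next_field(msg: bytes, pos: int):
    """Return the next comma-delimited field and the cursor just past it,
    refusing to read beyond the end of the buffer. A rogue client that
    truncates the message trips the checks instead of walking us off the end.
    """
    if pos >= len(msg):
        raise ValueError("client message truncated")
    end = msg.find(b",", pos)
    if end < 0:
        raise ValueError("expected ',' before end of message")
    return msg[pos:end], end + 1

# An OAUTHBEARER-style "n,," GS2 header: two fields, cursor ends at offset 3.
field, pos = next_field(b"n,,", 0)
assert (field, pos) == (b"n", 2)
field, pos = next_field(b"n,,", pos)
assert (field, pos) == (b"", 3)
```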

+sanitize_char(char c)
+{
+ static char buf[5];
With the multithreading work on the horizon we should probably avoid static
variables like these to not create work for our future selves? The code isn't
as neat when passing in a buffer/length but it avoids the need for a static or
thread-local variable. Or am I overthinking this?

+       initStringInfo(&issuer);
+       appendStringInfoString(&issuer, ctx->issuer);
The double StringInfoData variables in generate_error_response to be able to
JSON-escape the issuer are a bit of an eyesore; a version of
escape_json_with_len which also takes an offset into the buffer for where to
start, maybe? Nothing that is urgent to address now (and I have not changed
anything here), but I'll keep it at the back of my head.

+ ereport(LOG, errmsg("internal error in OAuth validator module"));
I wonder if this should be using WARNING instead? It's really something that
should set off alarm bells. I've also added an errcode for easier
fleet analysis.

In load_validator_library we don't explicitly verify that the required callback
is defined in the returned structure, which seems like a cheap enough
belt-and-suspenders check.

+       if (parsed < 1)
+               return actx->debugging ? 0 : 1;
Is 1 second a sane lower bound on interval for all situations?  I'm starting to
wonder if we should be more conservative here, or even make it configurable in
some way? The default of 5 seconds, when nothing is set, is quite a lot higher than 1.

+ if (INT_MAX <= parsed)
I think it's closer to project style to keep the variable on the left side
in such comparisons, so changed these.

+       parsed = parse_json_number(expires_in_str);
+       parsed = round(parsed);
Shouldn't we floor() the value here to ensure we never report an expiration
time longer than the actual expiration?
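A quick standalone illustration of the difference (not code from the patch): round() can report a lifetime longer than the token actually has, while floor() errs on the safe side.

```python
import math

expires_in = 4.6  # hypothetical fractional value from a token endpoint

assert round(expires_in) == 5        # overstates the remaining lifetime
assert math.floor(expires_in) == 4   # never later than the real expiration

# floor() is conservative for any non-negative value:
for value in (0.0, 0.4, 4.4, 4.5, 4.6, 5.0):
    assert math.floor(value) <= value
```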

+ * Some services (Google, Azure) spell verification_uri differently.
I did another round of documentation reading and couldn't find any provider
which also uses "verification_url_complete". However, since it looks so similar
to verification_url it seems worthwhile to add a comment to save readers from
the same rabbit hole.

register_socket() doesn't have an error catch for the case when neither epoll
nor kqueue is supported. Shouldn't it set actx_error() here as well? (Not done
in my review patch.)

+       if (actx->curl_err[0])
+       {
+               size_t          len;
+
+               appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
Should this also qualify that the error comes from outside of postgres?
Something like "(libcurl:%s)" to match?  I haven't changed this in the attached
since I'm still on the fence, but I'm leaning that we probably should.
Thoughts?
-   * We only support one mechanism at the moment, so rather than deal with a
+   * We only support two mechanisms at the moment, so rather than deal with a
While there's nothing incorrect about this comment, I have a feeling we won't
support more mechanisms than we can justify having a simple array for anytime
soon =)

Sorry for the wall of text.

In general, I feel that this is getting very close to its final form wrt being
a committable patch, and assuming we don't find anything structurally unsound
in the coming days I don't see a blocker for getting this into v18 before the
final commitfest. If anyone disagrees with this I'd love for that to be brought
up.

--
Daniel Gustafsson

Attachments:

v49-0001-Add-OAUTHBEARER-SASL-mechanism.patchapplication/octet-stream; name=v49-0001-Add-OAUTHBEARER-SASL-mechanism.patch; x-unix-mode=0644Download
From 187890fbc3dadf0fed125dce7d164e245c3bb7c7 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v49 1/4] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   65 +
 configure                                     |  332 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  406 +++
 doc/src/sgml/oauth-validators.sgml            |  402 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |  100 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  864 +++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |   54 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2858 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1153 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   85 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   42 +
 src/test/modules/oauth_validator/meson.build  |   69 +
 .../oauth_validator/oauth_hook_client.c       |  293 ++
 .../modules/oauth_validator/t/001_server.pl   |  566 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  135 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 59 files changed, 9007 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fffa438cec1..2f5f5ef21a8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -329,6 +329,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -422,8 +423,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -799,8 +802,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..ead427046f5 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,68 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is threadsafe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb4367..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,176 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
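For reviewers wanting to exercise the new configure.ac switch, a minimal build sketch follows (flags per the hunk above; --with-python per the test warning, and PG_TEST_EXTRA=oauth per the .cirrus.tasks.yml change):

```
./configure --with-libcurl --with-python
make
PG_TEST_EXTRA=oauth make -C src/test/modules/oauth_validator check
```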
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..f84085dbac4 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system which hosts the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; one must be obtained from the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, expressed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of OAuth 2.0 access token consisting of an opaque string.
+    The format of the access token is implementation-specific and is chosen by
+    each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or format are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between identities issued by the OAuth provider and
+        database user names.  See <xref linkend="auth-username-maps"/>.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
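As an aside for reviewers: the issuer/discovery-URL rules documented in the hunk above can be sketched in a few lines. This is an illustration of the documented behavior only, not libpq's actual implementation:

```python
def discovery_url(issuer: str) -> str:
    """Derive the discovery-document URL from an HBA issuer setting.

    Mirrors the documented rules: an issuer containing a /.well-known/
    path segment is used as-is; otherwise the OpenID Connect Discovery
    path is appended to the issuer identifier (trailing slashes trimmed
    here for simplicity).
    """
    if "/.well-known/" in issuer:
        return issuer
    return issuer.rstrip("/") + "/.well-known/openid-configuration"


# prints "https://oauth.example.org/.well-known/openid-configuration"
print(discovery_url("https://oauth.example.org"))
```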
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 5e4f201e099..6591a54124c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library or libraries to use for validating OAuth bearer tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented or obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
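To make the interaction between this GUC and the HBA options concrete, here is a minimal configuration sketch (the module name and issuer are hypothetical; no validator ships with the patch):

```
# postgresql.conf: one library listed, so HBA entries may omit "validator"
oauth_validator_libraries = 'my_validator'

# pg_hba.conf: issuer and scope are required oauth options
host all all 0.0.0.0/0 oauth issuer="https://oauth.example.org" scope="openid"
```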
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..96e433179b9 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         This requires the <productname>curl</productname> package to be
+         installed.  Configuring with this option checks for the required header
+         files and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        This requires the <productname>curl</productname> package to be
+        installed.  Configuring with this option checks for the required header
+        files and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding.  The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
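The equivalent meson usage sketch (build directory name illustrative):

```
meson setup build -Dlibcurl=enabled
ninja -C build
```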
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index c49e975b082..a51355e238f 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,106 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for the URL of a <emphasis>discovery
+        document</emphasis>: a resource providing a set of OAuth
+        configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of "mix-up
+        attacks" on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
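Taken together, a connection using the builtin flow might look like the following. (This is an illustrative fragment, not part of the patch; the hostname, issuer URL, and client identifier are hypothetical placeholders.)

```shell
# Hypothetical example: the oauth_issuer and oauth_client_id values must
# match the settings registered with your OAuth provider.
psql 'host=db.example.com dbname=app oauth_issuer=https://issuer.example.com oauth_client_id=f0000000-0000'
```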
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10129,291 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements the client side of OAuth 2.0
+   connection authentication. A builtin Device Authorization flow
+   (<ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>)
+   is provided when <application>libpq</application> is built with libcurl
+   support, and applications may customize or replace the flow using the
+   hooks described below.
+  </para>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
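For review purposes, the delegation pattern described above can be sketched as follows. This is a self-contained illustration, not part of the patch: the typedefs are stand-ins mirroring the synopses in this section (real applications get them from `libpq-fe.h`), and the refusal logic is hypothetical.

```c
#include <stddef.h>

/* Stand-in declarations mirroring the synopses above (sketch only). */
typedef struct PGconn PGconn;
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE,
	PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;
typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);

static PQauthDataHook_type prev_hook;	/* saved via PQgetAuthDataHook() */

/*
 * A cooperative hook: handle only the authdata types we care about, and
 * delegate everything else to the previously installed hook in the chain.
 */
static int
my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
{
	if (type == PQAUTHDATA_OAUTH_BEARER_TOKEN)
		return -1;				/* hypothetical policy: refuse token flows */

	return prev_hook(type, conn, data); /* let the chain handle the rest */
}
```

At startup, before opening connections, the application would run `prev_hook = PQgetAuthDataHook(); PQsetAuthDataHook(my_auth_data_hook);` to splice itself into the chain.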
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the code will be
+         manually confirmed by the provider, and the URL lets users continue
+         even if they can't use the non-textual method. For more information,
+         see section 3.3.1 in
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">RFC 8628</ulink>.
+        </para>
+       </listitem>
+      </varlistentry>
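As an illustration, a replacement prompt for this authdata type might look like the sketch below. The typedefs are self-contained stand-ins mirroring the synopsis above (a real application includes `libpq-fe.h`), and the output format is just one possible choice.

```c
#include <stdio.h>

/* Stand-in declarations mirroring the synopses in this section (sketch only). */
typedef struct PGconn PGconn;
typedef enum
{
	PQAUTHDATA_PROMPT_OAUTH_DEVICE
} PGauthData;
typedef struct _PGpromptOAuthDevice
{
	const char *verification_uri;	/* verification URI to visit */
	const char *user_code;			/* user code to enter */
	const char *verification_uri_complete;	/* combined URI, or NULL */
	int			expires_in;			/* seconds until user code expires */
} PGpromptOAuthDevice;

/* Custom prompt: like the default, but also shows the combined URI if any. */
static int
prompt_device_flow(PGauthData type, PGconn *conn, void *data)
{
	const PGpromptOAuthDevice *prompt = data;

	(void) conn;
	if (type != PQAUTHDATA_PROMPT_OAUTH_DEVICE)
		return 0;				/* not ours; a real hook would delegate */

	printf("Visit %s and enter the code: %s\n",
		   prompt->verification_uri, prompt->user_code);
	if (prompt->verification_uri_complete)
		printf("(or visit %s directly)\n", prompt->verification_uri_complete);

	return 1;					/* prompt handled successfully */
}
```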
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         The <replaceable>async</replaceable> callback will be called to begin
+         the flow immediately upon return from the hook. When the callback
+         cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       sprays HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10486,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using Curl inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of Curl that are built to support threadsafe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
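For an application on one of those older Curl builds, the handler registered with <function>PQregisterThreadLock</function> might follow this shape. This is an illustrative sketch only; the registration and `curl_global_init()` calls are shown in comments so the fragment stands alone.

```c
#include <pthread.h>

static pthread_mutex_t pq_app_lock = PTHREAD_MUTEX_INITIALIZER;

/* Handler with the pgthreadlock_t signature: nonzero acquires, zero releases. */
static void
app_thread_lock(int acquire)
{
	if (acquire)
		pthread_mutex_lock(&pq_app_lock);
	else
		pthread_mutex_unlock(&pq_app_lock);
}

/*
 * At startup, before spawning threads:
 *     PQregisterThreadLock(app_thread_lock);
 *
 * Any application code path that may initialize libcurl then takes the
 * same lock:
 *     app_thread_lock(1);
 *     curl_global_init(CURL_GLOBAL_ALL);
 *     app_thread_lock(0);
 */
```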
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..d0bca9196d9
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,402 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the glue between the server and the OAuth
+  provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    A validator implementation is responsible for the following tasks:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct pg_ident maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
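To make the delegation mode concrete, here is a hypothetical <filename>pg_hba.conf</filename> entry (issuer, scope, and network values are examples only) in which the validator module makes the authorization decision itself and no usermap is consulted:

```
# TYPE  DATABASE  USER  ADDRESS     METHOD
host    all       all   0.0.0.0/0   oauth  issuer="https://oauth.example.org" scope="openid postgres" delegate_ident_mapping=1
```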
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   An OAuth validator module is loaded by dynamically loading one of the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   The normal library search path is used to locate the library. To provide
+   the validator callbacks, and to indicate that the library is an OAuth
+   validator module, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
+   the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the
+   others are optional.
+  </para>
+ </sect1>
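A minimal sketch of a module entry point following the description above. The typedefs here are simplified stand-ins for the real declarations in the server's headers; a real module would include those headers (and be built as a shared library) instead of redefining them. Only `validate_cb` is set, since the other callbacks are optional:

```c
/*
 * Sketch of an OAuth validator module's entry point. The typedefs are
 * simplified stand-ins for the server's declarations (assumed to live in
 * libpq/oauth.h); a real module includes the server headers instead.
 */
#include <stddef.h>

typedef struct ValidatorModuleState { void *private_data; } ValidatorModuleState;
typedef struct ValidatorModuleResult { int authorized; char *authn_id; } ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state,
                                                       const char *token,
                                                       const char *role);

typedef struct OAuthValidatorCallbacks
{
    ValidatorStartupCB  startup_cb;
    ValidatorShutdownCB shutdown_cb;
    ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

static ValidatorModuleResult *stub_validate(ValidatorModuleState *state,
                                            const char *token, const char *role);

/*
 * Server lifetime is satisfied by a static const struct in global scope,
 * as the text above recommends.
 */
static const OAuthValidatorCallbacks callbacks = {
    .validate_cb = stub_validate,   /* required */
    /* startup_cb and shutdown_cb are optional and left NULL here */
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &callbacks;
}

static ValidatorModuleResult *
stub_validate(ValidatorModuleState *state, const char *token, const char *role)
{
    /* A real module validates the token here; NULL signals internal error. */
    return NULL;
}
```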
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is well-formed syntactically, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    return a palloc'd <structname>ValidatorModuleResult</structname> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
+    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    The caller assumes ownership of the returned memory allocation; the
+    validator module must not access the memory in any way after it has been
+    returned.  A validator may instead return NULL to signal an internal
+    error.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
+    server will not perform any checks on the value of
+    <structfield>authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The steps below illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its built-in flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
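Concretely, a non-empty client initial response is the GS2 header <literal>n,,</literal>, a kvsep byte, the <literal>auth</literal> kvpair, and a final kvsep. A small sketch of assembling one (the token value is a made-up example):

```c
/*
 * Builds an OAUTHBEARER client initial response: the "n,," GS2 header,
 * a kvsep (0x01), the auth key/value pair carrying the bearer token, and
 * the terminating kvsep. Returns the number of bytes written.
 */
#include <stdio.h>

#define KVSEP "\x01"

static int
build_initial_response(char *buf, size_t len, const char *token)
{
    return snprintf(buf, len, "n,," KVSEP "auth=Bearer %s" KVSEP KVSEP, token);
}
```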
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an error
+      <literal>status</literal> alongside a well-known URI and scopes that the
+      client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message consisting of a single kvsep
+      byte (<literal>0x01</literal>) to finish its half of the discovery
+      exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the token provider's
+      instructions. If the client is authorized to connect, the server sends
+      an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 7dd7110318d..5c35159b4f1 100644
--- a/meson.build
+++ b/meson.build
@@ -855,6 +855,101 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports threadsafe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is threadsafe')
+      elif r.returncode() == 1
+        message('curl_global_init is not threadsafe')
+      else
+        message('curl_global_init failed; assuming not threadsafe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3045,6 +3140,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3721,6 +3820,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..aa16977c643
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,864 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(void *arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message. (In
+	 * practice such configurations are rejected during HBA parsing.)
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a syntactically valid token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here indicates a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = ValidatorCallbacks->validate_cb(validator_module_state,
+										  token, port->user_name);
+	if (ret == NULL)
+	{
+		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
+
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	/*
+	 * Register a callback so the library is shut down before its state is
+	 * cleaned up.
+	 */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked during memory context reset.
+ */
+static void
+shutdown_validator_library(void *arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	char	   *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
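Putting the HBA options together, an `oauth` line might look like this (hypothetical address, issuer, and validator name; the validator must be listed in `oauth_validator_libraries`):

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samenet   oauth issuer="https://oauth.example.org" scope="openid" validator="my_validator"
```

Adding `delegate_ident_mapping=1` skips the usermap check entirely, making the validator's authorization decision final; combining it with `map=` is rejected at parse time.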
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 226af43fe23..68833ca5fa3 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4852,6 +4853,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index d472987ed46..ccefd214143 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..8fe56267780
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..4fcdda74305
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,54 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	bool		authorized;
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+
+typedef struct OAuthValidatorCallbacks
+{
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..c04ee38d086 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..2179bb89800
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2858 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+	int			timerfd;		/* descriptor for signaling async timeouts */
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive across multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * In general, none of the error cases below should ever happen if we have
+	 * no bugs above. But if we do hit them, surfacing those errors somehow
+	 * might be the only way to have a chance to debug them.
+	 *
+	 * TODO: At some point it'd be nice to have a standard way to warn about
+	 * teardown failures. Appending to the connection's error message only
+	 * helps if the bug caused a connection failure; otherwise it'll be
+	 * buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 */
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field \"%s\" before field \"%s\" was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field \"%s\" still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
+		 */
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field \"%s\"",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field \"%s\" would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length-limited comparison and not compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
+ */
+static double
+parse_json_number(const char *s)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(s, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(false);
+		return 0;
+	}
+
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (INT_MAX <= parsed)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = round(parsed);
+
+	if (INT_MAX <= parsed)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
+		 *
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		/*- translator: the term "kqueue" (kernel queue) should not be translated */
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	/* Enable/disable the timer itself. */
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
+		   0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#if HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "getting timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "checking kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
+
+	/* Prefixes are modeled on the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	initPQExpBuffer(&buf);
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
+	 */
+	for (int i = 0; i < size; i++)
+	{
+		char		c = data[i];
+
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "[libcurl] %s ", prefix);
+			printed_prefix = true;
+		}
+
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's not
+			 * helpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", (unsigned char) c);
+
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
+	}
+
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
+	 *
+	 * TODO: Perhaps there's a clever way to warn the user about synchronous
+	 * DNS at runtime too? It's not immediately clear how to do that in a
+	 * helpful way: for many standard single-threaded use cases, the user
+	 * might not care at all, so spraying warnings to stderr would probably do
+	 * more harm than good.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving error information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * make sure the order of these two calls is kept.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl, which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each chunk of
+ * data is defined by CURL_MAX_WRITE_SIZE, which is 16kB by default (and can
+ * only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in Curl 7.87.0. If it's not defined, we
+ * can define it as a simple pass-through.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations that use 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * threadsafe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
+								"\tCurl initialization was reported threadsafe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+		actx->timerfd = -1;
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				if (!timer_expired(actx))
+				{
+					conn->altsock = actx->timerfd;
+					return PGRES_POLLING_READING;
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer.
+				 */
+				conn->altsock = actx->timerfd;
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..8beae9604c7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1153 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise,
+ * conn->oauth_token must be set; it will be sent as the connection's bearer
+ * token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	/* Only top-level keys are considered. */
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert, and in production builds, stop processing rather than
+			 * continue with a stale target.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,	/* don't bother translating */
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately
+ * (e.g. if it has an unexpired copy cached) or set up a series of
+ * asynchronous callbacks that will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * We have neither a token nor a discovery URI with which to
+				 * request one, so explicitly ask the server for its discovery
+				 * information.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,86 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 89e78b7d114..4e4be3fa511 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index a57077b682e..2b057451473 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..f297ed5c968
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..138a8104622
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests run end-to-end, exercising both sides at once. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
+since localhost HTTP servers will be started. A Python installation is required
+to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..f77a3e115c6
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,42 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation; its validation
+ *	  callback always fails
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
+										 const char *token,
+										 const char *role);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static ValidatorModuleResult *
+fail_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..4b78c90557c
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,69 @@
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..fc003030ff8
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,293 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static bool stress_async = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..f0b918390fd
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,566 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike($stderr, qr/connection to database failed/, "stress-async: stderr matches");
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..95cccf90dd8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..f0f23d1d1a8
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2024, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is a glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the original Perl implementation was ported to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..4faf3323d38
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..ef9bbb2866f
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,135 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
+											 const char *token,
+											 const char *role);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static ValidatorModuleResult *
+validate_token(ValidatorModuleState *state, const char *token, const char *role)
+{
+	ValidatorModuleResult *res;
+
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	res = palloc(sizeof(ValidatorModuleResult));
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return res;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b6c170ac249..ed8ef8ddc89 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1952,6 +1959,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3091,6 +3099,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3488,6 +3498,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

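A note for reviewers on the first patch: the mock server's `/param/` issuer in
oauth_server.py smuggles its per-test configuration through the OAuth
`client_id` field as Base64-encoded JSON, which `do_POST()` decodes into
`self._test_params` and `_get_param()` consults per stage. A minimal sketch of
that round trip (the parameter values below are illustrative only, not taken
from any particular test):

```python
import base64
import json

# A test encodes the server behavior it wants as Base64(JSON) in client_id.
# These keys mirror the _get_param()/_should_modify() callers in
# oauth_server.py; the values here are made up for illustration.
test_params = {
    "stage": "token",  # which exchange stage to modify
    "interval": 1,     # polling interval to advertise to the client
    "retries": 2,      # "authorization_pending" responses before success
}
client_id = base64.b64encode(json.dumps(test_params).encode()).decode()

# The server side then recovers the parameters from the POST body's
# client_id, exactly as do_POST() does for issuers under the /param/ prefix.
decoded = json.loads(base64.b64decode(client_id))
assert decoded == test_params
```

This keeps the Perl tests free of any out-of-band channel to the Python
daemon: everything rides along inside the normal OAuth request fields.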
Attachment: v49-0002-v48-fixup-patches-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From e723c2040405237155eeca58cab675866763b876 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 7 Feb 2025 14:23:40 -0800
Subject: [PATCH v49 2/4] v48 fixup patches! Add OAUTHBEARER SASL mechanism

---
 doc/src/sgml/libpq.sgml                       | 40 +++++++++++++++-
 doc/src/sgml/oauth-validators.sgml            | 31 ++++++++----
 src/backend/libpq/auth-oauth.c                | 20 ++++++--
 src/include/libpq/oauth.h                     | 48 ++++++++++++++++++-
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 14 +++++-
 .../modules/oauth_validator/fail_validator.c  | 15 ++++--
 .../modules/oauth_validator/t/001_server.pl   | 11 ++++-
 src/test/modules/oauth_validator/validator.c  | 28 +++++++----
 8 files changed, 175 insertions(+), 32 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index a51355e238f..b2abae8deee 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10133,8 +10133,46 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   TODO
+   libpq implements support for the OAuth v2 Device Authorization client flow,
+   documented in
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
+   which it will attempt to use by default if the server
+   <link linkend="auth-oauth">requests a bearer token</link> during
+   authentication. This flow can be utilized even if the system running the
+   client application does not have a usable web browser, for example when
+   running a client via SSH. Client applications may implement their own flows
+   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
   </para>
+  <para>
+   The builtin flow will, by default, print a URL to visit and a user code to
+   enter there:
+<programlisting>
+$ psql 'dbname=postgres oauth_issuer=https://example.com oauth_client_id=...'
+Visit https://example.com/device and enter the code: ABCD-EFGH
+</programlisting>
+   (This prompt may be
+   <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
+   You will then log into your OAuth provider, which will ask whether you want
+   to allow libpq and the server to perform actions on your behalf. It is always
+   a good idea to carefully review the URL and permissions displayed, to ensure
+   they match your expectations, before continuing. Do not give permissions to
+   untrusted third parties.
+  </para>
+  <para>
+   For an OAuth client flow to be usable, the connection string must at minimum
+   contain <xref linkend="libpq-connect-oauth-issuer"/> and
+   <xref linkend="libpq-connect-oauth-client-id"/>. (These settings are
+   determined by your organization's OAuth provider.) The builtin flow
+   additionally requires the OAuth authorization server to publish a device
+   authorization endpoint.
+  </para>
+
+  <note>
+   <para>
+    The builtin Device Authorization flow is not currently supported on Windows.
+    Custom client flows may still be implemented.
+   </para>
+  </note>
 
   <sect2 id="libpq-oauth-authdata-hooks">
    <title>Authdata Hooks</title>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index d0bca9196d9..eb8c4431c2d 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -41,7 +41,9 @@
   <sect2 id="oauth-validator-design-responsibilities">
    <title>Validator Responsibilities</title>
    <para>
-    TODO
+    Although different modules may take very different approaches to token
+    validation, implementations generally need to perform three separate
+    actions:
    </para>
    <variablelist>
     <varlistentry>
@@ -121,6 +123,11 @@
        </footnote>
        if users are not prompted for additional scopes.
       </para>
+      <para>
+       Even if authorization fails, a module may choose to continue to pull
+       authentication information from the token for use in auditing and
+       debugging.
+      </para>
      </listitem>
     </varlistentry>
     <varlistentry>
@@ -290,13 +297,15 @@
    validator module a function named
    <function>_PG_oauth_validator_module_init</function> must be provided. The
    return value of the function must be a pointer to a struct of type
-   <structname>OAuthValidatorCallbacks</structname>, which contains pointers to
-   the module's token validation functions. The returned
+   <structname>OAuthValidatorCallbacks</structname>, which contains a magic
+   number and pointers to the module's token validation functions. The returned
    pointer must be of server lifetime, which is typically achieved by defining
    it as a <literal>static const</literal> variable in global scope.
 <programlisting>
 typedef struct OAuthValidatorCallbacks
 {
+    uint32        magic;            /* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
     ValidatorStartupCB startup_cb;
     ValidatorShutdownCB shutdown_cb;
     ValidatorValidateCB validate_cb;
@@ -341,14 +350,16 @@ typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
     previous calls will be available in <structfield>state->private_data</structfield>.
 
 <programlisting>
-typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+                                     const char *token, const char *role,
+                                     ValidatorModuleResult *result);
 </programlisting>
 
     <replaceable>token</replaceable> will contain the bearer token to validate.
     The server has ensured that the token is well-formed syntactically, but no
     other validation has been performed.  <replaceable>role</replaceable> will
     contain the role the user has requested to log in as.  The callback must
-    return a palloc'd <literal>ValidatorModuleResult</literal> struct, which is
+    set output parameters in the <literal>result</literal> struct, which is
     defined as below:
 
 <programlisting>
@@ -368,17 +379,17 @@ typedef struct ValidatorModuleResult
     determined.
    </para>
    <para>
-    The caller assumes ownership of the returned memory allocation, the
-    validator module should not in any way access the memory after it has been
-    returned.  A validator may instead return NULL to signal an internal
-    error.
+    A validator may return <literal>false</literal> to signal an internal error,
+    in which case any result parameters are ignored and the connection fails.
+    Otherwise the validator should return <literal>true</literal> to indicate
+    that it has processed the token and made an authorization decision.
    </para>
    <para>
     The behavior after <function>validate_cb</function> returns depends on the
     specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
     name must exactly match the role that the user is logging in as.  (This
     behavior may be modified with a usermap.)  But when authenticating against
-    an HBA rule with <literal>trust_validator_authz</literal> turned on, the
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
     server will not perform any checks on the value of
     <structfield>authn_id</structfield> at all; in this case it is up to the
     validator to ensure that the token carries enough privileges for the user to
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index aa16977c643..e2b5d1ed913 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -656,9 +656,9 @@ validate(Port *port, const char *auth)
 				errmsg("validation of OAuth token requested without a validator loaded"));
 
 	/* Call the validation function from the validator module */
-	ret = ValidatorCallbacks->validate_cb(validator_module_state,
-										  token, port->user_name);
-	if (ret == NULL)
+	ret = palloc0(sizeof(ValidatorModuleResult));
+	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
+										 port->user_name, ret))
 	{
 		ereport(LOG, errmsg("internal error in OAuth validator module"));
 		return false;
@@ -756,8 +756,22 @@ load_validator_library(const char *libname)
 	ValidatorCallbacks = (*validator_init) ();
 	Assert(ValidatorCallbacks);
 
+	/*
+	 * Check the magic number, to protect against break-glass scenarios where
+	 * the ABI must change within a major version. load_external_function()
+	 * already checks for compatibility across major versions.
+	 */
+	if (ValidatorCallbacks->magic != PG_OAUTH_VALIDATOR_MAGIC)
+		ereport(ERROR,
+				errmsg("%s module \"%s\": magic number mismatch",
+					   "OAuth validator", libname),
+				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
+						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
+
 	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	validator_module_state->sversion = PG_VERSION_NUM;
+
 	if (ValidatorCallbacks->startup_cb != NULL)
 		ValidatorCallbacks->startup_cb(validator_module_state);
 
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 4fcdda74305..7e249613e10 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -20,26 +20,72 @@ extern PGDLLIMPORT char *oauth_validator_libraries_string;
 
 typedef struct ValidatorModuleState
 {
+	/* Holds the server's PG_VERSION_NUM. Reserved for future extensibility. */
+	int			sversion;
+
+	/*
+	 * Private data pointer for use by a validator module. This can be used to
+	 * store state for the module that will be passed to each of its
+	 * callbacks.
+	 */
 	void	   *private_data;
 } ValidatorModuleState;
 
 typedef struct ValidatorModuleResult
 {
+	/*
+	 * Should be set to true if the token carries sufficient permissions for
+	 * the bearer to connect.
+	 */
 	bool		authorized;
+
+	/*
+	 * If the token authenticates the user, this should be set to a palloc'd
+	 * string containing the SYSTEM_USER to use for HBA mapping. Consider
+	 * setting this even if result->authorized is false so that DBAs may use
+	 * the logs to match end users to token failures.
+	 *
+	 * This is required if the module is not configured for ident mapping
+	 * delegation. See the validator module documentation for details.
+	 */
 	char	   *authn_id;
 } ValidatorModuleResult;
 
+/*
+ * Validator module callbacks
+ *
+ * These callback functions should be defined by validator modules and returned
+ * via _PG_oauth_validator_module_init().  ValidatorValidateCB is the only
+ * required callback. For more information about the purpose of each callback,
+ * refer to the OAuth validator modules documentation.
+ */
 typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
 typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
-typedef ValidatorModuleResult *(*ValidatorValidateCB) (ValidatorModuleState *state, const char *token, const char *role);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+									 const char *token, const char *role,
+									 ValidatorModuleResult *result);
+
+/*
+ * Identifies the compiled ABI version of the validator module. Since the server
+ * already enforces the PG_MODULE_MAGIC number for modules across major
+ * versions, this is reserved for emergency use within a stable release line.
+ * May it never need to change.
+ */
+#define PG_OAUTH_VALIDATOR_MAGIC 0x20250207
 
 typedef struct OAuthValidatorCallbacks
 {
+	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
 	ValidatorStartupCB startup_cb;
 	ValidatorShutdownCB shutdown_cb;
 	ValidatorValidateCB validate_cb;
 } OAuthValidatorCallbacks;
 
+/*
+ * Type of the shared library symbol _PG_oauth_validator_module_init that is
+ * looked up when loading a validator module.
+ */
 typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
 extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
 
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 2179bb89800..74323de309a 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -32,7 +32,19 @@
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
 
-#define MAX_OAUTH_RESPONSE_SIZE (1024 * 1024)
+/*
+ * It's generally prudent to set a maximum response size to buffer in memory,
+ * but it's less clear what size to choose. The biggest of our expected
+ * responses is the server metadata JSON, which will only continue to grow in
+ * size; the number of IANA-registered parameters in that document is up to 78
+ * as of February 2025.
+ *
+ * Even if every single parameter were to take up 2k on average (a previously
+ * common limit on the size of a URL), 256k gives us 128 parameter values before
+ * we give up. (That's almost certainly complete overkill in practice; 2-4k
+ * appears to be common among popular providers at the moment.)
+ */
+#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)
 
 /*
  * Parsed JSON Representations
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
index f77a3e115c6..7b1e69518d9 100644
--- a/src/test/modules/oauth_validator/fail_validator.c
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -19,12 +19,15 @@
 
 PG_MODULE_MAGIC;
 
-static ValidatorModuleResult *fail_token(ValidatorModuleState *state,
-										 const char *token,
-										 const char *role);
+static bool fail_token(const ValidatorModuleState *state,
+					   const char *token,
+					   const char *role,
+					   ValidatorModuleResult *result);
 
 /* Callback implementations (we only need the main one) */
 static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
 	.validate_cb = fail_token,
 };
 
@@ -34,8 +37,10 @@ _PG_oauth_validator_module_init(void)
 	return &validator_callbacks;
 }
 
-static ValidatorModuleResult *
-fail_token(ValidatorModuleState *state, const char *token, const char *role)
+static bool
+fail_token(const ValidatorModuleState *state,
+		   const char *token, const char *role,
+		   ValidatorModuleResult *res)
 {
 	elog(FATAL, "fail_validator: sentinel error");
 	pg_unreachable();
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index f0b918390fd..d2dda62a2d4 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -14,12 +14,18 @@ use MIME::Base64 qw(encode_base64);
 use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
+use Config;
 
 use FindBin;
 use lib $FindBin::RealBin;
 
 use OAuth::Server;
 
+if ($Config{osname} eq 'MSWin32')
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
 if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
 {
 	plan skip_all =>
@@ -402,7 +408,10 @@ note "running '" . join("' '", @cmd) . "'";
 my ($stdout, $stderr) = run_command(\@cmd);
 
 like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
-unlike($stderr, qr/connection to database failed/, "stress-async: stderr matches");
+unlike(
+	$stderr,
+	qr/connection to database failed/,
+	"stress-async: stderr matches");
 
 #
 # This section of tests reconfigures the validator module itself, rather than
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index ef9bbb2866f..e218f5c8902 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -23,12 +23,15 @@ PG_MODULE_MAGIC;
 
 static void validator_startup(ValidatorModuleState *state);
 static void validator_shutdown(ValidatorModuleState *state);
-static ValidatorModuleResult *validate_token(ValidatorModuleState *state,
-											 const char *token,
-											 const char *role);
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
 
 /* Callback implementations (exercise all three) */
 static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
 	.startup_cb = validator_startup,
 	.shutdown_cb = validator_shutdown,
 	.validate_cb = validate_token
@@ -89,6 +92,13 @@ _PG_oauth_validator_module_init(void)
 static void
 validator_startup(ValidatorModuleState *state)
 {
+	/*
+	 * Make sure the server is correctly setting sversion. (Real modules
+	 * should not do this; it would defeat upgrade compatibility.)
+	 */
+	if (state->sversion != PG_VERSION_NUM)
+		elog(ERROR, "oauth_validator: sversion set to %d", state->sversion);
+
 	state->private_data = PRIVATE_COOKIE;
 }
 
@@ -108,18 +118,16 @@ validator_shutdown(ValidatorModuleState *state)
  * Validator implementation. Logs the incoming data and authorizes the token by
  * default; the behavior can be modified via the module's GUC settings.
  */
-static ValidatorModuleResult *
-validate_token(ValidatorModuleState *state, const char *token, const char *role)
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
 {
-	ValidatorModuleResult *res;
-
 	/* Check to make sure our private state still exists. */
 	if (state->private_data != PRIVATE_COOKIE)
 		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
 			 state->private_data);
 
-	res = palloc(sizeof(ValidatorModuleResult));
-
 	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
 	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
 		 MyProcPort->hba->oauth_issuer,
@@ -131,5 +139,5 @@ validate_token(ValidatorModuleState *state, const char *token, const char *role)
 	else
 		res->authn_id = pstrdup(role);
 
-	return res;
+	return true;
 }
-- 
2.39.3 (Apple Git-146)

Attachment: v49-0003-XXX-fix-libcurl-link-error.patch (application/octet-stream)
From 03742dd74dd3948c2d03b13009b08fbe09f928d8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v49 3/4] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2f5f5ef21a8..91b51142d2e 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.39.3 (Apple Git-146)

Attachment: v49-0004-v49-Fixups-proposed-by-Daniel.patch (application/octet-stream)
From eaa6078e9fe70940c359cca0883227746456fc94 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Thu, 13 Feb 2025 23:45:50 +0100
Subject: [PATCH v49 4/4] v49 Fixups proposed by Daniel

---
 config/programs.m4                            |  4 +-
 doc/src/sgml/client-auth.sgml                 |  8 +-
 doc/src/sgml/installation.sgml                | 10 +--
 doc/src/sgml/libpq.sgml                       | 25 +++---
 doc/src/sgml/oauth-validators.sgml            | 27 ++++---
 meson.build                                   |  8 +-
 src/backend/libpq/auth-oauth.c                | 79 +++++++++++++------
 src/include/common/oauth-common.h             |  2 +-
 src/include/libpq/oauth.h                     |  7 +-
 src/include/pg_config.h.in                    |  2 +-
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 18 +++--
 src/interfaces/libpq/fe-auth-oauth.c          |  4 +-
 src/test/modules/oauth_validator/Makefile     |  2 +-
 src/test/modules/oauth_validator/README       |  6 +-
 .../modules/oauth_validator/fail_validator.c  |  6 +-
 .../modules/oauth_validator/magic_validator.c | 48 +++++++++++
 src/test/modules/oauth_validator/meson.build  | 18 ++++-
 .../oauth_validator/oauth_hook_client.c       |  2 +-
 .../modules/oauth_validator/t/001_server.pl   | 31 ++++++--
 .../modules/oauth_validator/t/002_client.pl   |  2 +-
 .../modules/oauth_validator/t/OAuth/Server.pm |  4 +-
 src/test/modules/oauth_validator/validator.c  |  2 +-
 22 files changed, 220 insertions(+), 95 deletions(-)
 create mode 100644 src/test/modules/oauth_validator/magic_validator.c

diff --git a/config/programs.m4 b/config/programs.m4
index ead427046f5..061b13376ac 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -280,7 +280,7 @@ AC_DEFUN([PGAC_CHECK_STRIP],
 # PGAC_CHECK_LIBCURL
 # ------------------
 # Check for required libraries and headers, and test to see whether the current
-# installation of libcurl is threadsafe.
+# installation of libcurl is thread-safe.
 
 AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
@@ -313,7 +313,7 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
   [pgac_cv__libcurl_threadsafe_init=unknown])])
   if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
     AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
-              [Define to 1 if curl_global_init() is guaranteed to be threadsafe.])
+              [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
   fi
 
   # Warn if a thread-friendly DNS resolver isn't built.
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index f84085dbac4..6fc0da57f1b 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -2397,7 +2397,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
       <term>Resource Server</term>
       <listitem>
        <para>
-        The system which hosts the protected resources which are
+        The system hosting the protected resources which are
         accessed by the client. The <productname>PostgreSQL</productname>
         cluster being connected to is the resource server.
        </para>
@@ -2409,7 +2409,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
       <listitem>
        <para>
         The organization, product vendor, or other entity which develops and/or
-        administers the OAuth servers and clients for a given application.
+        administers the OAuth resource servers and clients for a given application.
         Different providers typically choose different implementation details
         for their OAuth systems; a client of one provider is not generally
         guaranteed to have access to the servers of another.
@@ -2432,7 +2432,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
         The system which receives requests from, and issues access tokens to,
         the client after the authenticated resource owner has given approval.
         <productname>PostgreSQL</productname> does not provide an authorization
-        server; it's obtained from the OAuth provider.
+        server; it is the responsibility of the OAuth provider.
        </para>
       </listitem>
      </varlistentry>
@@ -2500,7 +2500,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
          exactly match the issuer identifier which is provided in the discovery
          document, which must in turn match the client's
          <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
-         case or format are permitted.
+         case or formatting are permitted.
         </para>
        </warning>
       </listitem>
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 96e433179b9..3c95c15a1e4 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1148,8 +1148,8 @@ build-postgresql:
        <listitem>
         <para>
          Build with libcurl support for OAuth 2.0 client flows.
-         This requires the <productname>curl</productname> package to be
-         installed.  Building with this will check for the required header files
+         Libcurl version 7.61.0 or later is required for this feature.
+         Building with this will check for the required header files
          and libraries to make sure that your <productname>curl</productname>
          installation is sufficient before proceeding.
         </para>
@@ -2602,9 +2602,9 @@ ninja install
       <listitem>
        <para>
         Build with libcurl support for OAuth 2.0 client flows.
-        This requires the <productname>curl</productname> package to be
-        installed.  Building with this will check for the required header files
-        and libraries to make sure that your <productname>curl</productname>
+        Libcurl version 7.61.0 or later is required for this feature.
+        Building with this will check for the required header files
+        and libraries to make sure that your <productname>Curl</productname>
         installation is sufficient before proceeding. The default for this
         option is auto.
        </para>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index b2abae8deee..ca84226755d 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -2390,7 +2390,7 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
         The HTTPS URL of a trusted issuer to contact if the server requests an
         OAuth token for the connection. This parameter is required for all OAuth
         connections; it should exactly match the <literal>issuer</literal>
-        setting in <link linkend="auth-oauth">the server's HBA configuration.</link>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
        </para>
        <para>
         As part of the standard authentication handshake, <application>libpq</application>
@@ -2399,8 +2399,9 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
         provide a URL that is directly constructed from the components of the
         <literal>oauth_issuer</literal>, and this value must exactly match the
         issuer identifier that is declared in the discovery document itself, or
-        the connection will fail. This is required to prevent a class of "mix-up
-        attacks" on OAuth clients.
+        the connection will fail. This is required to prevent a class of
+        <ulink url="https://mailarchive.ietf.org/arch/msg/oauth/JIVxFBGsJBVtm7ljwJhPUm3Fr-w/">
+        "mix-up attacks"</ulink> on OAuth clients.
        </para>
        <para>
         You may also explicitly set <literal>oauth_issuer</literal> to the
@@ -10140,7 +10141,7 @@ void PQinitSSL(int do_ssl);
    <link linkend="auth-oauth">requests a bearer token</link> during
    authentication. This flow can be utilized even if the system running the
    client application does not have a usable web browser, for example when
-   running a client via SSH. Client applications may implement their own flows
+   running a client via <application>SSH</application>. Client applications may implement their own flows
    instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
   </para>
   <para>
@@ -10152,11 +10153,11 @@ Visit https://example.com/device and enter the code: ABCD-EFGH
 </programlisting>
    (This prompt may be
    <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
-   You will then log into your OAuth provider, which will ask whether you want
-   to allow libpq and the server to perform actions on your behalf. It is always
+   The user will then log into their OAuth provider, which will ask whether
+   to allow libpq and the server to perform actions on their behalf. It is always
    a good idea to carefully review the URL and permissions displayed, to ensure
-   they match your expectations, before continuing. Do not give permissions to
-   untrusted third parties.
+   they match expectations, before continuing. Permissions should not be given
+   to untrusted third parties.
   </para>
   <para>
    For an OAuth client flow to be usable, the connection string must at minimum
@@ -10199,7 +10200,7 @@ void PQsetAuthDataHook(PQauthDataHook_type hook);
 <programlisting>
 int hook_fn(PGauthData type, PGconn *conn, void *data);
 </programlisting>
-        which <application>libpq</application> will call when when action is
+        which <application>libpq</application> will call when an action is
         required of the application. <replaceable>type</replaceable> describes
         the request being made, <replaceable>conn</replaceable> is the
         connection handle being authenticated, and <replaceable>data</replaceable>
@@ -10431,7 +10432,7 @@ typedef struct _PGoauthBearerRequest
      </listitem>
      <listitem>
       <para>
-       sprays HTTP traffic (containing several critical secrets) to standard
+       prints HTTP traffic (containing several critical secrets) to standard
        error during the OAuth flow
       </para>
      </listitem>
@@ -10526,13 +10527,13 @@ int PQisthreadsafe();
   </para>
 
   <para>
-   Similarly, if you are using Curl inside your application,
 
+   Similarly, if you are using <productname>Curl</productname> inside your application,
    <emphasis>and</emphasis> you do not already
    <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
    libcurl globally</ulink> before starting new threads, you will need to
    cooperatively lock (again via <function>PQregisterThreadLock</function>)
    around any code that may initialize libcurl. This restriction is lifted for
-   more recent versions of Curl that are built to support threadsafe
+   more recent versions of <productname>Curl</productname> that are built to support thread-safe
    initialization; those builds can be identified by the advertisement of a
    <literal>threadsafe</literal> feature in their version metadata.
   </para>
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index eb8c4431c2d..e9d28d3daea 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -10,8 +10,8 @@
   custom modules to perform server-side validation of OAuth bearer tokens.
   Because OAuth implementations vary so wildly, and bearer token validation is
   heavily dependent on the issuing party, the server cannot check the token
-  itself; validator modules provide the glue between the server and the OAuth
-  provider in use.
+  itself; validator modules provide the integration layer between the server
+  and the OAuth provider in use.
  </para>
  <para>
   OAuth validator modules must at least consist of an initialization function
@@ -21,7 +21,7 @@
  <warning>
   <para>
    Since a misbehaving validator might let unauthorized users into the database,
-   correct implementation is critical. See
+   validating the correctness of the implementation is critical. See
    <xref linkend="oauth-validator-design"/> for design considerations.
   </para>
  </warning>
@@ -290,8 +290,9 @@
    <primary>_PG_oauth_validator_module_init</primary>
   </indexterm>
   <para>
-   An OAuth validator module is loaded by dynamically loading one of the shared
+   OAuth validator modules are dynamically loaded from the shared
    libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   Modules are loaded on demand, when first requested during a login attempt.
    The normal library search path is used to locate the library. To
    provide the validator callbacks and to indicate that the library is an OAuth
    validator module a function named
@@ -356,7 +357,7 @@ typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
 </programlisting>
 
     <replaceable>token</replaceable> will contain the bearer token to validate.
-    The server has ensured that the token is well-formed syntactically, but no
+    <application>libpq</application> has ensured that the token is well-formed syntactically, but no
     other validation has been performed.  <replaceable>role</replaceable> will
     contain the role the user has requested to log in as.  The callback must
     set output parameters in the <literal>result</literal> struct, which is
@@ -371,10 +372,10 @@ typedef struct ValidatorModuleResult
 </programlisting>
 
     The connection will only proceed if the module sets
-    <structfield>authorized</structfield> to <literal>true</literal>.  To
+    <structfield>result->authorized</structfield> to <literal>true</literal>.  To
     authenticate the user, the authenticated user name (as determined using the
-    token) shall be palloc'd and returned in the <structfield>authn_id</structfield>
-    field.  Alternatively, <structfield>authn_id</structfield> may be set to
+    token) shall be palloc'd and returned in the <structfield>result->authn_id</structfield>
+    field.  Alternatively, <structfield>result->authn_id</structfield> may be set to
     NULL if the token is valid but the associated user identity cannot be
     determined.
    </para>
@@ -386,12 +387,12 @@ typedef struct ValidatorModuleResult
    </para>
    <para>
     The behavior after <function>validate_cb</function> returns depends on the
-    specific HBA setup.  Normally, the <structfield>authn_id</structfield> user
+    specific HBA setup.  Normally, the <structfield>result->authn_id</structfield> user
     name must exactly match the role that the user is logging in as.  (This
     behavior may be modified with a usermap.)  But when authenticating against
-    an HBA rule with <literal>delegate_ident_mapping</literal> turned on, the
-    server will not perform any checks on the value of
-    <structfield>authn_id</structfield> at all; in this case it is up to the
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on,
+    <productname>PostgreSQL</productname> will not perform any checks on the value of
+    <structfield>result->authn_id</structfield> at all; in this case it is up to the
     validator to ensure that the token carries enough privileges for the user to
     log in under the indicated <replaceable>role</replaceable>.
    </para>
@@ -402,7 +403,7 @@ typedef struct ValidatorModuleResult
    <para>
     The <function>shutdown_cb</function> callback is executed when the backend
     process associated with the connection exits. If the validator module has
-    any state, this callback should free it to avoid resource leaks.
+    any allocated state, this callback should free it to avoid resource leaks.
 <programlisting>
 typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
 </programlisting>
diff --git a/meson.build b/meson.build
index 5c35159b4f1..574f992ed49 100644
--- a/meson.build
+++ b/meson.build
@@ -867,7 +867,7 @@ if not libcurlopt.disabled()
   if libcurl.found()
     cdata.set('USE_LIBCURL', 1)
 
-    # Check to see whether the current platform supports threadsafe Curl
+    # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
 
@@ -897,11 +897,11 @@ if not libcurlopt.disabled()
       assert(r.compiled())
       if r.returncode() == 0
         libcurl_threadsafe_init = true
-        message('curl_global_init is threadsafe')
+        message('curl_global_init is thread-safe')
       elif r.returncode() == 1
-        message('curl_global_init is not threadsafe')
+        message('curl_global_init is not thread-safe')
       else
-        message('curl_global_init failed; assuming not threadsafe')
+        message('curl_global_init failed; assuming not thread-safe')
       endif
     endif
 
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index e2b5d1ed913..db56cd8c200 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -4,9 +4,9 @@
  *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
  *
  * See the following RFC for more details:
- * - RFC 7628: https://tools.ietf.org/html/rfc7628
+ * - RFC 7628: https://datatracker.ietf.org/doc/html/rfc7628
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/backend/libpq/auth-oauth.c
@@ -70,7 +70,7 @@ struct oauth_ctx
 	const char *scope;
 };
 
-static char *sanitize_char(char c);
+static void sanitize_char(char c, char *buf, size_t buflen);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
 static bool validate(Port *port, const char *auth);
@@ -139,6 +139,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	char		cbind_flag;
 	char	   *auth;
 	int			status;
+	char		errmsgbuf[5];
 
 	struct oauth_ctx *ctx = opaq;
 
@@ -162,6 +163,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 
 	/*
 	 * Check that the input length agrees with the string length of the input.
+	 * Possible reasons for discrepancies include embedded nulls in the string.
 	 */
 	if (inputlen == 0)
 		ereport(ERROR,
@@ -223,22 +225,29 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 
 		case 'y':				/* fall through */
 		case 'n':
-			p++;
+			if (!*(++p))
+				goto endofmessage;
+
 			if (*p != ',')
+			{
+				sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 				ereport(ERROR,
 						errcode(ERRCODE_PROTOCOL_VIOLATION),
 						errmsg("malformed OAUTHBEARER message"),
 						errdetail("Comma expected, but found character \"%s\".",
-								  sanitize_char(*p)));
-			p++;
+								  errmsgbuf));
+			}
+			if (!*(++p))
+				goto endofmessage;
 			break;
 
 		default:
+			sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 			ereport(ERROR,
 					errcode(ERRCODE_PROTOCOL_VIOLATION),
 					errmsg("malformed OAUTHBEARER message"),
 					errdetail("Unexpected channel-binding flag \"%s\".",
-							  sanitize_char(cbind_flag)));
+							  errmsgbuf));
 	}
 
 	/*
@@ -249,21 +258,29 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				errmsg("client uses authorization identity, but it is not supported"));
 	if (*p != ',')
+	{
+		sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 		ereport(ERROR,
 				errcode(ERRCODE_PROTOCOL_VIOLATION),
 				errmsg("malformed OAUTHBEARER message"),
 				errdetail("Unexpected attribute \"%s\" in client-first-message.",
-						  sanitize_char(*p)));
-	p++;
+						  errmsgbuf));
+	}
+	if (!*(++p))
+		goto endofmessage;
 
 	/* All remaining fields are separated by the RFC's kvsep (\x01). */
 	if (*p != KVSEP)
+	{
+		sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 		ereport(ERROR,
 				errcode(ERRCODE_PROTOCOL_VIOLATION),
 				errmsg("malformed OAUTHBEARER message"),
 				errdetail("Key-value separator expected, but found character \"%s\".",
-						  sanitize_char(*p)));
-	p++;
+						  errmsgbuf));
+	}
+	if (!*(++p))
+		goto endofmessage;
 
 	auth = parse_kvpairs_for_auth(&p);
 	if (!auth)
@@ -296,6 +313,13 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	explicit_bzero(input_copy, inputlen);
 
 	return status;
+
+endofmessage:
+	explicit_bzero(input_copy, inputlen);
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"));
+	pg_unreachable();
 }
 
 /*
@@ -303,19 +327,14 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
  *
  * If it's a printable ASCII character, print it as a single character.
  * otherwise, print it in hex.
- *
- * The returned pointer points to a static buffer.
  */
-static char *
-sanitize_char(char c)
+static void
+sanitize_char(char c, char *buf, size_t buflen)
 {
-	static char buf[5];
-
 	if (c >= 0x21 && c <= 0x7E)
-		snprintf(buf, sizeof(buf), "'%c'", c);
+		snprintf(buf, buflen, "'%c'", c);
 	else
-		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
-	return buf;
+		snprintf(buf, buflen, "0x%02x", (unsigned char) c);
 }
 
 /*
@@ -660,7 +679,9 @@ validate(Port *port, const char *auth)
 	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
 										 port->user_name, ret))
 	{
-		ereport(LOG, errmsg("internal error in OAuth validator module"));
+		ereport(WARNING,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("internal error in OAuth validator module"));
 		return false;
 	}
 
@@ -738,6 +759,11 @@ load_validator_library(const char *libname)
 	OAuthValidatorModuleInit validator_init;
 	MemoryContextCallback *mcb;
 
+	/*
+	 * The presence and validity of libname have already been established by
+	 * check_oauth_validator, so we don't need to perform more than
+	 * Assert-level checking here.
+	 */
 	Assert(libname && *libname);
 
 	validator_init = (OAuthValidatorModuleInit)
@@ -768,6 +794,15 @@ load_validator_library(const char *libname)
 				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
 						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
 
+	/*
+	 * Make sure all required callbacks are present in the ValidatorCallbacks
+	 * structure. Right now only the validation callback is required.
+	 */
+	if (ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "validate_cb"));
+
 	/* Allocate memory for validator library private state data */
 	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
 	validator_module_state->sversion = PG_VERSION_NUM;
@@ -804,7 +839,7 @@ bool
 check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
 {
 	int			line_num = hbaline->linenumber;
-	char	   *file_name = hbaline->sourcefile;
+	const char *file_name = hbaline->sourcefile;
 	char	   *rawstring;
 	List	   *elemlist = NIL;
 
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
index 8fe56267780..5fb559d84b2 100644
--- a/src/include/common/oauth-common.h
+++ b/src/include/common/oauth-common.h
@@ -3,7 +3,7 @@
  * oauth-common.h
  *		Declarations for helper functions used for OAuth/OIDC authentication
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/common/oauth-common.h
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
index 7e249613e10..2f01b669633 100644
--- a/src/include/libpq/oauth.h
+++ b/src/include/libpq/oauth.h
@@ -3,7 +3,7 @@
  * oauth.h
  *	  Interface to libpq/auth-oauth.c
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/libpq/oauth.h
@@ -83,8 +83,9 @@ typedef struct OAuthValidatorCallbacks
 } OAuthValidatorCallbacks;
 
 /*
- * Type of the shared library symbol _PG_oauth_validator_module_init that is
- * looked up when loading a validator module.
+ * Type of the shared library symbol _PG_oauth_validator_module_init which is
+ * required for all validator modules.  This function will be invoked during
+ * module loading.
  */
 typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
 extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index c04ee38d086..db6454090d2 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -445,7 +445,7 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
-/* Define to 1 if curl_global_init() is guaranteed to be threadsafe. */
+/* Define to 1 if curl_global_init() is guaranteed to be thread-safe. */
 #undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
 
 /* Define to 1 if your compiler understands `typeof' or something similar. */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 74323de309a..c9aa51b1007 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -4,7 +4,7 @@
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
@@ -917,7 +917,7 @@ parse_interval(struct async_ctx *actx, const char *interval_str)
 	if (parsed < 1)
 		return actx->debugging ? 0 : 1;
 
-	else if (INT_MAX <= parsed)
+	else if (parsed >= INT_MAX)
 		return INT_MAX;
 
 	return parsed;
@@ -940,7 +940,7 @@ parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
 	parsed = parse_json_number(expires_in_str);
 	parsed = round(parsed);
 
-	if (INT_MAX <= parsed)
+	if (parsed >= INT_MAX)
 		return INT_MAX;
 	else if (parsed <= INT_MIN)
 		return INT_MIN;
@@ -966,6 +966,10 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 		 */
 		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
 
+		/*
+		 * There is no evidence of any service provider spelling
+		 * verification_uri_complete with "url" instead, so only support "uri".
+		 */
 		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
 		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
 
@@ -1870,6 +1874,7 @@ append_urlencoded(PQExpBuffer buf, const char *s)
 	char	   *haystack;
 	char	   *match;
 
+	/* The CURL handle parameter to curl_easy_escape is deprecated; pass NULL */
 	escaped = curl_easy_escape(NULL, s, 0);
 	if (!escaped)
 	{
@@ -2273,6 +2278,7 @@ finish_device_authz(struct async_ctx *actx)
 			return false;
 		}
 
+		/* Copy the token error into the context error buffer */
 		record_token_error(actx, &err);
 
 		free_token_error(&err);
@@ -2541,7 +2547,7 @@ initialize_curl(PGconn *conn)
 
 	/*
 	 * If we determined at configure time that the Curl installation is
-	 * threadsafe, our job here is much easier. We simply initialize above
+	 * thread-safe, our job here is much easier. We simply initialize above
 	 * without any locking (concurrent or duplicated calls are fine in that
 	 * situation), then double-check to make sure the runtime setting agrees,
 	 * to try to catch silent downgrades.
@@ -2553,8 +2559,8 @@ initialize_curl(PGconn *conn)
 		 * In a downgrade situation, the damage is already done. Curl global
 		 * state may be corrupted. Be noisy.
 		 */
-		libpq_append_conn_error(conn, "libcurl is no longer threadsafe\n"
-								"\tCurl initialization was reported threadsafe when libpq\n"
+		libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
+								"\tCurl initialization was reported thread-safe when libpq\n"
 								"\twas compiled, but the currently installed version of\n"
 								"\tlibcurl reports that it is not. Recompile libpq against\n"
 								"\tthe installed version of libcurl.");
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 8beae9604c7..24448c3e209 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -4,7 +4,7 @@
  *	   The front-end (client) implementation of OAuth/OIDC authentication
  *	   using the SASL OAUTHBEARER mechanism.
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
@@ -272,7 +272,7 @@ oauth_json_scalar(void *state, char *token, JsonTokenType type)
 			 * Assert and don't continue any further for production builds.
 			 */
 			Assert(false);
-			oauth_json_set_error(ctx,	/* don't bother translating */
+			oauth_json_set_error(ctx,
 								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
 								 ctx->nested);
 			return JSON_SEM_ACTION_FAILED;
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index f297ed5c968..bbd2a98023b 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -2,7 +2,7 @@
 #
 # Makefile for src/test/modules/oauth_validator
 #
-# Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
 # Portions Copyright (c) 1994, Regents of the University of California
 #
 # src/test/modules/oauth_validator/Makefile
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
index 138a8104622..54eac5b117e 100644
--- a/src/test/modules/oauth_validator/README
+++ b/src/test/modules/oauth_validator/README
@@ -8,6 +8,6 @@ by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
 Authorization flow. The tests in t/002_client exercise custom OAuth flows and
 don't need an authorization server.
 
-Tests in this folder generally require 'oauth' to be present in PG_TEST_EXTRA,
-since localhost HTTP servers will be started. A Python installation is required
-to run the mock authorization server.
+Tests in this folder require 'oauth' to be present in PG_TEST_EXTRA, since
+HTTPS servers listening on localhost with TCP/IP sockets will be started. A
+Python installation is required to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
index 7b1e69518d9..a4c7a4451d3 100644
--- a/src/test/modules/oauth_validator/fail_validator.c
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -1,10 +1,10 @@
 /*-------------------------------------------------------------------------
  *
  * fail_validator.c
- *	  Test module for serverside OAuth token validation callbacks, which always
- *	  fails
+ *	  Test module for serverside OAuth token validation callbacks, which is
+ *	  guaranteed to always fail in the validation callback
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/test/modules/oauth_validator/fail_validator.c
diff --git a/src/test/modules/oauth_validator/magic_validator.c b/src/test/modules/oauth_validator/magic_validator.c
new file mode 100644
index 00000000000..5ce68cdf405
--- /dev/null
+++ b/src/test/modules/oauth_validator/magic_validator.c
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * magic_validator.c
+ *	  Test module for serverside OAuth token validation callbacks, which
+ *	  should fail due to using the wrong PG_OAUTH_VALIDATOR_MAGIC marker
+ *	  and thus the wrong ABI version
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/magic_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	0xdeadbeef,
+
+	.validate_cb = validate_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	elog(FATAL, "magic_validator: this should be unreachable");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 4b78c90557c..36d1b26369f 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -1,4 +1,4 @@
-# Copyright (c) 2024, PostgreSQL Global Development Group
+# Copyright (c) 2025, PostgreSQL Global Development Group
 
 validator_sources = files(
   'validator.c',
@@ -32,6 +32,22 @@ fail_validator = shared_module('fail_validator',
 )
 test_install_libs += fail_validator
 
+magic_validator_sources = files(
+  'magic_validator.c',
+)
+
+if host_system == 'windows'
+  magic_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'magic_validator',
+    '--FILEDESC', 'magic_validator - ABI incompatible OAuth validator module',])
+endif
+
+magic_validator = shared_module('magic_validator',
+  magic_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += magic_validator
+
 oauth_hook_client_sources = files(
   'oauth_hook_client.c',
 )
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
index fc003030ff8..9f553792c05 100644
--- a/src/test/modules/oauth_validator/oauth_hook_client.c
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -4,7 +4,7 @@
  *		Test driver for t/002_client.pl, which verifies OAuth hook
  *		functionality in libpq.
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index d2dda62a2d4..dada89e95cc 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -3,7 +3,7 @@
 # Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
 # setup.
 #
-# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
 #
 
 use strict;
@@ -14,24 +14,23 @@ use MIME::Base64 qw(encode_base64);
 use PostgreSQL::Test::Cluster;
 use PostgreSQL::Test::Utils;
 use Test::More;
-use Config;
 
 use FindBin;
 use lib $FindBin::RealBin;
 
 use OAuth::Server;
 
-if ($Config{osname} eq 'MSWin32')
-{
-	plan skip_all => 'OAuth server-side tests are not supported on Windows';
-}
-
 if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
 {
 	plan skip_all =>
 	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
 }
 
+if ($windows_os)
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
 if ($ENV{with_libcurl} ne 'yes')
 {
 	plan skip_all => 'client-side OAuth not supported by this build';
@@ -570,6 +569,24 @@ $node->connect_fails(
 	"fail_validator is used for $user",
 	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
 
+#
+# Test ABI compatibility magic marker
+#
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'magic_validator'\n");
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=magic_validator      issuer="$issuer"           scope="openid postgres"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"magic_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/);
 $node->stop;
 
 done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 95cccf90dd8..ab83258d736 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -2,7 +2,7 @@
 # Exercises the API for custom OAuth client flows, using the oauth_hook_client
 # test driver.
 #
-# Copyright (c) 2021-2024, PostgreSQL Global Development Group
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
 #
 
 use strict;
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
index f0f23d1d1a8..655b2870b0b 100644
--- a/src/test/modules/oauth_validator/t/OAuth/Server.pm
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -1,5 +1,5 @@
 
-# Copyright (c) 2024, PostgreSQL Global Development Group
+# Copyright (c) 2025, PostgreSQL Global Development Group
 
 =pod
 
@@ -46,7 +46,7 @@ use Test::More;
 
 =over
 
-=item SSL::Server->new()
+=item OAuth::Server->new()
 
 Create a new OAuth Server object.
 
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
index e218f5c8902..b2e5d182e1b 100644
--- a/src/test/modules/oauth_validator/validator.c
+++ b/src/test/modules/oauth_validator/validator.c
@@ -3,7 +3,7 @@
  * validator.c
  *	  Test module for serverside OAuth token validation callbacks
  *
- * Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/test/modules/oauth_validator/validator.c
-- 
2.39.3 (Apple Git-146)

#206 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#204)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 13 Feb 2025, at 22:23, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Wed, Feb 12, 2025 at 6:55 AM Peter Eisentraut <peter@eisentraut.org> wrote:

This just depends on how people have built their libcurl, right?

Do we have any information whether the async-dns-free build is a common
configuration?

I don't think the annual Curl survey covers that, unfortunately.

We should be able to get a decent idea by inspecting the packaging scripts for
the major distributions I think.

--
Daniel Gustafsson

#207 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#205)
4 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 13, 2025 at 2:56 PM Daniel Gustafsson <daniel@yesql.se> wrote:

To make it easier for you to see what I mean I have implemented most of the
comments and attached as a fixup patch, from which you can cherry-pick hunks
you agree with. Those I didn't implement should be marked as such below.

As we discussed off-list I took the liberty of squashing the previous fixup
patches into a single one, and squashed your fixes for my comments against v47
into 0001. All of my proposals are in 0004.

Great! I have attached v50; 0001 has almost all of v49 squashed in,
0002 has my changes on top (includes a pgindent/pgperltidy), and 0003
holds the only part I don't like (see below). 0004 contains the
FreeBSD hack that I suspect we'll need to merge in until Cirrus images
are updated.

+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth servers and clients for a given application.
Since we define terminology here, shouldn't this be "OAuth resource servers"?

The resource server is Postgres for our purposes; I've changed it to
"authorization servers".

+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
"Don't make any bugs" isn't very helpful advice =) Expanded on it slightly.

Hmm, I think the overloading of "validate" in the replacement text
could be confusing. I guess my point is less "don't write bugs" and
more "a bug here has extreme impact"? I've taken another shot at it;
see what you think.

+ The server has ensured that the token is well-formed syntactically, but no
"server" is an overloaded nomenclature here, perhaps using libpq instead to
clearly indicate that it's postgres and not an OAuth server.

I've replaced this with "PostgreSQL" to match up with Peter's earlier
feedback (we were using "libpq" to describe the backend and he wanted
to avoid that).

+sanitize_char(char c)
+{
+ static char buf[5];
With the multithreading work on the horizon we should probably avoid static
variables like these to not create work for our future selves? The code isn't
as neat when passing in a buffer/length but it avoids the need for a static or
threadlocal variable. Or am I overthinking this?

This is the only part of the feedback patch that I'm not a fan of,
mostly because it begins to diverge heavily from the SCRAM code it
copied from. I don't disagree with the goal of getting rid of the
static buffers, but I would like to see them modified at the same time
so that we can refactor easily if/when a third SASL mechanism shows
up. (Maybe with a psprintf() rather than buffers?)
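For illustration, the caller-buffer shape under discussion can be sketched in isolation like this (a standalone sketch, outside the ereport machinery; the function body mirrors the patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the caller-supplied-buffer variant: the caller owns the
 * storage, so there is no static buffer for concurrent exchanges (or a
 * future multithreaded backend) to clobber.
 */
static void
sanitize_char(char c, char *buf, size_t buflen)
{
	if (c >= 0x21 && c <= 0x7E)
		snprintf(buf, buflen, "'%c'", c);	/* printable ASCII as-is */
	else
		snprintf(buf, buflen, "0x%02x", (unsigned char) c);	/* hex otherwise */
}
```

A 5-byte buffer suffices for both forms ("'X'" and "0xNN", plus the NUL).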

+ p++;
+ if (*p != ',')
In the SASL exchange, are we certain that a rogue client cannot inject a
message which trips us past the end of the string? Should we double-check when
advancing p across the message?

The existing != checks will bail out if they get to the end of the
string. It relies on byte-at-a-time advancement for safety, as well as
the SASL code higher in the stack that ensures that the input buffer
is always null terminated. (SCRAM relies on that too.) If we ever
jumped farther than a byte, we'd need stronger checks, but at the
moment I don't think this change helps us.
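The invariant being relied on can be sketched with a hypothetical helper (advance_one is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical helper: advance one byte and report whether we hit the
 * terminating NUL. Because the SASL layer guarantees NUL termination and
 * we never jump more than one byte, a truncated message always lands on
 * the terminator instead of walking past it.
 */
static bool
advance_one(const char **pp)
{
	++(*pp);
	return **pp != '\0';
}
```

Any caller that sees false can bail out with a protocol-violation error, which is what the endofmessage path in the patch does.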

In load_validator_library we don't explicitly verify that the required callback
is defined in the returned structure; adding that seems like a cheap enough
belt-and-suspenders check.

Yeah, there's a later check at time of use, but it's not as
user-friendly. I've adjusted the new error message to make it a bit
closer to the logical plugin wording.

+       if (parsed < 1)
+               return actx->debugging ? 0 : 1;
Is 1 second a sane lower bound on interval for all situations?  I'm starting to
wonder if we should be more conservative here, or even make it configurable in
some way? The default if not set of 5 seconds is quite a lot higher than 1.

Mmm, maybe it should be made configurable, but one second seems like a
long time from a CPU perspective. Maybe it would be applicable to
embedded clients? But only if some provider out there actually starts
using smaller intervals than their clients can stand... Should we wait
to hear from someone who is interested in configuring it?
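For reference, the clamping behavior under discussion (extracted from the patch's parse_interval; a configurable floor would slot in where the literal 1 is):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/*
 * Sketch of parse_interval's clamping: enforce a 1-second floor in normal
 * operation (0 is allowed only in debug mode, to keep tests fast) and cap
 * the poll interval at INT_MAX.
 */
static int
clamp_interval(double parsed, bool debugging)
{
	if (parsed < 1)
		return debugging ? 0 : 1;
	else if (parsed >= INT_MAX)
		return INT_MAX;
	return (int) parsed;
}
```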

+       parsed = parse_json_number(expires_in_str);
+       parsed = round(parsed);
Shouldn't we floor() the value here to ensure we never report an expiration
time longer than the actual expiration?

Sounds reasonable. Done in 0002.

register_socket() doesn't have an error catch for the case when neither epoll
nor kqueue is supported. Shouldn't it set actx_error() here as well? (Not done
in my review patch.)

Done.

+       if (actx->curl_err[0])
+       {
+               size_t          len;
+
+               appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
Should this also qualify that the error comes from outside of postgres?
Something like "(libcurl:%s)" to match?  I haven't changed this in the attached
since I'm still on the fence, but I'm leaning towards doing it.
Thoughts?

Done. More context is probably better than less here.

-   * We only support one mechanism at the moment, so rather than deal with a
+   * We only support two mechanisms at the moment, so rather than deal with a
While there's nothing incorrect about this comment, I have a feeling we won't
support more mechanisms than we can justify having a simple array for anytime
soon =)

Yeah. My goal was mostly to justify the use of the (unusual)
static-length list to future readers.

Thank you so much for the reviews!

--Jacob

Attachments:

v50-0001-Add-OAUTHBEARER-SASL-mechanism.patch
From 1e9e97d298352fe07c4d87c0aa72693286cadd1d Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 23 Oct 2024 09:37:33 -0700
Subject: [PATCH v50 1/4] Add OAUTHBEARER SASL mechanism

DO NOT USE THIS PROOF OF CONCEPT IN PRODUCTION.

Implement OAUTHBEARER (RFC 7628) and OAuth 2.0 Device Authorization
Grants (RFC 8628). This adds a new auth method, oauth, to pg_hba. When
speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

The OAuth issuer must support device authorization. No other OAuth flows
are currently implemented (but clients may provide their own flows).

The client implementation requires libcurl and its development headers.
Pass --with-libcurl/-Dlibcurl=enabled during configuration. The server
implementation does not require additional build-time dependencies, but
an external validator module must be supplied.

Thomas Munro wrote the kqueue() implementation for oauth-curl; thanks!

Several TODOs:
- perform several sanity checks on the OAuth issuer's responses
- improve error debuggability during the OAuth handshake
- fix libcurl initialization thread-safety
- harden the libcurl flow implementation
- fill in documentation stubs
- support protocol "variants" implemented by major providers
- implement more helpful handling of HBA misconfigurations
- use logdetail during auth failures
- ...and more.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   65 +
 configure                                     |  332 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  445 +++
 doc/src/sgml/oauth-validators.sgml            |  414 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |  100 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  894 +++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |  101 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2876 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1153 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   85 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   47 +
 .../modules/oauth_validator/magic_validator.c |   48 +
 src/test/modules/oauth_validator/meson.build  |   85 +
 .../oauth_validator/oauth_hook_client.c       |  293 ++
 .../modules/oauth_validator/t/001_server.pl   |  592 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  143 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 60 files changed, 9256 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/magic_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fffa438cec1..2f5f5ef21a8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -329,6 +329,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -422,8 +423,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -799,8 +802,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..061b13376ac 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,68 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is thread-safe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
+])# PGAC_CHECK_LIBCURL
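
(Side note for reviewers: the two run-time probes above both reduce to a
bitmask test on the features field returned by curl_version_info(). The
decision logic, sketched in Python purely for illustration — the bit values
are the ones I believe curl/curl.h defines, CURL_VERSION_ASYNCHDNS as 1<<7
and CURL_VERSION_THREADSAFE as 1<<30, the latter existing only since curl
7.84.0:)

```python
# Illustrative sketch of the configure probes' decision logic; not real
# curl code. Bit values assumed to match curl/curl.h.
CURL_VERSION_ASYNCHDNS = 1 << 7    # async DNS resolver (c-ares or threaded)
CURL_VERSION_THREADSAFE = 1 << 30  # thread-safe curl_global_init (7.84.0+)

def threadsafe_init_supported(features):
    # Mirrors the HAVE_THREADSAFE_CURL_GLOBAL_INIT probe above.
    return bool(features & CURL_VERSION_THREADSAFE)

def async_dns_supported(features):
    # Mirrors the asynchronous-DNS warning probe above.
    return bool(features & CURL_VERSION_ASYNCHDNS)
```

(The #ifdef CURL_VERSION_THREADSAFE guard in the probe additionally covers
older headers where the flag doesn't exist at all.)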
diff --git a/configure b/configure
index 0ffcaeb4367..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,176 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..6fc0da57f1b 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system hosting the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth resource servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it is the responsibility of the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    a type of OAuth 2.0 access token consisting of an opaque string.  The
+    format of the access token is implementation-specific and is chosen by
+    each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or formatting are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows mapping between user names from the OAuth identity provider
+        and database user names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
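(Commentary between files: the issuer-to-discovery-document rule documented
above is compact enough to state as code. A sketch of the intended behavior
— Python for illustration only, not the patch's implementation, and
trailing-slash normalization is glossed over:)

```python
def discovery_document_url(issuer):
    """Build the discovery document URL from an HBA 'issuer' setting, per
    the rule in the docs above: an issuer containing a /.well-known/ path
    segment is used as-is; otherwise the OpenID Connect Discovery suffix
    is appended to the issuer identifier."""
    if "/.well-known/" in issuer:
        # The issuer already points directly at a discovery document.
        return issuer
    # Default: OpenID Connect Discovery convention.
    return issuer + "/.well-known/openid-configuration"
```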
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 5e4f201e099..6591a54124c 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
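(To double-check my reading of the GUC/HBA interaction documented above,
here is the selection rule as an illustrative Python sketch — not the
patch's code; the function name and error strings are invented:)

```python
def select_validator(configured, hba_validator=None):
    """Resolve the validator library for an 'oauth' HBA entry, following
    the documented rules for oauth_validator_libraries."""
    if not configured:
        # Empty GUC (the default): OAuth connections are refused.
        raise ValueError("no OAuth validator libraries configured")
    if hba_validator is None:
        if len(configured) == 1:
            return configured[0]  # a sole library is the default
        raise ValueError("multiple validators configured; HBA entry must name one")
    if hba_validator not in configured:
        raise ValueError("validator not listed in oauth_validator_libraries")
    return hba_validator
```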
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..3c95c15a1e4 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         Libcurl version 7.61.0 or later is required for this feature.
+         Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        Libcurl version 7.61.0 or later is required for this feature.
+        Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index c49e975b082..ca84226755d 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,107 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> the
+        URL of a document providing a set of OAuth configuration parameters.
+        The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of
+        <ulink url="https://mailarchive.ietf.org/arch/msg/oauth/JIVxFBGsJBVtm7ljwJhPUm3Fr-w/">
+        "mix-up attacks"</ulink> on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (When doing so, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10130,329 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements support for the OAuth 2.0
+   Device Authorization client flow, documented in
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
+   which it will attempt to use by default if the server
+   <link linkend="auth-oauth">requests a bearer token</link> during
+   authentication. This flow can be utilized even if the system running the
+   client application does not have a usable web browser, for example when
+   running a client via <application>SSH</application>. Client applications may implement their own flows
+   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+  </para>
+  <para>
+   The builtin flow will, by default, print a URL to visit and a user code to
+   enter there:
+<programlisting>
+$ psql 'dbname=postgres oauth_issuer=https://example.com oauth_client_id=...'
+Visit https://example.com/device and enter the code: ABCD-EFGH
+</programlisting>
+   (This prompt may be
+   <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
+   The user will then log into their OAuth provider, which will ask whether
+   to allow libpq and the server to perform actions on their behalf. It is always
+   a good idea to carefully review the URL and permissions displayed, to ensure
+   they match expectations, before continuing. Permissions should not be given
+   to untrusted third parties.
+  </para>
+  <para>
+   For an OAuth client flow to be usable, the connection string must at minimum
+   contain <xref linkend="libpq-connect-oauth-issuer"/> and
+   <xref linkend="libpq-connect-oauth-client-id"/>. (These settings are
+   determined by your organization's OAuth provider.) The builtin flow
+   additionally requires the OAuth authorization server to publish a device
+   authorization endpoint.
+  </para>
+
+  <note>
+   <para>
+    The builtin Device Authorization flow is not currently supported on Windows.
+    Custom client flows may still be implemented.
+   </para>
+  </note>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when an action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the code will be
+         manually confirmed by the provider, and the URL lets users continue
+         even if they can't use the non-textual method. For more information,
+         see section 3.3.1 in
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">RFC 8628</ulink>.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG</envar> to the value
+    <literal>UNSAFE</literal>. This functionality is provided for ease of
+    local development and testing only. It does several things that you will
+    not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       prints HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10525,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using <productname>Curl</productname> inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of <productname>Curl</productname> that are built to support thread-safe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..e9d28d3daea
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,414 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so widely, and bearer token validation
+  is heavily dependent on the issuing party, the server cannot check tokens
+  on its own; validator modules provide the integration layer between the
+  server and the OAuth provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the
+   database, verifying the correctness of the implementation is critical. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    Although different modules may take very different approaches to token
+    validation, implementations generally need to perform three separate
+    actions:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then
+       explicitly prompt the user to grant that access during the flow. This
+       gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+      <para>
+       Even if authorization fails, a module may choose to continue to pull
+       authentication information from the token for use in auditing and
+       debugging.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct pg_ident maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
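As a standalone sketch of the interruptibility guideline above (not server code: CHECK_FOR_INTERRUPTS() is stubbed to a no-op here, whereas a real module gets the working macro from the server headers, and the blocking call is simulated):

```c
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>

/* Stand-in for the server's macro; a real module gets this from miscadmin.h. */
#define CHECK_FOR_INTERRUPTS() do { } while (0)

static int attempts = 0;

/* Simulated blocking call: fails twice with EINTR, then succeeds. */
static ssize_t
blocking_read(char *buf, size_t len)
{
	if (attempts++ < 2)
	{
		errno = EINTR;
		return -1;
	}
	snprintf(buf, len, "token-bytes");
	return 11;
}

/*
 * Retry loop in the recommended shape: on EINTR/EAGAIN, give the server a
 * chance to handle authentication timeout or shutdown before retrying.
 */
static ssize_t
read_with_interrupts(char *buf, size_t len)
{
	for (;;)
	{
		ssize_t		n = blocking_read(buf, len);

		if (n >= 0)
			return n;
		if (errno == EINTR || errno == EAGAIN)
		{
			CHECK_FOR_INTERRUPTS();
			continue;
		}
		return -1;
	}
}
```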
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> to determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
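As an illustration of this mode, a <filename>pg_hba.conf</filename> entry might look something like the following; the issuer URL, scope, and validator name are placeholders, and the exact option spellings should be checked against the HBA documentation in this patch:

```
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth issuer="https://issuer.example.com" scope="openid" validator=my_validator delegate_ident_mapping=1
```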
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   OAuth validator modules are dynamically loaded from the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   Modules are loaded on demand when first requested by a login in progress.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an
+   OAuth validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains a magic
+   number and pointers to the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    uint32        magic;            /* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
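As a sketch, a minimal module init might look like this. Because the example is self-contained, the declarations normally provided by the server (in <filename>libpq/oauth.h</filename>) are stubbed locally, and the magic value shown is illustrative only:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Stand-ins for server declarations from libpq/oauth.h; a real module
 * includes that header instead. This magic value is illustrative.
 */
#define PG_OAUTH_VALIDATOR_MAGIC 0x20250220

typedef struct ValidatorModuleState { void *private_data; } ValidatorModuleState;
typedef struct ValidatorModuleResult { bool authorized; char *authn_id; } ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
									 const char *token, const char *role,
									 ValidatorModuleResult *result);

typedef struct OAuthValidatorCallbacks
{
	uint32_t	magic;			/* the server uses its own uint32 typedef */

	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;

static bool
my_validate(const ValidatorModuleState *state,
			const char *token, const char *role,
			ValidatorModuleResult *result)
{
	(void) state;
	(void) token;
	(void) role;

	/* Real token-checking logic goes here; reject everything by default. */
	result->authorized = false;
	result->authn_id = NULL;
	return true;
}

/* Only validate_cb is set; startup_cb and shutdown_cb remain NULL. */
static const OAuthValidatorCallbacks validator_callbacks = {
	.magic = PG_OAUTH_VALIDATOR_MAGIC,
	.validate_cb = my_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	/* Static lifetime, as required by the server. */
	return &validator_callbacks;
}
```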
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    needs to keep state, it can use
+    <structfield>state->private_data</structfield> to store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the SASL
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+                                     const char *token, const char *role,
+                                     ValidatorModuleResult *result);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    The server has ensured that the token is syntactically well-formed, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    set output parameters in the <literal>result</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>result->authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>result->authn_id</structfield>
+    field.  Alternatively, <structfield>result->authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    A validator may return <literal>false</literal> to signal an internal error,
+    in which case any result parameters are ignored and the connection fails.
+    Otherwise the validator should return <literal>true</literal> to indicate
+    that it has processed the token and made an authorization decision.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>result->authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on,
+    <productname>PostgreSQL</productname> will not perform any checks on the value of
+    <structfield>result->authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
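A <function>validate_cb</function> honoring this contract might be sketched as below. The server types are again stubbed locally so that the example stands alone, the token check is a toy lookup table rather than real provider-side validation, and a malloc-based helper stands in for the server's palloc-based <function>pstrdup()</function>:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Local stand-ins for server types declared in libpq/oauth.h. */
typedef struct ValidatorModuleState { void *private_data; } ValidatorModuleState;
typedef struct ValidatorModuleResult { bool authorized; char *authn_id; } ValidatorModuleResult;

/*
 * Toy token table; a real module would verify the token with its provider
 * (signature verification or an introspection endpoint) instead.
 */
static const struct { const char *token; const char *subject; } known_tokens[] = {
	{"sekrit-alice", "alice@example.com"},
	{"sekrit-bob", "bob@example.com"},
};

/* A real module would use the server's pstrdup() instead. */
static char *
dup_string(const char *s)
{
	char	   *copy = malloc(strlen(s) + 1);

	if (copy)
		strcpy(copy, s);
	return copy;
}

static bool
my_validate(const ValidatorModuleState *state,
			const char *token, const char *role,
			ValidatorModuleResult *result)
{
	(void) state;
	(void) role;

	result->authorized = false;
	result->authn_id = NULL;

	for (size_t i = 0; i < sizeof(known_tokens) / sizeof(known_tokens[0]); i++)
	{
		if (strcmp(token, known_tokens[i].token) == 0)
		{
			result->authorized = true;
			result->authn_id = dup_string(known_tokens[i].subject);
			break;
		}
	}

	/* true means "we made a decision", not "the user is authorized". */
	return true;
}
```

With the default HBA setup, the returned <literal>alice@example.com</literal> identity would still have to match the requested role (or a usermap) before login succeeds.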
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any allocated state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement token caching as part of its built-in flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
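Under these rules, the client initial response for a token <literal>abc123</literal> is the byte string <literal>n,,&lt;0x01&gt;auth=Bearer abc123&lt;0x01&gt;&lt;0x01&gt;</literal>. A minimal sketch of building it (the helper name is invented for illustration):

```c
#include <stdio.h>

#define KVSEP "\x01"			/* RFC 7628 key/value separator */

/*
 * Build an OAUTHBEARER client initial response into buf:
 *     n,,<kvsep>auth=Bearer <token><kvsep><kvsep>
 * "n,," is the GS2 header (no channel binding, no authzid); the kvpair
 * list is terminated by an extra kvsep.
 */
static int
build_client_first(char *buf, size_t len, const char *token)
{
	return snprintf(buf, len,
					"n,," KVSEP "auth=Bearer %s" KVSEP KVSEP,
					token);
}
```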
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an error
+      <literal>status</literal> alongside a well-known URI and scopes that the
+      client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing the empty set (a single
+      <literal>0x01</literal> byte) to finish its half of the discovery
+      exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorResponse message to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, the server sends
+      an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 7dd7110318d..574f992ed49 100644
--- a/meson.build
+++ b/meson.build
@@ -855,6 +855,101 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports thread-safe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is thread-safe')
+      elif r.returncode() == 1
+        message('curl_global_init is not thread-safe')
+      else
+        message('curl_global_init failed; assuming not thread-safe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3045,6 +3140,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3721,6 +3820,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..830f2002683
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,894 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://datatracker.ietf.org/doc/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(void *arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message. (In
+	 * practice such configurations are rejected during HBA parsing.)
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correct token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = palloc0(sizeof(ValidatorModuleResult));
+	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
+										 port->user_name, ret))
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("internal error in OAuth validator module"));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
+
+	/*
+	 * The presence and validity of libname have already been established by
+	 * check_oauth_validator, so we don't need to perform more than
+	 * Assert-level checking here.
+	 */
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/*
+	 * Check the magic number, to protect against break-glass scenarios where
+	 * the ABI must change within a major version. load_external_function()
+	 * already checks for compatibility across major versions.
+	 */
+	if (ValidatorCallbacks->magic != PG_OAUTH_VALIDATOR_MAGIC)
+		ereport(ERROR,
+				errmsg("%s module \"%s\": magic number mismatch",
+					   "OAuth validator", libname),
+				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
+						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
+
+	/*
+	 * Make sure all required callbacks are present in the ValidatorCallbacks
+	 * structure. Right now only the validation callback is required.
+	 */
+	if (ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "validate_cb"));
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	validator_module_state->sversion = PG_VERSION_NUM;
+
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	/* Shut down the library before cleaning up its state. */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked during memory context reset.
+ */
+static void
+shutdown_validator_library(void *arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	const char *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		hbaline->oauth_skip_usermap = (strcmp(val, "1") == 0);
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum("delegate_ident_mapping=true");
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 226af43fe23..68833ca5fa3 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4852,6 +4853,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index d472987ed46..ccefd214143 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..5fb559d84b2
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..2f01b669633
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,101 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	/* Holds the server's PG_VERSION_NUM. Reserved for future extensibility. */
+	int			sversion;
+
+	/*
+	 * Private data pointer for use by a validator module. This can be used to
+	 * store state for the module that will be passed to each of its
+	 * callbacks.
+	 */
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	/*
+	 * Should be set to true if the token carries sufficient permissions for
+	 * the bearer to connect.
+	 */
+	bool		authorized;
+
+	/*
+	 * If the token authenticates the user, this should be set to a palloc'd
+	 * string containing the SYSTEM_USER to use for HBA mapping. Consider
+	 * setting this even if result->authorized is false so that DBAs may use
+	 * the logs to match end users to token failures.
+	 *
+	 * This is required if the module is not configured for ident mapping
+	 * delegation. See the validator module documentation for details.
+	 */
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+/*
+ * Validator module callbacks
+ *
+ * These callback functions should be defined by validator modules and returned
+ * via _PG_oauth_validator_module_init().  ValidatorValidateCB is the only
+ * required callback. For more information about the purpose of each callback,
+ * refer to the OAuth validator modules documentation.
+ */
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+									 const char *token, const char *role,
+									 ValidatorModuleResult *result);
+
+/*
+ * Identifies the compiled ABI version of the validator module. Since the server
+ * already enforces the PG_MODULE_MAGIC number for modules across major
+ * versions, this is reserved for emergency use within a stable release line.
+ * May it never need to change.
+ */
+#define PG_OAUTH_VALIDATOR_MAGIC 0x20250207
+
+typedef struct OAuthValidatorCallbacks
+{
+	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+/*
+ * Type of the shared library symbol _PG_oauth_validator_module_init which is
+ * required for all validator modules.  This function will be invoked during
+ * module loading.
+ */
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..db6454090d2 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be thread-safe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..c9aa51b1007
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2876 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * It's generally prudent to set a maximum response size to buffer in memory,
+ * but it's less clear what size to choose. The biggest of our expected
+ * responses is the server metadata JSON, which will only continue to grow in
+ * size; the number of IANA-registered parameters in that document is up to 78
+ * as of February 2025.
+ *
+ * Even if every single parameter were to take up 2k on average (a previously
+ * common limit on the size of a URL), 256k gives us 128 parameter values before
+ * we give up. (That's almost certainly complete overkill in practice; 2-4k
+ * appears to be common among popular providers at the moment.)
+ */
+#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+	int			timerfd;		/* descriptor for signaling async timeouts */
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * In general, none of the error cases below should ever happen if we have
+	 * no bugs above. But if we do hit them, surfacing those errors somehow
+	 * might be the only way to have a chance to debug them.
+	 *
+	 * TODO: At some point it'd be nice to have a standard way to warn about
+	 * teardown failures. Appending to the connection's error message only
+	 * helps if the bug caused a connection failure; otherwise it'll be
+	 * buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 */
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field '%s' before field '%s' was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field '%s' still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
+		 */
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field '%s'",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field '%s' would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length-limited comparison rather than compare the
+	 * whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
+ */
+static double
+parse_json_number(const char *s)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(s, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(false);
+		return 0;
+	}
+
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (parsed >= INT_MAX)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = round(parsed);
+
+	if (parsed >= INT_MAX)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * There is no evidence of any service provider spelling
+		 * verification_uri_complete with "url" instead, so we support only
+		 * the "uri" spelling here.
+		 */
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
+		 *
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		/*- translator: the term "kqueue" (kernel queue) should not be translated */
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	/* Enable/disable the timer itself. */
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
+		   0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
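The millisecond-to-itimerspec split used above, including the "one nanosecond means immediately" trick for a zero timeout, is easy to get wrong. The arithmetic in isolation (a sketch with a hypothetical `ts_sketch` struct standing in for `struct itimerspec`'s `it_value`):

```c
#include <assert.h>
#include <stdint.h>

struct ts_sketch
{
	int64_t		sec;
	int64_t		nsec;
};

/*
 * Split a millisecond timeout the way set_timer() does: zero becomes a
 * 1 ns timeout (since a zero itimerspec would disarm the timer), negative
 * values stay zeroed to disarm, and positive values split into s/ns.
 */
static struct ts_sketch
ms_to_timerspec(long timeout_ms)
{
	struct ts_sketch t = {0, 0};

	if (timeout_ms == 0)
		t.nsec = 1;				/* fire "immediately" */
	else if (timeout_ms > 0)
	{
		t.sec = timeout_ms / 1000;
		t.nsec = (timeout_ms % 1000) * 1000000;
	}

	return t;
}
```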
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "getting timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "checking kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	initPQExpBuffer(&buf);
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
+	 */
+	for (size_t i = 0; i < size; i++)
+	{
+		char		c = data[i];
+
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "[libcurl] %s ", prefix);
+			printed_prefix = true;
+		}
+
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's not
+			 * helpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", (unsigned char) c);
+
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
+	}
+
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
+	return 0;
+}
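The escaping policy in the loop above (printable ASCII passes through, everything else becomes a `<XX>` hex escape) can be isolated into a tiny helper for illustration. `escape_byte` is a made-up name; note the `unsigned` handling, which keeps bytes above 0x7F from sign-extending into `<FFFFFF80>`-style output:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Append one byte of debug output to the NUL-terminated string in buf
 * (which must have at least 5 bytes of free space): printable ASCII is
 * copied as-is, anything else becomes a <XX> hex escape.
 */
static void
escape_byte(char *buf, unsigned char c)
{
	size_t		len = strlen(buf);

	if (c >= 0x20 && c <= 0x7E)
	{
		buf[len] = (char) c;
		buf[len + 1] = '\0';
	}
	else
		snprintf(buf + len, 5, "<%02X>", (unsigned int) c);
}
```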
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
+	 *
+	 * TODO: Perhaps there's a clever way to warn the user about synchronous
+	 * DNS at runtime too? It's not immediately clear how to do that in a
+	 * helpful way: for many standard single-threaded use cases, the user
+	 * might not care at all, so spraying warnings to stderr would probably do
+	 * more harm than good.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving error information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * keep these two options in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which by default is 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	/* The first parameter to curl_easy_escape is deprecated by Curl */
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
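The post-processing above is a plain strstr() scan. The same search-and-replace as a self-contained helper over a fixed-size buffer (`plusify` is a hypothetical name; the real code appends into a PQExpBuffer instead):

```c
#include <assert.h>
#include <string.h>

/*
 * Copy src into dst (of size dstlen), replacing each "%20" with '+', the
 * way append_urlencoded() post-processes curl_easy_escape() output.
 * Returns dst, or NULL if the result doesn't fit.
 */
static char *
plusify(char *dst, size_t dstlen, const char *src)
{
	const char *match;
	size_t		used = 0;

	while ((match = strstr(src, "%20")) != NULL)
	{
		size_t		chunk = (size_t) (match - src);

		if (used + chunk + 1 >= dstlen)
			return NULL;

		/* Copy the unmatched portion, then the plus sign. */
		memcpy(dst + used, src, chunk);
		used += chunk;
		dst[used++] = '+';

		/* Keep searching after the match. */
		src = match + 3;		/* strlen("%20") */
	}

	/* Copy the remainder, including the terminating NUL. */
	if (used + strlen(src) + 1 > dstlen)
		return NULL;
	strcpy(dst + used, src);

	return dst;
}
```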
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
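"Simple string comparison" really does mean byte-for-byte: a trailing slash, a default port, or a case difference in the host all count as mismatches. A minimal illustration of the rule (not the patch's code):

```c
#include <assert.h>
#include <string.h>

/*
 * Strict issuer-identifier comparison per RFC 9207 and OIDC Discovery
 * Sec. 4.3: simple string comparison (RFC 3986, Sec. 6.2.1), with no
 * normalization of scheme, host case, ports, or paths.
 */
static int
issuer_matches(const char *configured, const char *discovered)
{
	return strcmp(configured, discovered) == 0;
}
```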
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
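The scheme check is a case-insensitive prefix match, since URI schemes are case-insensitive per RFC 3986, so `HTTPS://...` is also accepted. A standalone equivalent using POSIX strncasecmp() where the patch uses pg_strncasecmp():

```c
#include <assert.h>
#include <string.h>
#include <strings.h>

#define HTTPS_SCHEME "https://"

/* Return 1 if the endpoint URL uses the https scheme (case-insensitively). */
static int
is_https_url(const char *url)
{
	return strncasecmp(url, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0;
}
```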
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
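The urlencode() used above lives elsewhere in the patch; for illustration, a minimal percent-encoding sketch under the same assumption the comment makes (7-bit ASCII input, per RFC 6749 Appendix A):

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Minimal percent-encoder sketch; illustration only, not the patch's
 * urlencode(). Assumes 7-bit ASCII input.
 */
static char *
sketch_urlencode(const char *in)
{
	char	   *out = malloc(strlen(in) * 3 + 1);	/* worst case: all "%XX" */
	char	   *p = out;

	if (!out)
		return NULL;

	for (; *in; in++)
	{
		if (isalnum((unsigned char) *in) || strchr("-._~", *in))
			*p++ = *in;			/* RFC 3986 unreserved characters pass through */
		else
			p += sprintf(p, "%%%02X", (unsigned char) *in);
	}
	*p = '\0';
	return out;
}
```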
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		/* Copy the token error into the context error buffer */
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations using 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
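The slow_down arithmetic above can be sketched in isolation. RFC 8628, Sec. 3.5 requires the client to add five seconds to its polling interval, permanently; this hypothetical helper returns -1 on overflow, which the caller would treat as fatal, as handle_token_response() does:

```c
#include <limits.h>

/* Sketch of the permanent five-second slow_down bump, with overflow check. */
static int
sketch_next_interval(int interval)
{
	if (interval > INT_MAX - 5)
		return -1;				/* overflow: bail out */
	return interval + 5;
}
```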
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * thread-safe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
+								"\tCurl initialization was reported thread-safe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+		actx->timerfd = -1;
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				if (!timer_expired(actx))
+				{
+					conn->altsock = actx->timerfd;
+					return PGRES_POLLING_READING;
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer.
+				 */
+				conn->altsock = actx->timerfd;
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..24448c3e209
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1153 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise,
+ * conn->oauth_token must be set; it will be sent as the connection's bearer
+ * token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
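To make the wire format concrete, a sketch of the non-discovery message built above (RFC 7628, Sec. 3.1): a GS2 header ("n,," -- no channel binding), then key/value pairs delimited by 0x01, terminated by a double delimiter. Error handling is omitted here for brevity:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the OAUTHBEARER client initial response layout. */
static char *
sketch_initial_response(const char *token)
{
	const char *fmt = "n,,\x01" "auth=Bearer %s\x01\x01";
	size_t		len = strlen(fmt) + strlen(token) + 1;
	char	   *resp = malloc(len);

	if (resp)
		snprintf(resp, len, fmt, token);
	return resp;
}
```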
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	/* Only top-level keys are considered. */
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert and don't continue any further for production builds.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of
+		 * the path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
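For reference, the error result parsed above is a JSON object as defined in RFC 7628, Sec. 3.2.2. A server steering the client toward discovery might send something shaped like this (all values illustrative):

```json
{
  "status": "invalid_token",
  "openid-configuration": "https://example.com/.well-known/openid-configuration",
  "scope": "openid postgres"
}
```

Only `status` is required; `openid-configuration` and `scope` feed the retry logic above when present.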
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
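For orientation, the on-the-wire messages involved in this exchange follow RFC 7628's framing. Roughly, with `^A` standing in for the 0x01 kvsep byte and the token value invented for illustration:

```text
Client initial response (token in hand):
    n,,^Aauth=Bearer eyJhbGciOi...^A^A

Server error challenge (JSON body, parsed by handle_oauth_sasl_error):
    {"status": "invalid_token", ...}

Client dummy response required after an error (a single kvsep):
    ^A
```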
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,86 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 89e78b7d114..4e4be3fa511 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index a57077b682e..2b057451473 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..bbd2a98023b
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator magic_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..54eac5b117e
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder require 'oauth' to be present in PG_TEST_EXTRA, since
+HTTP servers listening on localhost with TCP/IP sockets will be started. A
+Python installation is required to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..a4c7a4451d3
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, whose
+ *	  validation callback is guaranteed to always fail
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool fail_token(const ValidatorModuleState *state,
+					   const char *token,
+					   const char *role,
+					   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+fail_token(const ValidatorModuleState *state,
+		   const char *token, const char *role,
+		   ValidatorModuleResult *res)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/magic_validator.c b/src/test/modules/oauth_validator/magic_validator.c
new file mode 100644
index 00000000000..5ce68cdf405
--- /dev/null
+++ b/src/test/modules/oauth_validator/magic_validator.c
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * magic_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, which
+ *	  should fail due to using the wrong PG_OAUTH_VALIDATOR_MAGIC marker
+ *	  and thus the wrong ABI version
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/magic_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	0xdeadbeef,
+
+	.validate_cb = validate_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	elog(FATAL, "magic_validator: this should be unreachable");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..36d1b26369f
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,85 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+magic_validator_sources = files(
+  'magic_validator.c',
+)
+
+if host_system == 'windows'
+  magic_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'magic_validator',
+    '--FILEDESC', 'magic_validator - ABI incompatible OAuth validator module',])
+endif
+
+magic_validator = shared_module('magic_validator',
+  magic_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += magic_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..9f553792c05
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,293 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static bool stress_async = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..dada89e95cc
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,592 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($windows_os)
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike(
+	$stderr,
+	qr/connection to database failed/,
+	"stress-async: stderr matches");
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+#
+# Test ABI compatibility magic marker
+#
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'magic_validator'\n");
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=magic_validator      issuer="$issuer"           scope="openid postgres"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"magic_validator is used for test",
+	expected_stderr => qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/);
+$node->stop;
+
+done_testing();
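For reference, the connstr() helper used throughout this file (defined earlier in the test and not visible in this hunk) is assumed to smuggle its test parameters to the mock server by Base64-encoding JSON into the client_id, matching the decoding done in oauth_server.py's do_POST(). A sketch of that round trip:

```python
import base64
import json


def encode_test_params(**params) -> str:
    # The parameterized mock issuer reads the client_id as Base64(JSON);
    # this is how a test requests a specific server misbehavior (stage,
    # error_code, interval, retries, ...).
    return base64.b64encode(json.dumps(params).encode()).decode()


def decode_test_params(client_id: str) -> dict:
    # Mirror of the server side: json.loads(base64.b64decode(client_id))
    return json.loads(base64.b64decode(client_id))
```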
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..ab83258d736
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..655b2870b0b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization
+server daemon in t/oauth_server.py. (Python has a fairly usable HTTP server in
+its standard library, so the implementation was ported from Perl to Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
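The run() handshake above depends on the daemon closing its stdout once the port has been written, so the parent can simply read to EOF instead of guessing a byte count. The same pattern from the parent's side, sketched in Python (`read_advertised_port` is an illustrative helper, not part of the patch):

```python
import subprocess


def read_advertised_port(argv):
    # Launch the daemon and read its stdout to EOF; the child closes the
    # stream right after printing its port, so read() returns promptly
    # even if the daemon keeps running afterwards.
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE)
    out = proc.stdout.read()
    return proc, int(out.strip())
```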
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..4faf3323d38
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
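The _check_authn() handler above expects RFC 6749 Section 2.3.1 client_secret_basic credentials: each credential is form-urlencoded before the pair is Base64-encoded for the HTTP Basic scheme. The client side of that encoding, sketched for illustration (`basic_auth_header` is a hypothetical name, not libpq's implementation):

```python
import base64
import urllib.parse


def basic_auth_header(client_id: str, client_secret: str) -> str:
    # Form-urlencode each credential, join with ':', then Base64-encode
    # the pair, as the mock server's _check_authn() verifies.
    user = urllib.parse.quote_plus(client_id)
    password = urllib.parse.quote_plus(client_secret)
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {creds}"
```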
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..b2e5d182e1b
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,143 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for serverside OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/*
+	 * Make sure the server is correctly setting sversion. (Real modules
+	 * should not do this; it would defeat upgrade compatibility.)
+	 */
+	if (state->sversion != PG_VERSION_NUM)
+		elog(ERROR, "oauth_validator: sversion set to %d", state->sversion);
+
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return true;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index b6c170ac249..ed8ef8ddc89 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1952,6 +1959,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3091,6 +3099,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3488,6 +3498,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1

Attachment: v50-0002-fixup-Add-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From eee0ee6d369e6baba327cd73fb7238eb0e1cb881 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 14 Feb 2025 16:50:45 -0800
Subject: [PATCH v50 2/4] fixup! Add OAUTHBEARER SASL mechanism

---
 doc/src/sgml/client-auth.sgml                 |  2 +-
 doc/src/sgml/libpq.sgml                       |  2 +-
 doc/src/sgml/oauth-validators.sgml            |  4 ++--
 src/backend/libpq/auth-oauth.c                |  8 ++++----
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 19 +++++++++++++------
 src/test/modules/oauth_validator/Makefile     |  2 +-
 .../modules/oauth_validator/magic_validator.c |  2 +-
 .../modules/oauth_validator/t/001_server.pl   |  6 ++++--
 8 files changed, 27 insertions(+), 18 deletions(-)

diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 6fc0da57f1b..832b616a7bb 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -2409,7 +2409,7 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
       <listitem>
        <para>
         The organization, product vendor, or other entity which develops and/or
-        administers the OAuth resource servers and clients for a given application.
+        administers the OAuth authorization servers and clients for a given application.
         Different providers typically choose different implementation details
         for their OAuth systems; a client of one provider is not generally
         guaranteed to have access to the servers of another.
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ca84226755d..ddb3596df83 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10527,7 +10527,7 @@ int PQisthreadsafe();
   </para>
 
   <para>
-   Similarly, if you are using <productname></productname>Curl</productname> inside your application,
+   Similarly, if you are using <productname>Curl</productname> inside your application,
    <emphasis>and</emphasis> you do not already
    <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
    libcurl globally</ulink> before starting new threads, you will need to
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index e9d28d3daea..356f11d3bd8 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -21,7 +21,7 @@
  <warning>
   <para>
    Since a misbehaving validator might let unauthorized users into the database,
-   validating the correctness of the implementation is critical. See
+   correct implementation is crucial for server safety. See
    <xref linkend="oauth-validator-design"/> for design considerations.
   </para>
  </warning>
@@ -357,7 +357,7 @@ typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
 </programlisting>
 
     <replaceable>token</replaceable> will contain the bearer token to validate.
-    <application>libpq</application> has ensured that the token is well-formed syntactically, but no
+    <application>PostgreSQL</application> has ensured that the token is well-formed syntactically, but no
     other validation has been performed.  <replaceable>role</replaceable> will
     contain the role the user has requested to log in as.  The callback must
     set output parameters in the <literal>result</literal> struct, which is
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 830f2002683..27f7af7be00 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -741,9 +741,9 @@ load_validator_library(const char *libname)
 	MemoryContextCallback *mcb;
 
 	/*
-	 * Thre presence, and validity, of libname has already been established by
-	 * check_oauth_validator so we don't need to perform more than Assert level
-	 * checking here.
+	 * The presence, and validity, of libname has already been established by
+	 * check_oauth_validator so we don't need to perform more than Assert
+	 * level checking here.
 	 */
 	Assert(libname && *libname);
 
@@ -781,7 +781,7 @@ load_validator_library(const char *libname)
 	 */
 	if (ValidatorCallbacks->validate_cb == NULL)
 		ereport(ERROR,
-				errmsg("%s module \"%s\" must define the symbol %s",
+				errmsg("%s module \"%s\" must provide a %s callback",
 					   "OAuth validator", libname, "validate_cb"));
 
 	/* Allocate memory for validator library private state data */
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index c9aa51b1007..a80e2047bb7 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -211,7 +211,7 @@ struct async_ctx
 	 * something like the following, with errctx and/or curl_err omitted when
 	 * absent:
 	 *
-	 *     connection to server ... failed: errctx: errbuf (curl_err)
+	 *     connection to server ... failed: errctx: errbuf (libcurl: curl_err)
 	 */
 	const char *errctx;			/* not freed; must point to static allocation */
 	PQExpBufferData errbuf;
@@ -930,7 +930,7 @@ parse_interval(struct async_ctx *actx, const char *interval_str)
  * Similar to parse_interval, but we have even fewer requirements for reasonable
  * values since we don't use the expiration time directly (it's passed to the
  * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
- * something with it). We simply round and clamp to int range.
+ * something with it). We simply round down and clamp to int range.
  */
 static int
 parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
@@ -938,7 +938,7 @@ parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
 	double		parsed;
 
 	parsed = parse_json_number(expires_in_str);
-	parsed = round(parsed);
+	parsed = floor(parsed);
 
 	if (parsed >= INT_MAX)
 		return INT_MAX;
@@ -968,7 +968,8 @@ parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 
 		/*
 		 * There is no evidence of verification_uri_complete being spelled
-		 * with "url" instead with any service provider, so only support "uri".
+		 * with "url" instead with any service provider, so only support
+		 * "uri".
 		 */
 		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
 		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
@@ -1226,6 +1227,8 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 
 		return -1;
 	}
+
+	return 0;
 #endif
 #ifdef HAVE_SYS_EVENT_H
 	struct async_ctx *actx = ctx;
@@ -1307,9 +1310,12 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 			return -1;
 		}
 	}
-#endif
 
 	return 0;
+#endif
+
+	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
+	return -1;
 }
 
 /*
@@ -2808,7 +2814,8 @@ error_return:
 	{
 		size_t		len;
 
-		appendPQExpBuffer(&conn->errorMessage, " (%s)", actx->curl_err);
+		appendPQExpBuffer(&conn->errorMessage,
+						  " (libcurl: %s)", actx->curl_err);
 
 		/* Sometimes libcurl adds a newline to the error buffer. :( */
 		len = conn->errorMessage.len;
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index bbd2a98023b..05b9f06ed73 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -9,7 +9,7 @@
 #
 #-------------------------------------------------------------------------
 
-MODULES = validator fail_validator
+MODULES = validator fail_validator magic_validator
 PGFILEDESC = "validator - test OAuth validator module"
 
 PROGRAM = oauth_hook_client
diff --git a/src/test/modules/oauth_validator/magic_validator.c b/src/test/modules/oauth_validator/magic_validator.c
index 5ce68cdf405..9dc55b602e3 100644
--- a/src/test/modules/oauth_validator/magic_validator.c
+++ b/src/test/modules/oauth_validator/magic_validator.c
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
- * src/test/modules/oauth_validator/fail_validator.c
+ * src/test/modules/oauth_validator/magic_validator.c
  *
  *-------------------------------------------------------------------------
  */
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index dada89e95cc..6fa59fbeb25 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -570,7 +570,7 @@ $node->connect_fails(
 	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
 
 #
-# Test ABI compatability magic marker
+# Test ABI compatibility magic marker
 #
 $node->append_conf('postgresql.conf',
 	"oauth_validator_libraries = 'magic_validator'\n");
@@ -586,7 +586,9 @@ $log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
 $node->connect_fails(
 	"user=test dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
 	"magic_validator is used for $user",
-	expected_stderr => qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/);
+	expected_stderr =>
+	  qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/
+);
 $node->stop;
 
 done_testing();
-- 
2.34.1

Attachment: v50-0003-fixup-changes-to-sanitize_char-et-al.patch (application/octet-stream)
From bcfb09a15e3b46a7b2f3a19324be9b5f8d81740c Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 14 Feb 2025 14:36:54 -0800
Subject: [PATCH v50 3/4] fixup! changes to sanitize_char et al

---
 src/backend/libpq/auth-oauth.c | 55 +++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
index 27f7af7be00..e4aa6f357b5 100644
--- a/src/backend/libpq/auth-oauth.c
+++ b/src/backend/libpq/auth-oauth.c
@@ -70,7 +70,7 @@ struct oauth_ctx
 	const char *scope;
 };
 
-static char *sanitize_char(char c);
+static void sanitize_char(char c, char *buf, size_t buflen);
 static char *parse_kvpairs_for_auth(char **input);
 static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
 static bool validate(Port *port, const char *auth);
@@ -139,6 +139,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	char		cbind_flag;
 	char	   *auth;
 	int			status;
+	char		errmsgbuf[5];
 
 	struct oauth_ctx *ctx = opaq;
 
@@ -162,6 +163,7 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 
 	/*
 	 * Check that the input length agrees with the string length of the input.
+	 * Possible reasons for discrepancies include embedded nulls in the string.
 	 */
 	if (inputlen == 0)
 		ereport(ERROR,
@@ -223,22 +225,29 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 
 		case 'y':				/* fall through */
 		case 'n':
-			p++;
+			if (!*(++p))
+				goto endofmessage;
+
 			if (*p != ',')
+			{
+				sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 				ereport(ERROR,
 						errcode(ERRCODE_PROTOCOL_VIOLATION),
 						errmsg("malformed OAUTHBEARER message"),
 						errdetail("Comma expected, but found character \"%s\".",
-								  sanitize_char(*p)));
-			p++;
+								  errmsgbuf));
+			}
+			if (!*(++p))
+				goto endofmessage;
 			break;
 
 		default:
+			sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 			ereport(ERROR,
 					errcode(ERRCODE_PROTOCOL_VIOLATION),
 					errmsg("malformed OAUTHBEARER message"),
 					errdetail("Unexpected channel-binding flag \"%s\".",
-							  sanitize_char(cbind_flag)));
+							  errmsgbuf));
 	}
 
 	/*
@@ -249,21 +258,29 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				errmsg("client uses authorization identity, but it is not supported"));
 	if (*p != ',')
+	{
+		sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 		ereport(ERROR,
 				errcode(ERRCODE_PROTOCOL_VIOLATION),
 				errmsg("malformed OAUTHBEARER message"),
 				errdetail("Unexpected attribute \"%s\" in client-first-message.",
-						  sanitize_char(*p)));
-	p++;
+						  errmsgbuf));
+	}
+	if (!*(++p))
+		goto endofmessage;
 
 	/* All remaining fields are separated by the RFC's kvsep (\x01). */
 	if (*p != KVSEP)
+	{
+		sanitize_char(*p, errmsgbuf, sizeof(errmsgbuf));
 		ereport(ERROR,
 				errcode(ERRCODE_PROTOCOL_VIOLATION),
 				errmsg("malformed OAUTHBEARER message"),
 				errdetail("Key-value separator expected, but found character \"%s\".",
-						  sanitize_char(*p)));
-	p++;
+						  errmsgbuf));
+	}
+	if (!*(++p))
+		goto endofmessage;
 
 	auth = parse_kvpairs_for_auth(&p);
 	if (!auth)
@@ -296,6 +313,13 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
 	explicit_bzero(input_copy, inputlen);
 
 	return status;
+
+endofmessage:
+	explicit_bzero(input_copy, inputlen);
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"));
+	pg_unreachable();
 }
 
 /*
@@ -303,19 +327,14 @@ oauth_exchange(void *opaq, const char *input, int inputlen,
  *
  * If it's a printable ASCII character, print it as a single character.
  * otherwise, print it in hex.
- *
- * The returned pointer points to a static buffer.
  */
-static char *
-sanitize_char(char c)
+static void
+sanitize_char(char c, char *buf, size_t buflen)
 {
-	static char buf[5];
-
 	if (c >= 0x21 && c <= 0x7E)
-		snprintf(buf, sizeof(buf), "'%c'", c);
+		snprintf(buf, buflen, "'%c'", c);
 	else
-		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
-	return buf;
+		snprintf(buf, buflen, "0x%02x", (unsigned char) c);
 }
 
 /*
-- 
2.34.1

Attachment: v50-0004-XXX-fix-libcurl-link-error.patch (application/octet-stream)
From 99424d85a46bcfc8f3b7ac8cb463c27eb3d7e653 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 13 Jan 2025 12:31:59 -0800
Subject: [PATCH v50 4/4] XXX fix libcurl link error

The ftp/curl port appears to be missing a minimum version dependency on
libssh2, so the following starts showing up after upgrading to curl
8.11.1_1:

    libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

But 13.3 is EOL, so it's not clear if anyone would be interested in a
bug report, and a FreeBSD 14 Cirrus image is in progress. Hack past it
for now.
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2f5f5ef21a8..91b51142d2e 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

#208Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#207)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 15 Feb 2025, at 02:14, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is critical. See
"Don't make any bugs" isn't very helpful advice =) Expanded on it slightly.

Hmm, I think the overloading of "validate" in the replacement text
could be confusing. I guess my point is less "don't write bugs" and
more "a bug here has extreme impact"? I've taken another shot at it;
see what you think.

I'm not sure we're at the right wording still. I have a feeling this topic is
worth a longer paragraph describing the potential severity of various error
conditions, but I don't think that needs to go in now, we can iterate on that
over time as well.

+ The server has ensured that the token is well-formed syntactically, but no
"server" is an overloaded nomenclature here, perhaps using libpq instead to
clearly indicate that it's postgres and not an OAuth server.

I've replaced this with "PostgreSQL" to match up with Peter's earlier
feedback (we were using "libpq" to describe the backend and he wanted
to avoid that).

Ah yes, much better.

+sanitize_char(char c)
+{
+ static char buf[5];
With the multithreading work on the horizon we should probably avoid static
variables like these to not create work for our future selves? The code isn't
as neat when passing in a buffer/length but it avoids the need for a static or
threadlocal variable. Or am I overthinking this?

This is the only part of the feedback patch that I'm not a fan of,
mostly because it begins to diverge heavily from the SCRAM code it
copied from. I don't disagree with the goal of getting rid of the
static buffers, but I would like to see them modified at the same time
so that we can refactor easily if/when a third SASL mechanism shows
up. (Maybe with a psprintf() rather than buffers?)

Fair enough, I can get behind that.

+ p++;
+ if (*p != ',')
In the SASL exchange, are we certain that a rogue client cannot inject a
message which trips us past the end of the string? Should we double-check
when advancing p across the message?

The existing != checks will bail out if they get to the end of the
string. It relies on byte-at-a-time advancement for safety, as well as
the SASL code higher in the stack that ensures that the input buffer
is always null terminated. (SCRAM relies on that too.) If we ever
jumped farther than a byte, we'd need stronger checks, but at the
moment I don't think this change helps us.

Thanks for clarifying.

In load_validator_library we don't explicitly verify that the required callback
is defined in the returned structure, which seems like a cheap enough belts and
suspenders level check.

Yeah, there's a later check at time of use, but it's not as
user-friendly. I've adjusted the new error message to make it a bit
closer to the logical plugin wording.

-    errmsg("%s module \"%s\" must define the symbol %s",
+    errmsg("%s module \"%s\" must provide a %s callback",

My rationale for picking the former message was that it's the same as we have
earlier in the file, so it didn't add more translator work for (ideally) rarely
used errors.

That being said, I agree that should probably align these messages with the
counterparts for archive modules and logical plugins, which currently use the
following:

errmsg("archive modules have to define the symbol %s", "_PG_archive_module_init")
errmsg("archive modules must register an archive callback")
elog(ERROR, "output plugins have to declare the _PG_output_plugin_init symbol");
elog(ERROR, "output plugins have to register a begin callback");

It's a bit surprising to me that we use elog() for output plugins; while these
errors should be rare, they can be triggered by third-party code, so it seems
more appropriate to use ereport() IMHO. Given that these are so similar, we
should be able to reduce the translator burden by providing more or less just
two messages.

Since this will be reaching into other parts of the code, it should be its own
patch though, so for now let's go with what you proposed and we can revisit this.

+       if (parsed < 1)
+               return actx->debugging ? 0 : 1;
Is 1 second a sane lower bound on interval for all situations?  I'm starting to
wonder if we should be more conservative here, or even make it configurable in
some way? The default if not set of 5 seconds is quite a lot higher than 1.

Mmm, maybe it should be made configurable, but one second seems like a
long time from a CPU perspective. Maybe it would be applicable to
embedded clients? But only if some provider out there actually starts
using smaller intervals than their clients can stand... Should we wait
to hear from someone who is interested in configuring it?

I indeed think we should await feedback, making it configurable isn't exactly
free so I hesitate to do it if nobody wants it.

The attached v51 squashes your commits together, discarding the changes
discussed here, and takes a stab at a commit message for these, as this is
getting very close to being able to go in. There are no additional changes.

--
Daniel Gustafsson

Attachments:

Attachment: v51-0002-cirrus-Temporarily-fix-libcurl-link-error.patch (application/octet-stream)
From 9161ce3fb6e993686ec4cd4bccb0ca037f8a9316 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Mon, 17 Feb 2025 12:57:38 +0100
Subject: [PATCH v51 2/2] cirrus: Temporarily fix libcurl link error

On FreeBSD the ftp/curl port appears to be missing a minimum
version dependency on libssh2, so the following starts showing
up after upgrading to curl 8.11.1_1:

  libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

Awaiting an upgrade of the FreeBSD CI images to version 14, work
around the issue.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+kZAka0sdxCOBxsQc2ozEZGZKHWU_9nrPXg3sG1NJ-zJw@mail.gmail.com
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2f5f5ef21a8..91b51142d2e 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.39.3 (Apple Git-146)

Attachment: v51-0001-Add-support-for-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 7cf50dabc6a33462cb9fe28841af8d02f93b2692 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Mon, 17 Feb 2025 12:57:34 +0100
Subject: [PATCH v51 1/2] Add support for OAUTHBEARER SASL mechanism

This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device
Authorization Grants, RFC 8628.  In order to use this there is a
new pg_hba auth method called oauth.  When speaking to an OAuth-
enabled server, it looks a bit like this:

  $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
  Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

Device authorization is currently the only supported flow so the
OAuth issuer must support that in order for users to authenticate.
Third-party clients may however extend this and provide their own
flows.

In order for validation to happen server side a new framework for
plugging in OAuth validation modules is added.  As validation is
implementation specific, with no default specified in the standard,
postgres cannot ship with one built-in.  Each pg_hba entry can
specify one, or more, validators or be left blank for the validator
installed as default.

This adds a requirement on libcurl for the client side support,
which is optional to build, but the server side has no additional
build requirements.  In order to run the tests, Python is required
as this adds an HTTPS server written in Python.  Tests are gated
behind PG_TEST_EXTRA as they open ports.

This patch has been a multi-year project with many contributors
involved on review and in-depth discussion: Michael Paquier,
Heikki Linnakangas, Zhihong Yu, Mahendrakar s, Andrey Chudnovsky
and Stephen Frost to name a few.  While Jacob Champion is the main
author there have been some levels of hacking by others. Daniel
Gustafsson contributed the validation module and various bits and
pieces; Thomas Munro wrote the client side support for kqueue.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   65 +
 configure                                     |  332 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  445 +++
 doc/src/sgml/oauth-validators.sgml            |  414 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |  100 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  894 +++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |  101 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2883 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1153 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   85 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   47 +
 .../modules/oauth_validator/magic_validator.c |   48 +
 src/test/modules/oauth_validator/meson.build  |   85 +
 .../oauth_validator/oauth_hook_client.c       |  293 ++
 .../modules/oauth_validator/t/001_server.pl   |  594 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  143 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 60 files changed, 9265 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/magic_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fffa438cec1..2f5f5ef21a8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -329,6 +329,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -422,8 +423,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -799,8 +802,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..061b13376ac 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,68 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is thread-safe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb4367..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,176 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..832b616a7bb 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system hosting the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth authorization servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it seems to be in
+        wide use colloquially. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it is the responsibility of the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, expressed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or formatting are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identities and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 336630ce417..c4dfa8ba039 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
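As an illustration of the new GUC (the library name and issuer URL below are hypothetical; `issuer` and `validator` are the HBA options referenced in the text), a server with a single validator might be configured as:

```
# postgresql.conf
oauth_validator_libraries = 'my_oauth_validator'

# pg_hba.conf -- with only one configured library, it is used by
# default, so no explicit validator option is needed
host  all  all  samehost  oauth  issuer="https://issuer.example.com" scope="openid"
```

With more than one library listed, each `oauth` HBA entry would additionally need to carry e.g. `validator=my_oauth_validator`.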
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..3c95c15a1e4 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         Libcurl version 7.61.0 or later is required for this feature.
+         Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        Libcurl version 7.61.0 or later is required for this feature.
+        Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is auto.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index c49e975b082..ddb3596df83 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,107 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of
+        <ulink url="https://mailarchive.ietf.org/arch/msg/oauth/JIVxFBGsJBVtm7ljwJhPUm3Fr-w/">
+        "mix-up attacks"</ulink> on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10130,329 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements support for the OAuth 2.0 Device Authorization client flow,
+   documented in
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
+   which it will attempt to use by default if the server
+   <link linkend="auth-oauth">requests a bearer token</link> during
+   authentication. This flow can be used even if the system running the
+   client application does not have a usable web browser, for example when
+   running a client via <application>SSH</application>. Client applications
+   may implement their own flows
+   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+  </para>
+  <para>
+   The builtin flow will, by default, print a URL to visit and a user code to
+   enter there:
+<programlisting>
+$ psql 'dbname=postgres oauth_issuer=https://example.com oauth_client_id=...'
+Visit https://example.com/device and enter the code: ABCD-EFGH
+</programlisting>
+   (This prompt may be
+   <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
+   The user will then log into their OAuth provider, which will ask whether
+   to allow libpq and the server to perform actions on their behalf. It is always
+   a good idea to carefully review the URL and permissions displayed, to ensure
+   they match expectations, before continuing. Permissions should not be given
+   to untrusted third parties.
+  </para>
+  <para>
+   For an OAuth client flow to be usable, the connection string must at minimum
+   contain <xref linkend="libpq-connect-oauth-issuer"/> and
+   <xref linkend="libpq-connect-oauth-client-id"/>. (These settings are
+   determined by your organization's OAuth provider.) The builtin flow
+   additionally requires the OAuth authorization server to publish a device
+   authorization endpoint.
+  </para>
+
+  <note>
+   <para>
+    The builtin Device Authorization flow is not currently supported on Windows.
+    Custom client flows may still be implemented.
+   </para>
+  </note>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when an action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
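To make the chaining behavior concrete, here is a minimal sketch of a cooperative hook. A real application would include `<libpq-fe.h>` for these declarations; the stand-in types below exist only so the sketch is self-contained, and the enum values mirror the authdata types documented below.

```c
typedef struct PGconn PGconn;   /* opaque connection handle */
typedef enum
{
    PQAUTHDATA_PROMPT_OAUTH_DEVICE,
    PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;
typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);

/* Saved at installation time via PQgetAuthDataHook(). */
static PQauthDataHook_type prev_hook;

static int
my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
{
    if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
    {
        /* ... display the device prompt in our own UI ... */
        return 1;               /* positive return: handled successfully */
    }

    /* Delegate any request we don't recognize to the previous hook. */
    return prev_hook(type, conn, data);
}
```

At startup, before opening any connections, the application would run `prev_hook = PQgetAuthDataHook(); PQsetAuthDataHook(my_auth_data_hook);`.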
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the code will be
+         manually confirmed by the provider, and the URL lets users continue
+         even if they can't use the non-textual method. For more information,
+         see section 3.3.1 in
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">RFC 8628</ulink>.
+        </para>
+       </listitem>
+      </varlistentry>
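A replacement prompt might look like the sketch below. The struct definition mirrors the `PGpromptOAuthDevice` synopsis above (real code would take it from `<libpq-fe.h>`); a real hook would receive it through the `data` pointer after checking for `PQAUTHDATA_PROMPT_OAUTH_DEVICE`.

```c
#include <stdio.h>

typedef struct _PGpromptOAuthDevice
{
    const char *verification_uri;   /* verification URI to visit */
    const char *user_code;          /* user code to enter */
    const char *verification_uri_complete;  /* optional, or NULL */
    int         expires_in;         /* seconds until user code expires */
} PGpromptOAuthDevice;

static int
prompt_oauth_device(const PGpromptOAuthDevice *prompt)
{
    /*
     * A GUI might render verification_uri_complete as a QR code here when
     * it is non-NULL; the plain URL and code are still printed so the
     * user can fall back to manual entry.
     */
    fprintf(stderr, "Visit %s and enter the code: %s (valid for %d seconds)\n",
            prompt->verification_uri, prompt->user_code, prompt->expires_in);
    return 1;                   /* positive return: prompt handled */
}
```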
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       prints HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
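For example, a local test run against a development issuer (all values illustrative) might look like:

```
$ PGOAUTHDEBUG=UNSAFE psql 'dbname=postgres oauth_issuer=http://127.0.0.1:8080 oauth_client_id=test-client'
```

The plain-HTTP issuer URL here is only accepted because the debug mode is enabled; a production client must use HTTPS.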
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10525,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using <productname>curl</productname> inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of <productname>curl</productname> that are built to support thread-safe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
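A sketch of such cooperative locking is below. The handler would be installed with `PQregisterThreadLock(my_thread_lock)` at startup; the commented-out `curl_global_init()` call stands in for the application's own libcurl initialization.

```c
#include <pthread.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * libpq invokes the registered handler with acquire=1 before entering
 * sensitive code and acquire=0 after leaving it.
 */
static void
my_thread_lock(int acquire)
{
    if (acquire)
        pthread_mutex_lock(&init_lock);
    else
        pthread_mutex_unlock(&init_lock);
}

/* The application wraps its own libcurl initialization in the same lock. */
static void
app_init_curl(void)
{
    my_thread_lock(1);
    /* curl_global_init(CURL_GLOBAL_DEFAULT); */
    my_thread_lock(0);
}
```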
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..356f11d3bd8
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,414 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the integration layer between the server
+  and the OAuth provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is crucial for server safety. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    Although different modules may take very different approaches to token
+    validation, implementations generally need to perform three separate
+    actions:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not offer introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then loudly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+      <para>
+       Even if authorization fails, a module may choose to continue to pull
+       authentication information from the token for use in auditing and
+       debugging.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       A full treatment of testing an OAuth system is well beyond the scope of
+       this documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
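To make the interruptibility guideline concrete, here is a self-contained sketch of the recommended retry pattern. <function>CHECK_FOR_INTERRUPTS()</function> is mocked with a simple counter so the fragment compiles outside the server; a real module uses the server's macro, which may not return at all if the backend is exiting.

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Mocked stand-in for the server's CHECK_FOR_INTERRUPTS() macro. */
static int	interrupt_checks = 0;
#define CHECK_FOR_INTERRUPTS() (interrupt_checks++)

/*
 * Read from a descriptor without becoming uninterruptible: on EINTR or
 * EAGAIN, check for pending interrupts before retrying instead of
 * blocking indefinitely inside the module.
 */
ssize_t
interruptible_read(int fd, void *buf, size_t len)
{
	for (;;)
	{
		ssize_t		n = read(fd, buf, len);

		if (n >= 0)
			return n;			/* success (or EOF) */
		if (errno != EINTR && errno != EAGAIN)
			return -1;			/* real error: report to caller */

		/* In the server, this may exit on authentication timeout. */
		CHECK_FOR_INTERRUPTS();
	}
}
```

The same check belongs inside any other long-running loop, such as one polling a provider's token introspection endpoint.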
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   OAuth validator modules are dynamically loaded from the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   Modules are loaded on demand, when first requested by a login in progress.
+   The normal library search path is used to locate the library. To indicate
+   that the library is an OAuth validator module, and to provide the validator
+   callbacks, the library must define a function named
+   <function>_PG_oauth_validator_module_init</function>. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains a magic
+   number and pointers to the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    uint32        magic;            /* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
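Putting this together, a minimal initializer might look like the sketch below. The declarations above the marker are mocked copies so the fragment stands alone; a real module takes them from the server's headers instead, and the magic value shown here is a placeholder rather than the real <symbol>PG_OAUTH_VALIDATOR_MAGIC</symbol>.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* --- Mocked declarations; a real module includes the server headers. --- */
typedef uint32_t uint32;
#define PG_OAUTH_VALIDATOR_MAGIC 0x28A0C0DE	/* placeholder value */

typedef struct ValidatorModuleState
{
	void	   *private_data;
} ValidatorModuleState;

typedef struct ValidatorModuleResult
{
	bool		authorized;
	char	   *authn_id;
} ValidatorModuleResult;

typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
									 const char *token, const char *role,
									 ValidatorModuleResult *result);

typedef struct OAuthValidatorCallbacks
{
	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
	ValidatorStartupCB startup_cb;
	ValidatorShutdownCB shutdown_cb;
	ValidatorValidateCB validate_cb;
} OAuthValidatorCallbacks;
/* --- End of mocked declarations. --- */

static bool
my_validate(const ValidatorModuleState *state,
			const char *token, const char *role,
			ValidatorModuleResult *result)
{
	/* A real module verifies the token here; reject everything for now. */
	result->authorized = false;
	result->authn_id = NULL;
	return true;
}

/* Only validate_cb is set; the optional callbacks are left NULL. */
static const OAuthValidatorCallbacks validator_callbacks = {
	.magic = PG_OAUTH_VALIDATOR_MAGIC,
	.validate_cb = my_validate,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	/* A static const variable gives the returned pointer server lifetime. */
	return &validator_callbacks;
}
```

Compiled into a shared library and listed in <varname>oauth_validator_libraries</varname>, this would load but refuse every connection, which is a reasonable starting point for development.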
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    has state, it can use <structfield>state->private_data</structfield> to
+    store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+                                     const char *token, const char *role,
+                                     ValidatorModuleResult *result);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    <productname>PostgreSQL</productname> has ensured that the token is
+    syntactically well-formed, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    set output parameters in the <literal>result</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>result->authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>result->authn_id</structfield>
+    field.  Alternatively, <structfield>result->authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    A validator may return <literal>false</literal> to signal an internal error,
+    in which case any result parameters are ignored and the connection fails.
+    Otherwise the validator should return <literal>true</literal> to indicate
+    that it has processed the token and made an authorization decision.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>result->authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on,
+    <productname>PostgreSQL</productname> will not perform any checks on the value of
+    <structfield>result->authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any allocated state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The steps below illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its built-in flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
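For illustration, a client initial response carrying a token can be assembled as in this sketch (the token value is a placeholder, and the helper name is ours, not libpq's):

```c
#include <stdio.h>
#include <string.h>

/*
 * Assemble an OAUTHBEARER client initial response: the GS2 header "n,,"
 * (no channel binding, no authzid), then key=value pairs delimited by
 * 0x01 "kvsep" bytes and terminated by a double kvsep.  The server only
 * examines the "auth" key, which carries the HTTP Authorization header
 * value.
 */
int
build_initial_response(char *buf, size_t len, const char *token)
{
	/* Adjacent literals keep the \x01 escape from swallowing the 'a'. */
	return snprintf(buf, len, "n,,\x01" "auth=Bearer %s\x01\x01", token);
}
```

A discovery connection uses the same shape with an empty <literal>auth</literal> value; the client's final message in a failed exchange is just a single kvsep byte.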
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an error
+      <literal>status</literal> alongside a well-known URI and scopes that the
+      client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing the empty set (a single
+      <literal>0x01</literal> byte) to finish its half of the discovery
+      exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, the server sends
+      an AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 7dd7110318d..574f992ed49 100644
--- a/meson.build
+++ b/meson.build
@@ -855,6 +855,101 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports thread-safe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is thread-safe')
+      elif r.returncode() == 1
+        message('curl_global_init is not thread-safe')
+      else
+        message('curl_global_init failed; assuming not thread-safe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3045,6 +3140,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3721,6 +3820,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..27f7af7be00
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,894 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://datatracker.ietf.org/doc/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(void *arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character.
+ * Otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message. (In
+	 * practice such configurations are rejected during HBA parsing.)
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure the token is well-formed before validating it. */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here indicates a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = palloc0(sizeof(ValidatorModuleResult));
+	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
+										 port->user_name, ret))
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation-specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
+
+	/*
+	 * The presence, and validity, of libname has already been established by
+	 * check_oauth_validator so we don't need to perform more than Assert
+	 * level checking here.
+	 */
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/*
+	 * Check the magic number, to protect against break-glass scenarios where
+	 * the ABI must change within a major version. load_external_function()
+	 * already checks for compatibility across major versions.
+	 */
+	if (ValidatorCallbacks->magic != PG_OAUTH_VALIDATOR_MAGIC)
+		ereport(ERROR,
+				errmsg("%s module \"%s\": magic number mismatch",
+					   "OAuth validator", libname),
+				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
+						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
+
+	/*
+	 * Make sure all required callbacks are present in the ValidatorCallbacks
+	 * structure. Right now only the validation callback is required.
+	 */
+	if (ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must provide a %s callback",
+					   "OAuth validator", libname, "validate_cb"));
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	validator_module_state->sversion = PG_VERSION_NUM;
+
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	/* Shut down the library before cleaning up its state. */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked during memory context reset.
+ */
+static void
+shutdown_validator_library(void *arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	const char *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (list_length(elemlist) == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
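
For context, a hypothetical end-to-end configuration using the new method might look like this (the validator library name, issuer, and scope are illustrative):

```
# postgresql.conf
oauth_validator_libraries = 'my_oauth_validator'   # must name a trusted module

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth issuer="https://issuer.example.com" scope="openid"
```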
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum("delegate_ident_mapping=true");
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index cce73314609..515091a3844 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4861,6 +4862,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index d472987ed46..ccefd214143 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..5fb559d84b2
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..2f01b669633
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,101 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	/* Holds the server's PG_VERSION_NUM. Reserved for future extensibility. */
+	int			sversion;
+
+	/*
+	 * Private data pointer for use by a validator module. This can be used to
+	 * store state for the module that will be passed to each of its
+	 * callbacks.
+	 */
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	/*
+	 * Should be set to true if the token carries sufficient permissions for
+	 * the bearer to connect.
+	 */
+	bool		authorized;
+
+	/*
+	 * If the token authenticates the user, this should be set to a palloc'd
+	 * string containing the SYSTEM_USER to use for HBA mapping. Consider
+	 * setting this even if result->authorized is false so that DBAs may use
+	 * the logs to match end users to token failures.
+	 *
+	 * This is required if the module is not configured for ident mapping
+	 * delegation. See the validator module documentation for details.
+	 */
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+/*
+ * Validator module callbacks
+ *
+ * These callback functions should be defined by validator modules and returned
+ * via _PG_oauth_validator_module_init().  ValidatorValidateCB is the only
+ * required callback. For more information about the purpose of each callback,
+ * refer to the OAuth validator modules documentation.
+ */
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+									 const char *token, const char *role,
+									 ValidatorModuleResult *result);
+
+/*
+ * Identifies the compiled ABI version of the validator module. Since the server
+ * already enforces the PG_MODULE_MAGIC number for modules across major
+ * versions, this is reserved for emergency use within a stable release line.
+ * May it never need to change.
+ */
+#define PG_OAUTH_VALIDATOR_MAGIC 0x20250207
+
+typedef struct OAuthValidatorCallbacks
+{
+	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+/*
+ * Type of the shared library symbol _PG_oauth_validator_module_init which is
+ * required for all validator modules.  This function will be invoked during
+ * module loading.
+ */
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..db6454090d2 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be thread-safe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..a80e2047bb7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2883 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * It's generally prudent to set a maximum response size to buffer in memory,
+ * but it's less clear what size to choose. The biggest of our expected
+ * responses is the server metadata JSON, which will only continue to grow in
+ * size; the number of IANA-registered parameters in that document is up to 78
+ * as of February 2025.
+ *
+ * Even if every single parameter were to take up 2k on average (a previously
+ * common limit on the size of a URL), 256k gives us 128 parameter values before
+ * we give up. (That's almost certainly complete overkill in practice; 2-4k
+ * appears to be common among popular providers at the moment.)
+ */
+#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+	int			timerfd;		/* descriptor for signaling async timeouts */
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (libcurl: curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * In general, none of the error cases below should ever happen if we have
+	 * no bugs above. But if we do hit them, surfacing those errors somehow
+	 * might be the only way to have a chance to debug them.
+	 *
+	 * TODO: At some point it'd be nice to have a standard way to warn about
+	 * teardown failures. Appending to the connection's error message only
+	 * helps if the bug caused a connection failure; otherwise it'll be
+	 * buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 */
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field '%s' before field '%s' was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field '%s' still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
+		 */
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field '%s'",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field '%s' would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length-limited comparison rather than comparing
+	 * the whole string, since media type parameters may follow.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
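The acceptance rule above — a case-insensitive media type prefix, followed by either end-of-string or a ';' (optionally preceded by spaces/htabs) introducing parameters — can be distilled into a standalone predicate. This hypothetical `content_type_matches()` mirrors the logic without the libcurl and actx plumbing:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>			/* strncasecmp */

static bool
content_type_matches(const char *content_type, const char *type)
{
	size_t		type_len = strlen(type);

	/* The expected type must match as a case-insensitive prefix. */
	if (strncasecmp(content_type, type, type_len) != 0)
		return false;

	/* An exact match is done. */
	if (content_type[type_len] == '\0')
		return true;

	/* Otherwise only optional whitespace and then ';' may follow. */
	for (size_t i = type_len; content_type[i]; ++i)
	{
		switch (content_type[i])
		{
			case ';':
				return true;	/* start of media type parameters */

			case ' ':
			case '\t':
				break;			/* HTTP optional whitespace */

			default:
				return false;
		}
	}

	return false;				/* trailing whitespace, no parameters */
}
```

Note that a longer type sharing the prefix (e.g. "application/jsonx") is rejected by the post-prefix check, not by the prefix comparison itself.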
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
+ */
+static double
+parse_json_number(const char *s)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(s, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(false);
+		return 0;
+	}
+
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+	else if (parsed >= INT_MAX)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round down and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = floor(parsed);
+
+	if (parsed >= INT_MAX)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * There is no evidence of any service provider spelling
+		 * verification_uri_complete with "url" instead, so we support only
+		 * the "uri" spelling.
+		 */
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
+		 *
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		/*- translator: the term "kqueue" (kernel queue) should not be translated */
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+
+	return 0;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+
+	return 0;
+#endif
+
+	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
+	return -1;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "failed to set timerfd to %ld: %m", timeout);
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	/* Enable/disable the timer itself. */
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
+		   0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "failed to set kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "failed to get timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "failed to check kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	initPQExpBuffer(&buf);
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
+	 */
+	for (size_t i = 0; i < size; i++)
+	{
+		char		c = data[i];
+
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "[libcurl] %s ", prefix);
+			printed_prefix = true;
+		}
+
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's not
+			 * helpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", c);
+
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
+	}
+
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
+	 *
+	 * TODO: Perhaps there's a clever way to warn the user about synchronous
+	 * DNS at runtime too? It's not immediately clear how to do that in a
+	 * helpful way: for many standard single-threaded use cases, the user
+	 * might not care at all, so spraying warnings to stderr would probably do
+	 * more harm than good.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving error information from libcurl. It
+		 * only takes effect once CURLOPT_VERBOSE has been set, so keep the
+		 * ordering of these two calls.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of the data is
+ * defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* In case we receive data over the threshold, abort the transfer */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * Signal an error in order to abort the transfer in case we ran out of
+	 * memory in accepting the data.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	/* The first parameter to curl_easy_escape is deprecated by Curl */
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
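For reference outside libpq, the scheme guard above is just a case-insensitive
prefix comparison. A standalone sketch using POSIX strncasecmp (the uses_https
helper name is hypothetical, not part of the patch):

```c
#include <string.h>
#include <strings.h>			/* POSIX strncasecmp */

/* Return 1 if the URL begins with "https://", ignoring case. */
static int
uses_https(const char *url)
{
	static const char scheme[] = "https://";

	return strncasecmp(url, scheme, strlen(scheme)) == 0;
}
```

pg_strncasecmp in the patch plays the same role, using ASCII-only case folding
so the result doesn't depend on locale.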
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
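urlencode() above is a libpq-internal helper; purely for illustration, a
minimal percent-encoder that leaves the RFC 3986 unreserved set alone behaves
the same way for the 7-bit ASCII client credentials discussed in the comment
(a sketch under that assumption; the percent_encode name is hypothetical):

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Percent-encode everything outside [A-Za-z0-9-._~]. Caller frees. */
static char *
percent_encode(const char *in)
{
	/* Worst case: every input byte expands to "%XX". */
	char	   *out = malloc(strlen(in) * 3 + 1);
	char	   *p = out;

	if (!out)
		return NULL;

	for (; *in; in++)
	{
		unsigned char c = (unsigned char) *in;

		if (isalnum(c) || strchr("-._~", c))
			*p++ = c;
		else
			p += sprintf(p, "%%%02X", c);	/* e.g. ':' -> "%3A" */
	}
	*p = '\0';
	return out;
}
```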
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		/* Copy the token error into the context error buffer */
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are references online to implementations that use 403 for error
+	 * returns, which would violate the specification. For now we stick to
+	 * the specification, but we might have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
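The slow_down handling above amounts to additive backoff with an overflow
guard. As an isolated sketch (the bump_interval helper name is hypothetical;
this version checks before adding, which stays clear of signed-overflow
undefined behavior):

```c
#include <limits.h>
#include <stdbool.h>

/*
 * Add RFC 8628's mandatory five seconds to the polling interval.
 * Returns false, leaving the interval unchanged, if the addition
 * would overflow.
 */
static bool
bump_interval(int *interval)
{
	if (*interval > INT_MAX - 5)
		return false;			/* would overflow */
	*interval += 5;
	return true;
}
```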
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if the hook declines
+ * to handle the prompt.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * thread-safe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
+								"\tCurl initialization was reported thread-safe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+		actx->timerfd = -1;
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				if (!timer_expired(actx))
+				{
+					conn->altsock = actx->timerfd;
+					return PGRES_POLLING_READING;
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer.
+				 */
+				conn->altsock = actx->timerfd;
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage,
+						  " (libcurl: %s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..24448c3e209
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1153 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
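Concretely, for a token "abcd" the function above emits the RFC 7628 message
n,,^Aauth=Bearer abcd^A^A (where ^A is the 0x01 key-value separator). A
self-contained sketch of the same construction, without the PQExpBuffer
machinery (the initial_response helper name is hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define KVSEP "\x01"

/* Build "n,," KVSEP "auth=Bearer <token>" KVSEP KVSEP. Caller frees. */
static char *
initial_response(const char *token)
{
	const char *fmt = "n,," KVSEP "auth=Bearer %s" KVSEP KVSEP;
	size_t		len = strlen(fmt) + strlen(token) + 1;
	char	   *buf = malloc(len);

	if (buf)
		snprintf(buf, len, fmt, token);
	return buf;
}
```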
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
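These defines name the fields of the RFC 7628, Sec. 3.2.2 error result. A
representative status document from the server might look like this (all
values illustrative):

```json
{
  "status": "invalid_token",
  "scope": "openid email",
  "openid-configuration":
    "https://issuer.example.org/.well-known/openid-configuration"
}
```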
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	/* Only top-level keys are considered. */
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert and don't continue any further for production builds.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
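For orientation, the derivation below runs in reverse of discovery-URI
construction: under OIDC Discovery 1.0, the well-known URI is formed by
appending the suffix to the issuer identifier. A sketch of that forward
direction (the oidc_discovery_uri helper name is hypothetical; OIDC style
only, no IETF-style path insertion):

```c
#include <stdlib.h>
#include <string.h>

/*
 * OIDC-style: append /.well-known/openid-configuration to the issuer,
 * tolerating a single trailing slash. Caller frees.
 */
static char *
oidc_discovery_uri(const char *issuer)
{
	static const char suffix[] = "/.well-known/openid-configuration";
	size_t		ilen = strlen(issuer);
	char	   *uri;

	/* Drop one trailing slash so we don't emit "//.well-known". */
	if (ilen > 0 && issuer[ilen - 1] == '/')
		ilen--;

	uri = malloc(ilen + sizeof(suffix));	/* sizeof includes the NUL */
	if (uri)
	{
		memcpy(uri, issuer, ilen);
		memcpy(uri + ilen, suffix, sizeof(suffix));
	}
	return uri;
}
```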
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of
+		 * the path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,86 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 89e78b7d114..4e4be3fa511 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index a57077b682e..2b057451473 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..05b9f06ed73
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator magic_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..54eac5b117e
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder require 'oauth' to be present in PG_TEST_EXTRA, since
+HTTP servers listening on localhost over TCP/IP sockets will be started. A
+Python installation is required to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..a4c7a4451d3
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, whose
+ *	  validation callback is guaranteed to always fail
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool fail_token(const ValidatorModuleState *state,
+					   const char *token,
+					   const char *role,
+					   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+fail_token(const ValidatorModuleState *state,
+		   const char *token, const char *role,
+		   ValidatorModuleResult *res)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/magic_validator.c b/src/test/modules/oauth_validator/magic_validator.c
new file mode 100644
index 00000000000..9dc55b602e3
--- /dev/null
+++ b/src/test/modules/oauth_validator/magic_validator.c
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * magic_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, which
+ *	  should fail due to using the wrong PG_OAUTH_VALIDATOR_MAGIC marker
+ *	  and thus declaring the wrong ABI version
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/magic_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	0xdeadbeef,
+
+	.validate_cb = validate_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	elog(FATAL, "magic_validator: this should be unreachable");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..36d1b26369f
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,85 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+magic_validator_sources = files(
+  'magic_validator.c',
+)
+
+if host_system == 'windows'
+  magic_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'magic_validator',
+    '--FILEDESC', 'magic_validator - ABI incompatible OAuth validator module',])
+endif
+
+magic_validator = shared_module('magic_validator',
+  magic_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += magic_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..9f553792c05
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,293 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static bool stress_async = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..6fa59fbeb25
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,594 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($windows_os)
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike(
+	$stderr,
+	qr/connection to database failed/,
+	"stress-async: stderr matches");
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+#
+# Test ABI compatibility magic marker
+#
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'magic_validator'\n");
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=magic_validator      issuer="$issuer"           scope="openid postgres"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"magic_validator is used for test",
+	expected_stderr =>
+	  qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/
+);
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..ab83258d736
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..655b2870b0b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the server implementation was ported from Perl to
+Python.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
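[Not part of the patch: the stdout handshake that run() above relies on — the daemon prints its port, then closes stdout so the parent can simply read to EOF — can be sketched outside the test suite. This illustrative Python sketch uses a stand-in child process instead of oauth_server.py:]

```python
import subprocess
import sys

# Stand-in for the daemon: print a "port", close stdout, then keep running
# briefly. The parent must see EOF as soon as stdout is closed, not at exit.
child = subprocess.Popen(
    [
        sys.executable,
        "-c",
        "import sys, time; print(12345); sys.stdout.close(); time.sleep(0.2)",
    ],
    stdout=subprocess.PIPE,
)

# Slurp to EOF, mirroring the do-block in run(); no byte counting needed.
port = child.stdout.read().decode().strip()
child.wait()
print(port)  # -> 12345
```

[The same pattern appears on the daemon side at the bottom of oauth_server.py, where stdout is closed right after the port is printed.]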
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..4faf3323d38
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
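[Not part of the patch: the check in _check_authn() above follows RFC 6749 Section 2.3.1 — client_id and secret are each form-urlencoded before being joined with ":" and Base64-encoded into the Basic credential. A standalone sketch of building the value the server expects, with made-up example values:]

```python
import base64
from urllib.parse import quote_plus


def expected_basic_creds(client_id: str, secret: str) -> bytes:
    # Per RFC 6749 Sec. 2.3.1, both halves are form-urlencoded first,
    # then joined and Base64-encoded as a whole.
    userpass = f"{quote_plus(client_id)}:{quote_plus(secret)}"
    return base64.b64encode(userpass.encode())


# A client_id containing a space and a secret containing '&' both need
# the form-urlencoding step to survive the round trip.
creds = expected_basic_creds("my client", "s&cret")
print(creds.decode())
```

[Decoding `creds` yields `my+client:s%26cret`, which is what _check_authn() reconstructs before comparing against the Authorization header.]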
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
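[Not part of the patch: as the comment in do_POST() notes, tests smuggle their parameters to the mock server inside the client_id field as Base64-encoded JSON, which the /param/ issuer then decodes into self._test_params. A minimal sketch of that round trip, using parameter names seen in the tests above:]

```python
import base64
import json

# Parameters like those the Perl tests fold into the connection string.
params = {"stage": "token", "retries": 1, "interval": 2}

# Client side: encode the JSON into an opaque client_id value.
client_id = base64.b64encode(json.dumps(params).encode()).decode()

# Server side (cf. do_POST): recover the test parameters from client_id.
decoded = json.loads(base64.b64decode(client_id))
print(decoded["stage"])  # -> token
```

[This keeps the mock server stateless with respect to test configuration: each request carries its own behavior knobs, so concurrent tests with different client_ids don't interfere.]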
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..b2e5d182e1b
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,143 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/*
+	 * Make sure the server is correctly setting sversion. (Real modules
+	 * should not do this; it would defeat upgrade compatibility.)
+	 */
+	if (state->sversion != PG_VERSION_NUM)
+		elog(ERROR, "oauth_validator: sversion set to %d", state->sversion);
+
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return true;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise the stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bce4214503d..48f8184b061 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1952,6 +1959,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3092,6 +3100,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3489,6 +3499,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

#209Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#208)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Feb 17, 2025 at 4:03 AM Daniel Gustafsson <daniel@yesql.se> wrote:

The attached v51 squashes your commits together, discarding the changes
discussed here, and takes a stab at a commit message for these as this is
getting very close to be able to go in. There are no additional changes.

Awesome, thank you! It's been a little bit since I've re-run my
fuzzers, and a new Valgrind run would be a good idea, so I will just
keep throwing tests at it and review the new commit messages while
they run.

--Jacob

#210Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#209)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Feb 17, 2025 at 10:15 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

It's been a little bit since I've re-run my
fuzzers, and a new Valgrind run would be a good idea, so I will just
keep throwing tests at it

Fuzzers are happy so far.

Valgrind did find something! A mistake I made during parameter
discovery: setup_oauth_parameters() ensures that conn->oauth_issuer_id
is always set using the "issuer" connection option, but during the
second connection, I reassigned the pointer for it (and
conn->oauth_discovery_uri) and leaked the previous allocations.

v52-0002 fixes that. I've taken the opportunity to document that those
two parameters are designed to be unchangeable for the connection once
they've been assigned.

--

Reviews for the commit message:

postgres cannot ship with one built-in.

s/postgres/Postgres/. Maybe a softening to "does not" ship with one?

Each pg_hba entry can
specify one, or more, validators or be left blank for the validator
installed as default.

Each pg_hba entry can specify only one of the DBA-blessed validators, not more.

This adds a requirement on libucurl

s/libucurl/libcurl/

And as discussed offlist, we should note that the builtin device flow
is not currently supported on Windows.

Thanks!
--Jacob

Attachments:

v52-0001-Add-support-for-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 24fa9c37b6c32b40981d5703b03e46b71d7f9300 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Mon, 17 Feb 2025 12:57:34 +0100
Subject: [PATCH v52 1/3] Add support for OAUTHBEARER SASL mechanism

This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device
Authorization Grants, RFC 8628.  In order to use this there is a
new pg_hba auth method called oauth.  When speaking to a OAuth-
enabled server, it looks a bit like this:

  $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
  Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

Device authorization is currently the only supported flow so the
OAuth issuer must support that in order for users to authenticate.
Third-party clients may however extend this and provide their own
flows.

In order for validation to happen server side a new framework for
plugging in OAuth validation modules is added.  As validation is
implementation specific, with no default specified in the standard,
postgres cannot ship with one built-in.  Each pg_hba entry can
specify one, or more, validators or be left blank for the validator
installed as default.

This adds a requirement on libucurl for the client side support,
which is optional to build, but the server side has no additional
build requirements.  In order to run the tests, Python is required
as this adds a https server written in Python.  Tests are gated
behind PG_TEST_EXTRA as they open ports.

This patch has been a multi-year project with many contributors
involved on review and in-depth discussion: Michael Paquier,
Heikki Linnakangas, Zhihong Yu, Mahendrakar s, Andrey Chudnovsky
and Stephen Frost to name a few.  While Jacob Champion is the main
author there have been some levels of hacking by others. Daniel
Gustafsson contributed the validation module and various bits and
pieces; Thomas Munro wrote the client side support for kqueue.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   65 +
 configure                                     |  332 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  445 +++
 doc/src/sgml/oauth-validators.sgml            |  414 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |  100 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  894 +++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |  101 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2883 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1153 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   45 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   85 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   47 +
 .../modules/oauth_validator/magic_validator.c |   48 +
 src/test/modules/oauth_validator/meson.build  |   85 +
 .../oauth_validator/oauth_hook_client.c       |  293 ++
 .../modules/oauth_validator/t/001_server.pl   |  594 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  143 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   20 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 60 files changed, 9265 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/magic_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fffa438cec1..2f5f5ef21a8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -329,6 +329,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -422,8 +423,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386 \
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -799,8 +802,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..061b13376ac 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,68 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is thread-safe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb4367..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,176 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..832b616a7bb 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    to enable third-party applications to obtain limited access to a protected
+    resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system hosting the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth authorization servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it is in wide
+        colloquial use. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it is the responsibility of the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, expressed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of OAuth 2.0 access token in which the token is an opaque
+    string.  The format of the access token is implementation specific and is
+    chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
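+       <para>
+        For example, with a hypothetical issuer identifier of
+        <literal>https://issuer.example.com</literal> (an illustrative value),
+        the client will construct the discovery URL as:
+<programlisting>
+https://issuer.example.com/.well-known/openid-configuration
+</programlisting>
+       </para>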
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or formatting are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between identities issued by the OAuth provider and
+        database user names.  See <xref linkend="auth-username-maps"/> for
+        details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
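+   <para>
+    As an example, the following hypothetical <filename>pg_hba.conf</filename>
+    entry (the issuer, scope, and validator values are purely illustrative)
+    requires OAuth authentication for all TCP/IP connections:
+<programlisting>
+# TYPE  DATABASE  USER  ADDRESS  METHOD
+host    all       all   all      oauth issuer="https://issuer.example.com" scope="openid" validator="my_validator"
+</programlisting>
+   </para>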
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 336630ce417..c4dfa8ba039 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
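+       <para>
+        For example, in <filename>postgresql.conf</filename> (the library name
+        here is purely illustrative):
+<programlisting>
+oauth_validator_libraries = 'my_validator'
+</programlisting>
+       </para>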
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..3c95c15a1e4 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         Libcurl version 7.61.0 or later is required for this feature.
+         Building with this will check for the required header files
+         and libraries to make sure that your <productname>curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        Libcurl version 7.61.0 or later is required for this feature.
+        Building with this will check for the required header files
+        and libraries to make sure that your <productname>curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index c49e975b082..ddb3596df83 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,107 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of
+        <ulink url="https://mailarchive.ietf.org/arch/msg/oauth/JIVxFBGsJBVtm7ljwJhPUm3Fr-w/">
+        "mix-up attacks"</ulink> on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (In this case, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10130,329 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   libpq implements support for the OAuth 2.0 Device Authorization client flow,
+   documented in
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
+   which it will attempt to use by default if the server
+   <link linkend="auth-oauth">requests a bearer token</link> during
+   authentication. This flow can be utilized even if the system running the
+   client application does not have a usable web browser, for example when
+   running a client via <application>SSH</application>. Client applications may implement their own flows
+   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+  </para>
+  <para>
+   The builtin flow will, by default, print a URL to visit and a user code to
+   enter there:
+<programlisting>
+$ psql 'dbname=postgres oauth_issuer=https://example.com oauth_client_id=...'
+Visit https://example.com/device and enter the code: ABCD-EFGH
+</programlisting>
+   (This prompt may be
+   <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
+   The user will then log into their OAuth provider, which will ask whether
+   to allow libpq and the server to perform actions on their behalf. It is always
+   a good idea to carefully review the URL and permissions displayed, to ensure
+   they match expectations, before continuing. Permissions should not be given
+   to untrusted third parties.
+  </para>
+  <para>
+   For an OAuth client flow to be usable, the connection string must at minimum
+   contain <xref linkend="libpq-connect-oauth-issuer"/> and
+   <xref linkend="libpq-connect-oauth-client-id"/>. (These settings are
+   determined by your organization's OAuth provider.) The builtin flow
+   additionally requires the OAuth authorization server to publish a device
+   authorization endpoint.
+  </para>
+
+  <note>
+   <para>
+    The builtin Device Authorization flow is not currently supported on Windows.
+    Custom client flows may still be implemented.
+   </para>
+  </note>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when an action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
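+   <para>
+    For example, a hook which handles only device prompts and delegates all
+    other requests might be sketched as follows (the function name
+    <function>my_prompt_implementation</function> is illustrative, standing in
+    for application-specific prompt code):
+<programlisting>
+static PQauthDataHook_type prev_hook;
+
+static int
+my_auth_data_hook(PGauthData type, PGconn *conn, void *data)
+{
+    if (type == PQAUTHDATA_PROMPT_OAUTH_DEVICE)
+        return my_prompt_implementation(conn, data);
+
+    /* Not ours to handle; delegate to the previous hook in the chain. */
+    return prev_hook(type, conn, data);
+}
+
+    /* During application startup: */
+    prev_hook = PQgetAuthDataHook();
+    PQsetAuthDataHook(my_auth_data_hook);
+</programlisting>
+   </para>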
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the code will be
+         manually confirmed by the provider, and the URL lets users continue
+         even if they can't use the non-textual method. For more information,
+         see section 3.3.1 in
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">RFC 8628</ulink>.
+        </para>
+       </listitem>
+      </varlistentry>
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*altsock</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A <quote>dangerous debugging mode</quote> may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       prints HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+   <warning>
+    <para>
+     Do not share the output of the OAuth flow traffic with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10525,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using <productname>Curl</productname> inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of <productname>Curl</productname> that are built to support thread-safe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..356f11d3bd8
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,414 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the integration layer between the server
+  and the OAuth provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is crucial for server safety. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    Although different modules may take very different approaches to token
+    validation, implementations generally need to perform three separate
+    actions:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should explicitly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+      <para>
+       Even if authorization fails, a module may choose to continue to pull
+       authentication information from the token for use in auditing and
+       debugging.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing required for an OAuth system is well beyond the
+       scope of this documentation, but at minimum, negative testing should be
+       considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For
+       instance, is it an email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <literal>delegate_ident_mapping=1</literal> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   OAuth validator modules are dynamically loaded from the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   Modules are loaded on demand, when first requested by a connection
+   attempting OAuth authentication. The normal library search path is used to
+   locate the library. To provide the validator callbacks, and to indicate
+   that the library is an OAuth validator module, it must define a function
+   named <function>_PG_oauth_validator_module_init</function>. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains a magic
+   number and pointers to the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    uint32        magic;            /* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    needs to keep state, it can use
+    <structfield>state->private_data</structfield> to store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+                                     const char *token, const char *role,
+                                     ValidatorModuleResult *result);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    <productname>PostgreSQL</productname> has ensured that the token is
+    syntactically well-formed, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    set output parameters in the <replaceable>result</replaceable> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will proceed only if the module sets
+    <structfield>result->authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>result->authn_id</structfield>
+    field.  Alternatively, <structfield>result->authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    A validator may return <literal>false</literal> to signal an internal error,
+    in which case any result parameters are ignored and the connection fails.
+    Otherwise the validator should return <literal>true</literal> to indicate
+    that it has processed the token and made an authorization decision.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>result->authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on,
+    <productname>PostgreSQL</productname> will not perform any checks on the value of
+    <structfield>result->authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any allocated state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The steps below illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: a first <quote>discovery</quote>
+    connection to obtain OAuth metadata from the server, and a second
+    connection to send the token after the client has obtained it. (libpq does
+    not currently implement a caching method as part of its built-in flow, so
+    it uses the two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an error
+      <literal>status</literal> alongside a well-known URI and scopes that the
+      client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing no key/value pairs
+      (a single <literal>0x01</literal> byte) to finish its half of the
+      discovery exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, it sends an
+      AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
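The documented exchange can be illustrated outside the patch. The following Python sketch (purely illustrative; the function name is hypothetical) builds the OAUTHBEARER client initial response per the ABNF in RFC 7628, Sec. 3.1, covering both the empty-token discovery form and the final form that carries a bearer token:

```python
KVSEP = "\x01"  # key/value separator byte from RFC 7628, Sec. 3.1

def client_initial_response(token=None):
    """Build an OAUTHBEARER client initial response.

    client-resp = gs2-header kvsep *kvpair kvsep

    The gs2-header is "n,," (no channel binding, no authzid). With no
    token, the auth value is left empty, which the server treats as a
    discovery probe and answers with its error JSON before failing the
    exchange.
    """
    auth_value = "Bearer " + token if token else ""
    return "n,," + KVSEP + "auth=" + auth_value + KVSEP + KVSEP
```

The discovery form is what the client sends on its first connection; after obtaining a token via an OAuth flow, it reconnects and sends the same message with the auth value filled in.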
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 7dd7110318d..574f992ed49 100644
--- a/meson.build
+++ b/meson.build
@@ -855,6 +855,101 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports thread-safe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is thread-safe')
+      elif r.returncode() == 1
+        message('curl_global_init is not thread-safe')
+      else
+        message('curl_global_init failed; assuming not thread-safe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3045,6 +3140,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3721,6 +3820,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..27f7af7be00
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,894 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://datatracker.ietf.org/doc/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(void *arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
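For readers following along, the same kvpair scan can be sketched in a few lines of Python (illustrative only, not part of the patch): an empty kvpair ends the list, unrecognized keys such as host and port are ignored per Sec. 3.1, and duplicate auth values are rejected.

```python
KVSEP = "\x01"

def parse_kvpairs_for_auth(msg):
    """Return the "auth" value from the *kvpair portion of a client-resp.

    `msg` starts at the first kvpair (the gs2-header and its kvsep are
    assumed to have been validated already). Raises ValueError on
    malformed input, mirroring the backend's protocol-violation errors.
    """
    auth = None
    pos = 0
    while pos < len(msg):
        end = msg.find(KVSEP, pos)
        if end == -1:
            raise ValueError("unterminated key/value pair")
        if end == pos:
            return auth          # empty kvpair: end-of-list marker
        key, sep, value = msg[pos:end].partition("=")
        if not sep:
            raise ValueError("key without a value")
        if key == "auth":
            if auth is not None:
                raise ValueError("multiple auth values")
            auth = value
        pos = end + 1            # unknown keys are ignored per Sec. 3.1
    raise ValueError("missing final terminator")
```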
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message. (In
+	 * practice such configurations are rejected during HBA parsing.)
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
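As a rough illustration of the payload this produces, here is a Python sketch (function name hypothetical) of the same Sec. 3.2.2 JSON, including the default .well-known fallback applied above:

```python
import json

def oauthbearer_error_response(issuer, scope):
    """Build the OAUTHBEARER failure JSON from RFC 7628, Sec. 3.2.2.

    Mirrors the server-side logic: if the configured issuer does not
    already point at a .well-known path, append the default OpenID
    discovery document suffix.
    """
    config = issuer
    if "/.well-known/" not in issuer:
        config += "/.well-known/openid-configuration"
    return json.dumps({
        "status": "invalid_token",
        "openid-configuration": config,
        "scope": scope,
    })
```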
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
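The b64token grammar above can be captured in a single regular expression; the following Python sketch is illustrative only and not part of the patch, but it accepts and rejects the same strings as the checks above (case-insensitive scheme, one or more spaces, trailing '=' padding only at the end):

```python
import re

# credentials = "Bearer" 1*SP b64token                  (RFC 6750, Sec. 2.1)
# b64token    = 1*( ALPHA / DIGIT / "-" / "." / "_" / "~" / "+" / "/" ) *"="
# The scheme is matched case-insensitively, per RFC 9110 Sec. 11.
BEARER = re.compile(r"(?i:bearer) +([A-Za-z0-9._~+/-]+=*)")

def extract_bearer_token(header):
    """Return the token from an auth value, or None if it is malformed."""
    m = BEARER.fullmatch(header)
    return m.group(1) if m else None
```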
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a well-formed token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always be
+	 * the case, and an error here is indicative of a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = palloc0(sizeof(ValidatorModuleResult));
+	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
+										 port->user_name, ret))
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Clear and free the validation result from the validator module once
+	 * we're done with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
+
+	/*
+	 * The presence, and validity, of libname has already been established by
+	 * check_oauth_validator so we don't need to perform more than Assert
+	 * level checking here.
+	 */
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/*
+	 * Check the magic number, to protect against break-glass scenarios where
+	 * the ABI must change within a major version. load_external_function()
+	 * already checks for compatibility across major versions.
+	 */
+	if (ValidatorCallbacks->magic != PG_OAUTH_VALIDATOR_MAGIC)
+		ereport(ERROR,
+				errmsg("%s module \"%s\": magic number mismatch",
+					   "OAuth validator", libname),
+				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
+						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
+
+	/*
+	 * Make sure all required callbacks are present in the ValidatorCallbacks
+	 * structure. Right now only the validation callback is required.
+	 */
+	if (ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must provide a %s callback",
+					   "OAuth validator", libname, "validate_cb"));
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	validator_module_state->sversion = PG_VERSION_NUM;
+
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	/* Shut down the library before cleaning up its state. */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked during memory context reset.
+ */
+static void
+shutdown_validator_library(void *arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	const char *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
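For reference, the configuration surface that check_oauth_validator() enforces looks roughly like the following (illustrative values only; the library name, issuer URL, and scope are assumptions, not part of the patch):

```
# postgresql.conf: trusted validator modules, comma-separated.
# With exactly one entry, HBA lines may omit validator= and it is
# selected implicitly.
oauth_validator_libraries = 'my_validator'

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth issuer="https://oauth.example.org" scope="openid" validator="my_validator"
```

Listing more than one library in oauth_validator_libraries makes validator= mandatory on each oauth HBA line, and a validator name not present in the list is rejected.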
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index cce73314609..515091a3844 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4861,6 +4862,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index d472987ed46..ccefd214143 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..5fb559d84b2
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..2f01b669633
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,101 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	/* Holds the server's PG_VERSION_NUM. Reserved for future extensibility. */
+	int			sversion;
+
+	/*
+	 * Private data pointer for use by a validator module. This can be used to
+	 * store state for the module that will be passed to each of its
+	 * callbacks.
+	 */
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	/*
+	 * Should be set to true if the token carries sufficient permissions for
+	 * the bearer to connect.
+	 */
+	bool		authorized;
+
+	/*
+	 * If the token authenticates the user, this should be set to a palloc'd
+	 * string containing the SYSTEM_USER to use for HBA mapping. Consider
+	 * setting this even if result->authorized is false so that DBAs may use
+	 * the logs to match end users to token failures.
+	 *
+	 * This is required if the module is not configured for ident mapping
+	 * delegation. See the validator module documentation for details.
+	 */
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+/*
+ * Validator module callbacks
+ *
+ * These callback functions should be defined by validator modules and returned
+ * via _PG_oauth_validator_module_init().  ValidatorValidateCB is the only
+ * required callback. For more information about the purpose of each callback,
+ * refer to the OAuth validator modules documentation.
+ */
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+									 const char *token, const char *role,
+									 ValidatorModuleResult *result);
+
+/*
+ * Identifies the compiled ABI version of the validator module. Since the server
+ * already enforces the PG_MODULE_MAGIC number for modules across major
+ * versions, this is reserved for emergency use within a stable release line.
+ * May it never need to change.
+ */
+#define PG_OAUTH_VALIDATOR_MAGIC 0x20250207
+
+typedef struct OAuthValidatorCallbacks
+{
+	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+/*
+ * Type of the shared library symbol _PG_oauth_validator_module_init which is
+ * required for all validator modules.  This function will be invoked during
+ * module loading.
+ */
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..db6454090d2 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be thread-safe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..a80e2047bb7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2883 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * It's generally prudent to set a maximum response size to buffer in memory,
+ * but it's less clear what size to choose. The biggest of our expected
+ * responses is the server metadata JSON, which will only continue to grow in
+ * size; the number of IANA-registered parameters in that document is up to 78
+ * as of February 2025.
+ *
+ * Even if every single parameter were to take up 2k on average (a previously
+ * common limit on the size of a URL), 256k gives us 128 parameter values before
+ * we give up. (That's almost certainly complete overkill in practice; 2-4k
+ * appears to be common among popular providers at the moment.)
+ */
+#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+	int			timerfd;		/* descriptor for signaling async timeouts */
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (libcurl: curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * In general, none of the error cases below should ever happen if we have
+	 * no bugs above. But if we do hit them, surfacing those errors somehow
+	 * might be the only way to have a chance to debug them.
+	 *
+	 * TODO: At some point it'd be nice to have a standard way to warn about
+	 * teardown failures. Appending to the connection's error message only
+	 * helps if the bug caused a connection failure; otherwise it'll be
+	 * buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 */
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field '%s' before field '%s' was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field '%s' still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
+		 */
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field '%s'",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field '%s' would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * Perform a length-limited comparison of the prefix only; the expected
+	 * type may be followed by media type parameters.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
+ */
+static double
+parse_json_number(const char *s)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(s, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(false);
+		return 0;
+	}
+
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+	else if (parsed >= INT_MAX)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round down and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = floor(parsed);
+
+	if (parsed >= INT_MAX)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * There is no evidence of any service provider spelling
+		 * verification_uri_complete with "url" instead, so we support only
+		 * the "uri" spelling.
+		 */
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
+		 *
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		/*- translator: the term "kqueue" (kernel queue) should not be translated */
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+
+	return 0;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+
+	return 0;
+#endif
+
+	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
+	return -1;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	/* Enable/disable the timer itself. */
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
+		   0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "getting timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "checking kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
+
+	/* Prefixes are modeled off of the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	initPQExpBuffer(&buf);
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
+	 */
+	for (size_t i = 0; i < size; i++)
+	{
+		unsigned char c = data[i];
+
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "[libcurl] %s ", prefix);
+			printed_prefix = true;
+		}
+
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's not
+			 * helpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", c);
+
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
+	}
+
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
+	 *
+	 * TODO: Perhaps there's a clever way to warn the user about synchronous
+	 * DNS at runtime too? It's not immediately clear how to do that in a
+	 * helpful way: for many standard single-threaded use cases, the user
+	 * might not care at all, so spraying warnings to stderr would probably do
+	 * more harm than good.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl.
+		 * The callback only takes effect when CURLOPT_VERBOSE is set, so
+		 * keep these two options in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each data chunk
+ * is defined by CURL_MAX_WRITE_SIZE, which defaults to 16kB (and can only be
+ * changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* If the response would exceed the threshold, abort the transfer. */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * If we ran out of memory while accepting the data, signal an error to
+	 * abort the transfer.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so the caller can
+	 * tell immediately whether any more waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	/* The first parameter to curl_easy_escape is deprecated by Curl */
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider supports a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides a device authorization endpoint, and both the token and device
+ * authorization endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		/* Copy the token error into the context error buffer */
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue an Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.3
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the
+ * token or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild returning 403 instead,
+	 * which would violate the specification. For now we stick to the
+	 * specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * error from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either
+ * via the PQauthDataHook, or (if the hook does not handle the prompt) with a
+ * message on standard error.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * thread-safe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
+								"\tCurl initialization was reported thread-safe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+		actx->timerfd = -1;
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				if (!timer_expired(actx))
+				{
+					conn->altsock = actx->timerfd;
+					return PGRES_POLLING_READING;
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer.
+				 */
+				conn->altsock = actx->timerfd;
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage,
+						  " (libcurl: %s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forgets a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..24448c3e209
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1153 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	/* Only top-level keys are considered. */
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert and don't continue any further for production builds.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/* We support both well-known suffixes defined by RFC 8414. */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NULL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock to signal and returning the correct PGRES_POLLING_*
+ * statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up for a series of
+ * asynchronous callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#if USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier and discovery URI, if possible, using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, Sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..32598721686
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,45 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 2024, PostgreSQL Global Development Group
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,86 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 89e78b7d114..4e4be3fa511 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index a57077b682e..2b057451473 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..05b9f06ed73
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator magic_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..54eac5b117e
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests are run end-to-end to test both simultaneously. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder require 'oauth' to be present in PG_TEST_EXTRA, since
+they start HTTPS servers that listen on localhost TCP/IP sockets. A Python
+installation is required to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..a4c7a4451d3
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, whose
+ *	  validation callback is guaranteed to fail
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool fail_token(const ValidatorModuleState *state,
+					   const char *token,
+					   const char *role,
+					   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+fail_token(const ValidatorModuleState *state,
+		   const char *token, const char *role,
+		   ValidatorModuleResult *res)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/magic_validator.c b/src/test/modules/oauth_validator/magic_validator.c
new file mode 100644
index 00000000000..9dc55b602e3
--- /dev/null
+++ b/src/test/modules/oauth_validator/magic_validator.c
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * magic_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, which
+ *	  should be rejected because it uses the wrong PG_OAUTH_VALIDATOR_MAGIC
+ *	  marker and thus reports the wrong ABI version
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/magic_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	0xdeadbeef,
+
+	.validate_cb = validate_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	elog(FATAL, "magic_validator: this should be unreachable");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..36d1b26369f
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,85 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+magic_validator_sources = files(
+  'magic_validator.c',
+)
+
+if host_system == 'windows'
+  magic_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'magic_validator',
+    '--FILEDESC', 'magic_validator - ABI incompatible OAuth validator module',])
+endif
+
+magic_validator = shared_module('magic_validator',
+  magic_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += magic_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..9f553792c05
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,293 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static bool stress_async = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..6fa59fbeb25
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,594 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($windows_os)
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
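The same base64-of-JSON packing that `connstr()` performs can be reproduced outside Perl, which is handy when poking at the mock server by hand. A small Python sketch (the function name is illustrative, not part of the patch):

```python
import base64
import json


def encode_client_id(**params):
    """Pack mock-server instructions into an oauth_client_id,
    mirroring the Perl connstr() helper: JSON-encode the params,
    then base64 them with no line breaks."""
    payload = json.dumps(params)
    return base64.b64encode(payload.encode()).decode()


# The mock server recovers the instructions by reversing the steps:
encoded = encode_client_id(stage="token", retries=1)
decoded = json.loads(base64.b64decode(encoded))
```

Because the instructions travel inside `oauth_client_id`, no extra side channel between the test and the HTTP daemon is needed.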
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test behavior of the oauth_client_secret.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike(
+	$stderr,
+	qr/connection to database failed/,
+	"stress-async: stderr matches");
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+#
+# Test ABI compatibility magic marker
+#
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'magic_validator'\n");
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=magic_validator      issuer="$issuer"           scope="openid postgres"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"magic_validator is used for test",
+	expected_stderr =>
+	  qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/
+);
+$node->stop;
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..ab83258d736
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my $log_start;
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..655b2870b0b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..4faf3323d38
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
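
The interval/authorization_pending bookkeeping that token() enforces above has a client-side counterpart. The real client lives in libpq's C code; purely as an illustration, a polling loop that would satisfy the mock server's checks (RFC 8628 also defines a slow_down error, which the mock doesn't exercise and is omitted here) might look like:

```python
import time

def poll_for_token(request_token, interval, max_attempts=10):
    """
    request_token() stands in for a POST to the /token endpoint and must
    return the decoded JSON response as a dict.
    """
    last_try = None
    for _ in range(max_attempts):
        # Wait at least `interval` seconds between polls; token() asserts
        # that clients respect this.
        if last_try is not None:
            remaining = interval - (time.monotonic() - last_try)
            if remaining > 0:
                time.sleep(remaining)
        last_try = time.monotonic()

        resp = request_token()
        if "access_token" in resp:
            return resp["access_token"]
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(resp.get("error", "unknown error"))

    raise TimeoutError("giving up on the token endpoint")
```

With the "retries" test parameter set, the mock returns authorization_pending exactly that many times before handing over the token, so a loop like this converges.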
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..b2e5d182e1b
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,143 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/*
+	 * Make sure the server is correctly setting sversion. (Real modules
+	 * should not do this; it would defeat upgrade compatibility.)
+	 */
+	if (state->sversion != PG_VERSION_NUM)
+		elog(ERROR, "oauth_validator: sversion set to %d", state->sversion);
+
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return true;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..ab7d7452ede 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,20 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like($stderr, $params{expected_stderr}, "$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index bce4214503d..48f8184b061 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1832,6 +1836,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1839,7 +1844,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1952,6 +1959,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3092,6 +3100,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3489,6 +3499,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.34.1
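
As an aside on the mock above: the Basic credentials that _check_authn() verifies are built the RFC 6749 §2.3.1 way, with client id and secret form-urlencoded before being joined and base64'd. A hypothetical client-side helper (illustrative only; the real encoding happens inside libpq's Curl flow):

```python
import base64
import urllib.parse

def basic_auth_header(client_id: str, secret: str) -> str:
    # RFC 6749 §2.3.1: form-urlencode the client id and secret before
    # using them as the userid/password of HTTP Basic auth. This matches
    # what the mock server's _check_authn() reconstructs and compares.
    creds = (urllib.parse.quote_plus(client_id) + ":" +
             urllib.parse.quote_plus(secret))
    return "Basic " + base64.b64encode(creds.encode("utf-8")).decode("ascii")
```

The extra urlencoding step is easy to forget, and is exactly the kind of mismatch the expected_secret test parameter is there to catch.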

v52-0002-fixup-Add-support-for-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From fd7224fa4617f548320c4f923ba2d6d6e81c9243 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 17 Feb 2025 14:42:42 -0800
Subject: [PATCH v52 2/3] fixup! Add support for OAUTHBEARER SASL mechanism

---
 src/interfaces/libpq/fe-auth-oauth.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 24448c3e209..fb1e9a1a8aa 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -815,13 +815,23 @@ fail:
 }
 
 /*
- * Fill in our issuer identifier and discovery URI, if possible, using the
+ * Fill in our issuer identifier (and discovery URI, if possible) using the
  * connection parameters. If conn->oauth_discovery_uri can't be populated in
  * this function, it will be requested from the server.
  */
 static bool
 setup_oauth_parameters(PGconn *conn)
 {
+	/*
+	 * This is the only function that sets conn->oauth_issuer_id. If a
+	 * previous connection attempt has already computed it, don't overwrite it
+	 * or the discovery URI. (There's no reason for them to change once
+	 * they're set, and handle_oauth_sasl_error() will fail the connection if
+	 * the server attempts to switch them on us later.)
+	 */
+	if (conn->oauth_issuer_id)
+		return true;
+
 	/*---
 	 * To talk to a server, we require the user to provide issuer and client
 	 * identifiers.
-- 
2.34.1
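
The invariant this fixup enforces (oauth_issuer_id is computed exactly once, a repeat connection attempt neither overwrites nor leaks it, and a server that later tries to advertise a different issuer fails the connection) can be sketched in miniature. The names below are illustrative stand-ins for the PGconn fields and libpq functions, not the real C code:

```python
class Conn:
    # Minimal stand-in for the relevant PGconn fields.
    def __init__(self, issuer_param):
        self.issuer_param = issuer_param
        self.oauth_issuer_id = None
        self.oauth_discovery_uri = None

def setup_oauth_parameters(conn):
    # The only place that sets oauth_issuer_id; a second connection
    # attempt returns early, leaving both fields (and their original
    # allocations) alone.
    if conn.oauth_issuer_id is not None:
        return
    conn.oauth_issuer_id = conn.issuer_param
    conn.oauth_discovery_uri = (conn.issuer_param
                                + "/.well-known/openid-configuration")

def check_advertised_issuer(conn, advertised_issuer):
    # In the spirit of handle_oauth_sasl_error(): refuse to let the
    # server switch issuers after the fact.
    if advertised_issuer != conn.oauth_issuer_id:
        raise RuntimeError("server attempted to switch issuers")
```

In C the early return is what prevents the Valgrind-reported leak: without it, the second attempt reassigned both pointers and orphaned the first allocations.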

v52-0003-cirrus-Temporarily-fix-libcurl-link-error.patch (application/octet-stream)
From 937f565848a955985382a3f2bb20641a135f5b79 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Mon, 17 Feb 2025 12:57:38 +0100
Subject: [PATCH v52 3/3] cirrus: Temporarily fix libcurl link error

On FreeBSD the ftp/curl port appears to be missing a minimum
version dependency on libssh2, so the following starts showing
up after upgrading to curl 8.11.1_1:

  libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

Until the FreeBSD CI images are upgraded to version 14, work
around the issue.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+kZAka0sdxCOBxsQc2ozEZGZKHWU_9nrPXg3sG1NJ-zJw@mail.gmail.com
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2f5f5ef21a8..91b51142d2e 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.34.1

#211Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#210)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 18 Feb 2025, at 00:51, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Feb 17, 2025 at 10:15 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

It's been a little bit since I've re-run my
fuzzers, and a new Valgrind run would be a good idea, so I will just
keep throwing tests at it

Fuzzers are happy so far.

Valgrind did find something! A mistake I made during parameter
discovery: setup_oauth_parameters() ensures that conn->oauth_issuer_id
is always set using the "issuer" connection option, but during the
second connection, I reassigned the pointer for it (and
conn->oauth_discovery_uri) and leaked the previous allocations.

Nice.

Reviews for the commit message:

All proposed changes applied.

The attached rebased version has your 0002 fix as well as some minor tweaks,
like a few small whitespace changes from a pgperltidy run and a fix for a
copyright date which still said 2024.

Unless something shows up I plan to commit this sometime tomorrow to allow it
ample time in the tree before the freeze.

--
Daniel Gustafsson

Attachments:

v53-0002-cirrus-Temporarily-fix-libcurl-link-error.patch (application/octet-stream)
From f5575816eb2a62d2465615b4d58b8eea0d22c330 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 19 Feb 2025 15:09:02 +0100
Subject: [PATCH v53 2/2] cirrus: Temporarily fix libcurl link error

On FreeBSD the ftp/curl port appears to be missing a minimum
version dependency on libssh2, so the following starts showing
up after upgrading to curl 8.11.1_1:

  libcurl.so.4: Undefined symbol "libssh2_session_callback_set2"

Until the FreeBSD CI images are upgraded to version 14, work
around the issue.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+kZAka0sdxCOBxsQc2ozEZGZKHWU_9nrPXg3sG1NJ-zJw@mail.gmail.com
---
 .cirrus.tasks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 2f5f5ef21a8..91b51142d2e 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -168,6 +168,7 @@ task:
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
     pkg install -y curl
+    pkg upgrade -y libssh2 # XXX shouldn't be necessary. revisit w/ FreeBSD 14
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
-- 
2.39.3 (Apple Git-146)

v53-0001-Add-support-for-OAUTHBEARER-SASL-mechanism.patch (application/octet-stream)
From 86f5b77604a991299a9ddd966fd714c86eaa2e6a Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <daniel@yesql.se>
Date: Wed, 19 Feb 2025 15:08:59 +0100
Subject: [PATCH v53 1/2] Add support for OAUTHBEARER SASL mechanism

This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device
Authorization Grants, RFC 8628.  In order to use this there is a
new pg_hba auth method called oauth.  When speaking to an OAuth-
enabled server, it looks a bit like this:

  $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
  Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

Device authorization is currently the only supported flow so the
OAuth issuer must support that in order for users to authenticate.
Third-party clients may however extend this and provide their own
flows.  The built-in device authorization flow is currently not
supported on Windows.

In order for validation to happen server-side, a new framework for
plugging in OAuth validation modules is added.  As validation is
implementation specific, with no default specified in the standard,
PostgreSQL does not ship with one built-in.  Each pg_hba entry can
specify a specific validator or be left blank for the validator
installed as default.

This adds a requirement on libcurl for the client side support,
which is optional to build, but the server side has no additional
build requirements.  In order to run the tests, Python is required
as this adds an https server written in Python.  Tests are gated
behind PG_TEST_EXTRA as they open ports.

This patch has been a multi-year project with many contributors
involved with reviews and in-depth discussions:  Michael Paquier,
Heikki Linnakangas, Zhihong Yu, Mahendrakar Srinivasarao, Andrey
Chudnovsky and Stephen Frost to name a few.  While Jacob Champion
is the main author there have been some levels of hacking by others.
Daniel Gustafsson contributed the validation module and various bits
and pieces; Thomas Munro wrote the client side support for kqueue.

Author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Antonin Houska <ah@cybertec.at>
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://postgr.es/m/d1b467a78e0e36ed85a09adf979d04cf124a9d4b.camel@vmware.com
---
 .cirrus.tasks.yml                             |   15 +-
 config/programs.m4                            |   65 +
 configure                                     |  332 ++
 configure.ac                                  |   41 +
 doc/src/sgml/client-auth.sgml                 |  252 ++
 doc/src/sgml/config.sgml                      |   26 +
 doc/src/sgml/filelist.sgml                    |    1 +
 doc/src/sgml/installation.sgml                |   27 +
 doc/src/sgml/libpq.sgml                       |  445 +++
 doc/src/sgml/oauth-validators.sgml            |  414 +++
 doc/src/sgml/postgres.sgml                    |    1 +
 doc/src/sgml/protocol.sgml                    |  133 +-
 doc/src/sgml/regress.sgml                     |   10 +
 meson.build                                   |  100 +
 meson_options.txt                             |    3 +
 src/Makefile.global.in                        |    1 +
 src/backend/libpq/Makefile                    |    1 +
 src/backend/libpq/auth-oauth.c                |  894 +++++
 src/backend/libpq/auth.c                      |   10 +-
 src/backend/libpq/hba.c                       |   64 +-
 src/backend/libpq/meson.build                 |    1 +
 src/backend/libpq/pg_hba.conf.sample          |    4 +-
 src/backend/utils/adt/hbafuncs.c              |   19 +
 src/backend/utils/misc/guc_tables.c           |   12 +
 src/backend/utils/misc/postgresql.conf.sample |    3 +
 src/include/common/oauth-common.h             |   19 +
 src/include/libpq/auth.h                      |    1 +
 src/include/libpq/hba.h                       |    7 +-
 src/include/libpq/oauth.h                     |  101 +
 src/include/pg_config.h.in                    |    9 +
 src/interfaces/libpq/Makefile                 |   11 +-
 src/interfaces/libpq/exports.txt              |    3 +
 src/interfaces/libpq/fe-auth-oauth-curl.c     | 2883 +++++++++++++++++
 src/interfaces/libpq/fe-auth-oauth.c          | 1163 +++++++
 src/interfaces/libpq/fe-auth-oauth.h          |   46 +
 src/interfaces/libpq/fe-auth.c                |   36 +-
 src/interfaces/libpq/fe-auth.h                |    3 +
 src/interfaces/libpq/fe-connect.c             |   48 +-
 src/interfaces/libpq/libpq-fe.h               |   85 +
 src/interfaces/libpq/libpq-int.h              |   13 +-
 src/interfaces/libpq/meson.build              |    5 +
 src/makefiles/meson.build                     |    1 +
 src/test/authentication/t/001_password.pl     |    8 +-
 src/test/modules/Makefile                     |    1 +
 src/test/modules/meson.build                  |    1 +
 src/test/modules/oauth_validator/.gitignore   |    4 +
 src/test/modules/oauth_validator/Makefile     |   40 +
 src/test/modules/oauth_validator/README       |   13 +
 .../modules/oauth_validator/fail_validator.c  |   47 +
 .../modules/oauth_validator/magic_validator.c |   48 +
 src/test/modules/oauth_validator/meson.build  |   85 +
 .../oauth_validator/oauth_hook_client.c       |  293 ++
 .../modules/oauth_validator/t/001_server.pl   |  594 ++++
 .../modules/oauth_validator/t/002_client.pl   |  154 +
 .../modules/oauth_validator/t/OAuth/Server.pm |  140 +
 .../modules/oauth_validator/t/oauth_server.py |  391 +++
 src/test/modules/oauth_validator/validator.c  |  143 +
 src/test/perl/PostgreSQL/Test/Cluster.pm      |   22 +-
 src/tools/pgindent/pgindent                   |   14 +
 src/tools/pgindent/typedefs.list              |   11 +
 60 files changed, 9278 insertions(+), 39 deletions(-)
 create mode 100644 doc/src/sgml/oauth-validators.sgml
 create mode 100644 src/backend/libpq/auth-oauth.c
 create mode 100644 src/include/common/oauth-common.h
 create mode 100644 src/include/libpq/oauth.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-curl.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth.h
 create mode 100644 src/test/modules/oauth_validator/.gitignore
 create mode 100644 src/test/modules/oauth_validator/Makefile
 create mode 100644 src/test/modules/oauth_validator/README
 create mode 100644 src/test/modules/oauth_validator/fail_validator.c
 create mode 100644 src/test/modules/oauth_validator/magic_validator.c
 create mode 100644 src/test/modules/oauth_validator/meson.build
 create mode 100644 src/test/modules/oauth_validator/oauth_hook_client.c
 create mode 100644 src/test/modules/oauth_validator/t/001_server.pl
 create mode 100644 src/test/modules/oauth_validator/t/002_client.pl
 create mode 100644 src/test/modules/oauth_validator/t/OAuth/Server.pm
 create mode 100755 src/test/modules/oauth_validator/t/oauth_server.py
 create mode 100644 src/test/modules/oauth_validator/validator.c

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index fffa438cec1..2f5f5ef21a8 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -23,7 +23,7 @@ env:
   MTEST_ARGS: --print-errorlogs --no-rebuild -C build
   PGCTLTIMEOUT: 120 # avoids spurious failures during parallel tests
   TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
-  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance
+  PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
 
 
 # What files to preserve in case tests fail
@@ -167,7 +167,7 @@ task:
     chown root:postgres /tmp/cores
     sysctl kern.corefile='/tmp/cores/%N.%P.core'
   setup_additional_packages_script: |
-    #pkg install -y ...
+    pkg install -y curl
 
   # NB: Intentionally build without -Dllvm. The freebsd image size is already
   # large enough to make VM startup slow, and even without llvm freebsd
@@ -329,6 +329,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
+  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
@@ -422,8 +423,10 @@ task:
     EOF
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install \
+      libcurl4-openssl-dev \
+      libcurl4-openssl-dev:i386
 
   matrix:
     - name: Linux - Debian Bookworm - Autoconf
@@ -799,8 +802,8 @@ task:
     folder: $CCACHE_DIR
 
   setup_additional_packages_script: |
-    #apt-get update
-    #DEBIAN_FRONTEND=noninteractive apt-get -y install ...
+    apt-get update
+    DEBIAN_FRONTEND=noninteractive apt-get -y install libcurl4-openssl-dev
 
   ###
   # Test that code can be built with gcc/clang without warnings
diff --git a/config/programs.m4 b/config/programs.m4
index 7b55c2664a6..061b13376ac 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -274,3 +274,68 @@ AC_DEFUN([PGAC_CHECK_STRIP],
   AC_SUBST(STRIP_STATIC_LIB)
   AC_SUBST(STRIP_SHARED_LIB)
 ])# PGAC_CHECK_STRIP
+
+
+
+# PGAC_CHECK_LIBCURL
+# ------------------
+# Check for required libraries and headers, and test to see whether the current
+# installation of libcurl is thread-safe.
+
+AC_DEFUN([PGAC_CHECK_LIBCURL],
+[
+  AC_CHECK_HEADER(curl/curl.h, [],
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+  AC_CHECK_LIB(curl, curl_multi_init, [],
+			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+])],
+  [pgac_cv__libcurl_threadsafe_init=yes],
+  [pgac_cv__libcurl_threadsafe_init=no],
+  [pgac_cv__libcurl_threadsafe_init=unknown])])
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+    AC_DEFINE(HAVE_THREADSAFE_CURL_GLOBAL_INIT, 1,
+              [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
+  [AC_RUN_IFELSE([AC_LANG_PROGRAM([
+#include <curl/curl.h>
+],[
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+])],
+  [pgac_cv__libcurl_async_dns=yes],
+  [pgac_cv__libcurl_async_dns=no],
+  [pgac_cv__libcurl_async_dns=unknown])])
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    AC_MSG_WARN([
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.])
+  fi
+])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0ffcaeb4367..93fddd69981 100755
--- a/configure
+++ b/configure
@@ -708,6 +708,9 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LIBS
+LIBCURL_CFLAGS
+with_libcurl
 with_uuid
 with_readline
 with_systemd
@@ -864,6 +867,7 @@ with_readline
 with_libedit_preferred
 with_uuid
 with_ossp_uuid
+with_libcurl
 with_libxml
 with_libxslt
 with_system_tzdata
@@ -894,6 +898,8 @@ PKG_CONFIG_PATH
 PKG_CONFIG_LIBDIR
 ICU_CFLAGS
 ICU_LIBS
+LIBCURL_CFLAGS
+LIBCURL_LIBS
 XML2_CONFIG
 XML2_CFLAGS
 XML2_LIBS
@@ -1574,6 +1580,7 @@ Optional Packages:
                           prefer BSD Libedit over GNU Readline
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
+  --with-libcurl          build with libcurl support
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -1607,6 +1614,10 @@ Some influential environment variables:
               path overriding pkg-config's built-in search path
   ICU_CFLAGS  C compiler flags for ICU, overriding pkg-config
   ICU_LIBS    linker flags for ICU, overriding pkg-config
+  LIBCURL_CFLAGS
+              C compiler flags for LIBCURL, overriding pkg-config
+  LIBCURL_LIBS
+              linker flags for LIBCURL, overriding pkg-config
   XML2_CONFIG path to xml2-config utility
   XML2_CFLAGS C compiler flags for XML2, overriding pkg-config
   XML2_LIBS   linker flags for XML2, overriding pkg-config
@@ -8762,6 +8773,157 @@ fi
 
 
 
+#
+# libcurl
+#
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
+$as_echo_n "checking whether to build with libcurl support... " >&6; }
+
+
+
+# Check whether --with-libcurl was given.
+if test "${with_libcurl+set}" = set; then :
+  withval=$with_libcurl;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-libcurl option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_libcurl=no
+
+fi
+
+
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
+$as_echo "$with_libcurl" >&6; }
+
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+
+pkg_failed=no
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for libcurl >= 7.61.0" >&5
+$as_echo_n "checking for libcurl >= 7.61.0... " >&6; }
+
+if test -n "$LIBCURL_CFLAGS"; then
+    pkg_cv_LIBCURL_CFLAGS="$LIBCURL_CFLAGS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_CFLAGS=`$PKG_CONFIG --cflags "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+if test -n "$LIBCURL_LIBS"; then
+    pkg_cv_LIBCURL_LIBS="$LIBCURL_LIBS"
+ elif test -n "$PKG_CONFIG"; then
+    if test -n "$PKG_CONFIG" && \
+    { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"libcurl >= 7.61.0\""; } >&5
+  ($PKG_CONFIG --exists --print-errors "libcurl >= 7.61.0") 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; then
+  pkg_cv_LIBCURL_LIBS=`$PKG_CONFIG --libs "libcurl >= 7.61.0" 2>/dev/null`
+		      test "x$?" != "x0" && pkg_failed=yes
+else
+  pkg_failed=yes
+fi
+ else
+    pkg_failed=untried
+fi
+
+
+
+if test $pkg_failed = yes; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+
+if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
+        _pkg_short_errors_supported=yes
+else
+        _pkg_short_errors_supported=no
+fi
+        if test $_pkg_short_errors_supported = yes; then
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        else
+	        LIBCURL_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "libcurl >= 7.61.0" 2>&1`
+        fi
+	# Put the nasty error message in config.log where it belongs
+	echo "$LIBCURL_PKG_ERRORS" >&5
+
+	as_fn_error $? "Package requirements (libcurl >= 7.61.0) were not met:
+
+$LIBCURL_PKG_ERRORS
+
+Consider adjusting the PKG_CONFIG_PATH environment variable if you
+installed software in a non-standard prefix.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details." "$LINENO" 5
+elif test $pkg_failed = untried; then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+	{ { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5
+$as_echo "$as_me: error: in \`$ac_pwd':" >&2;}
+as_fn_error $? "The pkg-config script could not be found or is too old.  Make sure it
+is in your PATH or set the PKG_CONFIG environment variable to the full
+path to pkg-config.
+
+Alternatively, you may set the environment variables LIBCURL_CFLAGS
+and LIBCURL_LIBS to avoid the need to call pkg-config.
+See the pkg-config man page for more details.
+
+To get pkg-config, see <http://pkg-config.freedesktop.org/>.
+See \`config.log' for more details" "$LINENO" 5; }
+else
+	LIBCURL_CFLAGS=$pkg_cv_LIBCURL_CFLAGS
+	LIBCURL_LIBS=$pkg_cv_LIBCURL_LIBS
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+
+fi
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
+$as_echo "$as_me: WARNING: *** OAuth support tests require --with-python to run" >&2;}
+  fi
+fi
+
+
 #
 # XML
 #
@@ -12216,6 +12378,176 @@ fi
 
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+
+  ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
+if test "x$ac_cv_header_curl_curl_h" = xyes; then :
+
+else
+  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+fi
+
+
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
+$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
+if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  ac_check_lib_save_LIBS=$LIBS
+LIBS="-lcurl  $LIBS"
+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+/* Override any GCC internal prototype to avoid an error.
+   Use char because int might match the return type of a GCC
+   builtin and then its argument prototype would still apply.  */
+#ifdef __cplusplus
+extern "C"
+#endif
+char curl_multi_init ();
+int
+main ()
+{
+return curl_multi_init ();
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_link "$LINENO"; then :
+  ac_cv_lib_curl_curl_multi_init=yes
+else
+  ac_cv_lib_curl_curl_multi_init=no
+fi
+rm -f core conftest.err conftest.$ac_objext \
+    conftest$ac_exeext conftest.$ac_ext
+LIBS=$ac_check_lib_save_LIBS
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
+$as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
+if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_LIBCURL 1
+_ACEOF
+
+  LIBS="-lcurl $LIBS"
+
+else
+  as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
+fi
+
+
+  # Check to see whether the current platform supports threadsafe Curl
+  # initialization.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
+$as_echo_n "checking for curl_global_init thread safety... " >&6; }
+if ${pgac_cv__libcurl_threadsafe_init+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_threadsafe_init=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+#ifdef CURL_VERSION_THREADSAFE
+    if (info->features & CURL_VERSION_THREADSAFE)
+        return 0;
+#endif
+
+    return 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_threadsafe_init=yes
+else
+  pgac_cv__libcurl_threadsafe_init=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_threadsafe_init" >&5
+$as_echo "$pgac_cv__libcurl_threadsafe_init" >&6; }
+  if test x"$pgac_cv__libcurl_threadsafe_init" = xyes ; then
+
+$as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
+
+  fi
+
+  # Warn if a thread-friendly DNS resolver isn't built.
+  { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
+$as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
+if ${pgac_cv__libcurl_async_dns+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test "$cross_compiling" = yes; then :
+  pgac_cv__libcurl_async_dns=unknown
+else
+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h.  */
+
+#include <curl/curl.h>
+
+int
+main ()
+{
+
+    curl_version_info_data *info;
+
+    if (curl_global_init(CURL_GLOBAL_ALL))
+        return -1;
+
+    info = curl_version_info(CURLVERSION_NOW);
+    return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+
+  ;
+  return 0;
+}
+_ACEOF
+if ac_fn_c_try_run "$LINENO"; then :
+  pgac_cv__libcurl_async_dns=yes
+else
+  pgac_cv__libcurl_async_dns=no
+fi
+rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \
+  conftest.$ac_objext conftest.beam conftest.$ac_ext
+fi
+
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
+$as_echo "$pgac_cv__libcurl_async_dns" >&6; }
+  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
+    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&5
+$as_echo "$as_me: WARNING:
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs." >&2;}
+  fi
+
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing gss_store_cred_into" >&5
diff --git a/configure.ac b/configure.ac
index f56681e0d91..b6d02f5ecc7 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1007,6 +1007,40 @@ fi
 AC_SUBST(with_uuid)
 
 
+#
+# libcurl
+#
+AC_MSG_CHECKING([whether to build with libcurl support])
+PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
+              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
+AC_MSG_RESULT([$with_libcurl])
+AC_SUBST(with_libcurl)
+
+if test "$with_libcurl" = yes ; then
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
+
+  # We only care about -I, -D, and -L switches;
+  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  for pgac_option in $LIBCURL_CFLAGS; do
+    case $pgac_option in
+      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+    esac
+  done
+  for pgac_option in $LIBCURL_LIBS; do
+    case $pgac_option in
+      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+    esac
+  done
+
+  # OAuth requires python for testing
+  if test "$with_python" != yes; then
+    AC_MSG_WARN([*** OAuth support tests require --with-python to run])
+  fi
+fi
+
+
 #
 # XML
 #
@@ -1294,6 +1328,13 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
+# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+# dependency on that platform?
+if test "$with_libcurl" = yes ; then
+  PGAC_CHECK_LIBCURL
+fi
+
 if test "$with_gssapi" = yes ; then
   if test "$PORTNAME" != "win32"; then
     AC_SEARCH_LIBS(gss_store_cred_into, [gssapi_krb5 gss 'gssapi -lkrb5 -lcrypto'], [],
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index 782b49c85ac..832b616a7bb 100644
--- a/doc/src/sgml/client-auth.sgml
+++ b/doc/src/sgml/client-auth.sgml
@@ -656,6 +656,16 @@ include_dir         <replaceable>directory</replaceable>
          </para>
         </listitem>
        </varlistentry>
+
+       <varlistentry>
+        <term><literal>oauth</literal></term>
+        <listitem>
+         <para>
+          Authorize and optionally authenticate using a third-party OAuth 2.0
+          identity provider. See <xref linkend="auth-oauth"/> for details.
+         </para>
+        </listitem>
+       </varlistentry>
       </variablelist>
 
       </para>
@@ -1143,6 +1153,12 @@ omicron         bryanh                  guest1
       only on OpenBSD).
      </para>
     </listitem>
+    <listitem>
+     <para>
+      <link linkend="auth-oauth">OAuth authorization/authentication</link>,
+      which relies on an external OAuth 2.0 identity provider.
+     </para>
+    </listitem>
    </itemizedlist>
   </para>
 
@@ -2329,6 +2345,242 @@ host ... radius radiusservers="server1,server2" radiussecrets="""secret one"",""
    </note>
   </sect1>
 
+  <sect1 id="auth-oauth">
+   <title>OAuth Authorization/Authentication</title>
+
+   <indexterm zone="auth-oauth">
+    <primary>OAuth Authorization/Authentication</primary>
+   </indexterm>
+
+   <para>
+    OAuth 2.0 is an industry-standard framework, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6749">RFC 6749</ulink>,
+    which enables third-party applications to obtain limited access to a
+    protected resource.
+
+    OAuth client support must be enabled when <productname>PostgreSQL</productname>
+    is built; see <xref linkend="installation"/> for more information.
+   </para>
+
+   <para>
+    This documentation uses the following terminology when discussing the OAuth
+    ecosystem:
+
+    <variablelist>
+
+     <varlistentry>
+      <term>Resource Owner (or End User)</term>
+      <listitem>
+       <para>
+        The user or system who owns protected resources and can grant access to
+        them. This documentation also uses the term <emphasis>end user</emphasis>
+        when the resource owner is a person. When you use
+        <application>psql</application> to connect to the database using OAuth,
+        you are the resource owner/end user.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Client</term>
+      <listitem>
+       <para>
+        The system which accesses the protected resources using access
+        tokens. Applications using libpq, such as <application>psql</application>,
+        are the OAuth clients when connecting to a
+        <productname>PostgreSQL</productname> cluster.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Resource Server</term>
+      <listitem>
+       <para>
+        The system hosting the protected resources which are
+        accessed by the client. The <productname>PostgreSQL</productname>
+        cluster being connected to is the resource server.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Provider</term>
+      <listitem>
+       <para>
+        The organization, product vendor, or other entity which develops and/or
+        administers the OAuth authorization servers and clients for a given application.
+        Different providers typically choose different implementation details
+        for their OAuth systems; a client of one provider is not generally
+        guaranteed to have access to the servers of another.
+       </para>
+       <para>
+        This use of the term "provider" is not standard, but it is in wide
+        colloquial use. (It should not be confused with OpenID's similar
+        term "Identity Provider". While the implementation of OAuth in
+        <productname>PostgreSQL</productname> is intended to be interoperable
+        and compatible with OpenID Connect/OIDC, it is not itself an OIDC client
+        and does not require its use.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term>Authorization Server</term>
+      <listitem>
+       <para>
+        The system which receives requests from, and issues access tokens to,
+        the client after the authenticated resource owner has given approval.
+        <productname>PostgreSQL</productname> does not provide an authorization
+        server; it is the responsibility of the OAuth provider.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-issuer">Issuer</term>
+      <listitem>
+       <para>
+        An identifier for an authorization server, printed as an
+        <literal>https://</literal> URL, which provides a trusted "namespace"
+        for OAuth clients and applications. The issuer identifier allows a
+        single authorization server to talk to the clients of mutually
+        untrusting entities, as long as they maintain separate issuers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+    </variablelist>
+
+    <note>
+     <para>
+      For small deployments, there may not be a meaningful distinction between
+      the "provider", "authorization server", and "issuer". However, for more
+      complicated setups, there may be a one-to-many (or many-to-many)
+      relationship: a provider may rent out multiple issuer identifiers to
+      separate tenants, then provide multiple authorization servers, possibly
+      with different supported feature sets, to interact with their clients.
+     </para>
+    </note>
+   </para>
+
+   <para>
+    <productname>PostgreSQL</productname> supports bearer tokens, defined in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc6750">RFC 6750</ulink>,
+    which are a type of access token used with OAuth 2.0 where the token is an
+    opaque string.  The format of the access token is implementation specific
+    and is chosen by each authorization server.
+   </para>
+
+   <para>
+    The following configuration options are supported for OAuth:
+    <variablelist>
+     <varlistentry>
+      <term><literal>issuer</literal></term>
+      <listitem>
+       <para>
+        An HTTPS URL which is either the exact
+        <link linkend="auth-oauth-issuer">issuer identifier</link> of the
+        authorization server, as defined by its discovery document, or a
+        well-known URI that points directly to that discovery document. This
+        parameter is required.
+       </para>
+       <para>
+        When an OAuth client connects to the server, a URL for the discovery
+        document will be constructed using the issuer identifier. By default,
+        this URL uses the conventions of OpenID Connect Discovery: the path
+        <literal>/.well-known/openid-configuration</literal> will be appended
+        to the end of the issuer identifier. Alternatively, if the
+        <literal>issuer</literal> contains a <literal>/.well-known/</literal>
+        path segment, that URL will be provided to the client as-is.
+       </para>
+       <warning>
+        <para>
+         The OAuth client in libpq requires the server's issuer setting to
+         exactly match the issuer identifier which is provided in the discovery
+         document, which must in turn match the client's
+         <xref linkend="libpq-connect-oauth-issuer"/> setting. No variations in
+         case or formatting are permitted.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>scope</literal></term>
+      <listitem>
+       <para>
+        A space-separated list of the OAuth scopes needed for the server to
+        both authorize the client and authenticate the user.  Appropriate values
+        are determined by the authorization server and the OAuth validation
+        module used (see <xref linkend="oauth-validators" /> for more
+        information on validators).  This parameter is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>validator</literal></term>
+      <listitem>
+       <para>
+        The library to use for validating bearer tokens. If given, the name must
+        exactly match one of the libraries listed in
+        <xref linkend="guc-oauth-validator-libraries" />.  This parameter is
+        optional unless <literal>oauth_validator_libraries</literal> contains
+        more than one library, in which case it is required.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term><literal>map</literal></term>
+      <listitem>
+       <para>
+        Allows for mapping between OAuth identities and database user
+        names.  See <xref linkend="auth-username-maps"/> for details.  If a
+        map is not specified, the user name associated with the token (as
+        determined by the OAuth validator) must exactly match the role name
+        being requested.  This parameter is optional.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
+      <term id="auth-oauth-delegate-ident-mapping" xreflabel="delegate_ident_mapping">
+       <literal>delegate_ident_mapping</literal>
+      </term>
+      <listitem>
+       <para>
+        An advanced option which is not intended for common use.
+       </para>
+       <para>
+        When set to <literal>1</literal>, standard user mapping with
+        <filename>pg_ident.conf</filename> is skipped, and the OAuth validator
+        takes full responsibility for mapping end user identities to database
+        roles.  If the validator authorizes the token, the server trusts that
+        the user is allowed to connect under the requested role, and the
+        connection is allowed to proceed regardless of the authentication
+        status of the user.
+       </para>
+       <para>
+        This parameter is incompatible with <literal>map</literal>.
+       </para>
+       <warning>
+        <para>
+         <literal>delegate_ident_mapping</literal> provides additional
+         flexibility in the design of the authentication system, but it also
+         requires careful implementation of the OAuth validator, which must
+         determine whether the provided token carries sufficient end-user
+         privileges in addition to the <link linkend="oauth-validators">standard
+         checks</link> required of all validators.  Use with caution.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
+  </sect1>
+
   <sect1 id="client-authentication-problems">
    <title>Authentication Problems</title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 9eedcf6f0f4..007746a4429 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1209,6 +1209,32 @@ include_dir 'conf.d'
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="guc-oauth-validator-libraries" xreflabel="oauth_validator_libraries">
+      <term><varname>oauth_validator_libraries</varname> (<type>string</type>)
+      <indexterm>
+       <primary><varname>oauth_validator_libraries</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        The library/libraries to use for validating OAuth connection tokens. If
+        only one validator library is provided, it will be used by default for
+        any OAuth connections; otherwise, all
+        <link linkend="auth-oauth"><literal>oauth</literal> HBA entries</link>
+        must explicitly set a <literal>validator</literal> chosen from this
+        list. If set to an empty string (the default), OAuth connections will be
+        refused. This parameter can only be set in the
+        <filename>postgresql.conf</filename> file.
+       </para>
+       <para>
+        Validator modules must be implemented/obtained separately;
+        <productname>PostgreSQL</productname> does not ship with any default
+        implementations. For more information on implementing OAuth validators,
+        see <xref linkend="oauth-validators" />.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
      </sect2>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 66e6dccd4c9..25fb99cee69 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -111,6 +111,7 @@
 <!ENTITY generic-wal SYSTEM "generic-wal.sgml">
 <!ENTITY custom-rmgr SYSTEM "custom-rmgr.sgml">
 <!ENTITY backup-manifest SYSTEM "backup-manifest.sgml">
+<!ENTITY oauth-validators SYSTEM "oauth-validators.sgml">
 
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 3f0a7e9c069..3c95c15a1e4 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1143,6 +1143,19 @@ build-postgresql:
        </listitem>
       </varlistentry>
 
+      <varlistentry id="configure-option-with-libcurl">
+       <term><option>--with-libcurl</option></term>
+       <listitem>
+        <para>
+         Build with libcurl support for OAuth 2.0 client flows.
+         Libcurl version 7.61.0 or later is required for this feature.
+         Building with this option will check for the required header files
+         and libraries to make sure that your <productname>Curl</productname>
+         installation is sufficient before proceeding.
+        </para>
+       </listitem>
+      </varlistentry>
+
       <varlistentry id="configure-option-with-libxml">
        <term><option>--with-libxml</option></term>
        <listitem>
@@ -2584,6 +2597,20 @@ ninja install
       </listitem>
      </varlistentry>
 
+     <varlistentry id="configure-with-libcurl-meson">
+      <term><option>-Dlibcurl={ auto | enabled | disabled }</option></term>
+      <listitem>
+       <para>
+        Build with libcurl support for OAuth 2.0 client flows.
+        Libcurl version 7.61.0 or later is required for this feature.
+        Building with this option will check for the required header files
+        and libraries to make sure that your <productname>Curl</productname>
+        installation is sufficient before proceeding. The default for this
+        option is <literal>auto</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
      <varlistentry id="configure-with-libxml-meson">
       <term><option>-Dlibxml={ auto | enabled | disabled }</option></term>
       <listitem>
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index c49e975b082..ddb3596df83 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -1385,6 +1385,15 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
           </listitem>
          </varlistentry>
 
+         <varlistentry>
+          <term><literal>oauth</literal></term>
+          <listitem>
+           <para>
+            The server must request an OAuth bearer token from the client.
+           </para>
+          </listitem>
+         </varlistentry>
+
          <varlistentry>
           <term><literal>none</literal></term>
           <listitem>
@@ -2373,6 +2382,107 @@ postgresql://%2Fvar%2Flib%2Fpostgresql/dbname
        </para>
       </listitem>
      </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-issuer" xreflabel="oauth_issuer">
+      <term><literal>oauth_issuer</literal></term>
+      <listitem>
+       <para>
+        The HTTPS URL of a trusted issuer to contact if the server requests an
+        OAuth token for the connection. This parameter is required for all OAuth
+        connections; it should exactly match the <literal>issuer</literal>
+        setting in <link linkend="auth-oauth">the server's HBA configuration</link>.
+       </para>
+       <para>
+        As part of the standard authentication handshake, <application>libpq</application>
+        will ask the server for a <emphasis>discovery document:</emphasis> a URL
+        providing a set of OAuth configuration parameters. The server must
+        provide a URL that is directly constructed from the components of the
+        <literal>oauth_issuer</literal>, and this value must exactly match the
+        issuer identifier that is declared in the discovery document itself, or
+        the connection will fail. This is required to prevent a class of
+        <ulink url="https://mailarchive.ietf.org/arch/msg/oauth/JIVxFBGsJBVtm7ljwJhPUm3Fr-w/">
+        "mix-up attacks"</ulink> on OAuth clients.
+       </para>
+       <para>
+        You may also explicitly set <literal>oauth_issuer</literal> to the
+        <literal>/.well-known/</literal> URI used for OAuth discovery. In this
+        case, if the server asks for a different URL, the connection will fail,
+        but a <link linkend="libpq-oauth-authdata-hooks">custom OAuth flow</link>
+        may be able to speed up the standard handshake by using previously
+        cached tokens. (If this approach is taken, it is recommended that
+        <xref linkend="libpq-connect-oauth-scope"/> be set as well, since the
+        client will not have a chance to ask the server for a correct scope
+        setting, and the default scopes for a token may not be sufficient to
+        connect.) <application>libpq</application> currently supports the
+        following well-known endpoints:
+        <itemizedlist spacing="compact">
+         <listitem><para><literal>/.well-known/openid-configuration</literal></para></listitem>
+         <listitem><para><literal>/.well-known/oauth-authorization-server</literal></para></listitem>
+        </itemizedlist>
+       </para>
+       <warning>
+        <para>
+         Issuers are highly privileged during the OAuth connection handshake. As
+         a rule of thumb, if you would not trust the operator of a URL to handle
+         access to your servers, or to impersonate you directly, that URL should
+         not be trusted as an <literal>oauth_issuer</literal>.
+        </para>
+       </warning>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-id" xreflabel="oauth_client_id">
+      <term><literal>oauth_client_id</literal></term>
+      <listitem>
+       <para>
+        An OAuth 2.0 client identifier, as issued by the authorization server.
+        If the <productname>PostgreSQL</productname> server
+        <link linkend="auth-oauth">requests an OAuth token</link> for the
+        connection (and if no <link linkend="libpq-oauth-authdata-hooks">custom
+        OAuth hook</link> is installed to provide one), then this parameter must
+        be set; otherwise, the connection will fail.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-client-secret" xreflabel="oauth_client_secret">
+      <term><literal>oauth_client_secret</literal></term>
+      <listitem>
+       <para>
+        The client password, if any, to use when contacting the OAuth
+        authorization server. Whether this parameter is required or not is
+        determined by the OAuth provider; "public" clients generally do not use
+        a secret, whereas "confidential" clients generally do.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-connect-oauth-scope" xreflabel="oauth_scope">
+      <term><literal>oauth_scope</literal></term>
+      <listitem>
+       <para>
+        The scope of the access request sent to the authorization server,
+        specified as a (possibly empty) space-separated list of OAuth scope
+        identifiers. This parameter is optional and intended for advanced usage.
+       </para>
+       <para>
+        Usually the client will obtain appropriate scope settings from the
+        <productname>PostgreSQL</productname> server. If this parameter is used,
+        the server's requested scope list will be ignored. This can prevent a
+        less-trusted server from requesting inappropriate access scopes from the
+        end user. However, if the client's scope setting does not contain the
+        server's required scopes, the server is likely to reject the issued
+        token, and the connection will fail.
+       </para>
+       <para>
+        The meaning of an empty scope list is provider-dependent. An OAuth
+        authorization server may choose to issue a token with "default scope",
+        whatever that happens to be, or it may reject the token request
+        entirely.
+       </para>
+      </listitem>
+     </varlistentry>
+
     </variablelist>
    </para>
   </sect2>
@@ -10020,6 +10130,329 @@ void PQinitSSL(int do_ssl);
 
  </sect1>
 
+ <sect1 id="libpq-oauth">
+  <title>OAuth Support</title>
+
+  <para>
+   <application>libpq</application> implements support for the OAuth 2.0
+   Device Authorization client flow, documented in
+   <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
+   which it will attempt to use by default if the server
+   <link linkend="auth-oauth">requests a bearer token</link> during
+   authentication. This flow can be used even if the system running the
+   client application does not have a usable web browser, for example when
+   running a client over <application>SSH</application>. Client applications
+   may implement their own flows instead; see
+   <xref linkend="libpq-oauth-authdata-hooks"/>.
+  </para>
+  <para>
+   The builtin flow will, by default, print a URL to visit and a user code to
+   enter there:
+<programlisting>
+$ psql 'dbname=postgres oauth_issuer=https://example.com oauth_client_id=...'
+Visit https://example.com/device and enter the code: ABCD-EFGH
+</programlisting>
+   (This prompt may be
+   <link linkend="libpq-oauth-authdata-prompt-oauth-device">customized</link>.)
+   The user will then log into their OAuth provider, which will ask whether
+   to allow libpq and the server to perform actions on their behalf. It is always
+   a good idea to carefully review the URL and permissions displayed, to ensure
+   they match expectations, before continuing. Permissions should not be given
+   to untrusted third parties.
+  </para>
+  <para>
+   For an OAuth client flow to be usable, the connection string must at minimum
+   contain <xref linkend="libpq-connect-oauth-issuer"/> and
+   <xref linkend="libpq-connect-oauth-client-id"/>. (These settings are
+   determined by your organization's OAuth provider.) The builtin flow
+   additionally requires the OAuth authorization server to publish a device
+   authorization endpoint.
+  </para>
+
+  <note>
+   <para>
+    The builtin Device Authorization flow is not currently supported on Windows.
+    Custom client flows may still be implemented.
+   </para>
+  </note>
+
+  <sect2 id="libpq-oauth-authdata-hooks">
+   <title>Authdata Hooks</title>
+
+   <para>
+    The behavior of the OAuth flow may be modified or replaced by a client using
+    the following hook API:
+
+    <variablelist>
+     <varlistentry id="libpq-PQsetAuthDataHook">
+      <term><function>PQsetAuthDataHook</function><indexterm><primary>PQsetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Sets the <symbol>PGauthDataHook</symbol>, overriding
+        <application>libpq</application>'s handling of one or more aspects of
+        its OAuth client flow.
+<synopsis>
+void PQsetAuthDataHook(PQauthDataHook_type hook);
+</synopsis>
+        If <replaceable>hook</replaceable> is <literal>NULL</literal>, the
+        default handler will be reinstalled. Otherwise, the application passes
+        a pointer to a callback function with the signature:
+<programlisting>
+int hook_fn(PGauthData type, PGconn *conn, void *data);
+</programlisting>
+        which <application>libpq</application> will call when an action is
+        required of the application. <replaceable>type</replaceable> describes
+        the request being made, <replaceable>conn</replaceable> is the
+        connection handle being authenticated, and <replaceable>data</replaceable>
+        points to request-specific metadata. The contents of this pointer are
+        determined by <replaceable>type</replaceable>; see
+        <xref linkend="libpq-oauth-authdata-hooks-types"/> for the supported
+        list.
+       </para>
+       <para>
+        Hooks can be chained together to allow cooperative and/or fallback
+        behavior. In general, a hook implementation should examine the incoming
+        <replaceable>type</replaceable> (and, potentially, the request metadata
+        and/or the settings for the particular <replaceable>conn</replaceable>
+        in use) to decide whether or not to handle a specific piece of authdata.
+        If not, it should delegate to the previous hook in the chain
+        (retrievable via <function>PQgetAuthDataHook</function>).
+       </para>
+       <para>
+        Success is indicated by returning an integer greater than zero.
+        Returning a negative integer signals an error condition and abandons the
+        connection attempt. (A zero value is reserved for the default
+        implementation.)
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="libpq-PQgetAuthDataHook">
+      <term><function>PQgetAuthDataHook</function><indexterm><primary>PQgetAuthDataHook</primary></indexterm></term>
+
+      <listitem>
+       <para>
+        Retrieves the current value of <symbol>PGauthDataHook</symbol>.
+<synopsis>
+PQauthDataHook_type PQgetAuthDataHook(void);
+</synopsis>
+        At initialization time (before the first call to
+        <function>PQsetAuthDataHook</function>), this function will return
+        <symbol>PQdefaultAuthDataHook</symbol>.
+       </para>
+      </listitem>
+     </varlistentry>
+    </variablelist>
+   </para>
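The chaining pattern described above can be sketched as follows. This is a self-contained illustration, not part of the patch: the typedefs and the `PQsetAuthDataHook`/`PQgetAuthDataHook` bodies below are stand-ins mirroring the patch's declarations, so that a real application would instead include `libpq-fe.h` and link against libpq.

```c
#include <stddef.h>

/* Stand-ins mirroring the patch's libpq-fe.h additions. */
typedef struct PGconn PGconn;   /* opaque */
typedef enum
{
    PQAUTHDATA_PROMPT_OAUTH_DEVICE,
    PQAUTHDATA_OAUTH_BEARER_TOKEN
} PGauthData;
typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);

static int
PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
{
    return 0;                   /* zero is reserved for the default */
}
static PQauthDataHook_type PGauthDataHook = PQdefaultAuthDataHook;
static void
PQsetAuthDataHook(PQauthDataHook_type hook)
{
    PGauthDataHook = hook ? hook : PQdefaultAuthDataHook;
}
static PQauthDataHook_type
PQgetAuthDataHook(void)
{
    return PGauthDataHook;
}

/* Application hook: handle device prompts, delegate everything else. */
static PQauthDataHook_type prev_hook;

static int
my_auth_hook(PGauthData type, PGconn *conn, void *data)
{
    if (type != PQAUTHDATA_PROMPT_OAUTH_DEVICE)
        return prev_hook(type, conn, data);     /* fall down the chain */

    /* ... display the device prompt in our own UI ... */
    return 1;                   /* greater than zero: handled successfully */
}

static void
install_hook(void)
{
    prev_hook = PQgetAuthDataHook();    /* save for delegation */
    PQsetAuthDataHook(my_auth_hook);
}
```

With this arrangement, authdata types the application does not understand still reach whichever handler was installed before it, preserving fallback behavior.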
+
+   <sect3 id="libpq-oauth-authdata-hooks-types">
+    <title>Hook Types</title>
+    <para>
+     The following <symbol>PGauthData</symbol> types and their corresponding
+     <replaceable>data</replaceable> structures are defined:
+
+     <variablelist>
+      <varlistentry id="libpq-oauth-authdata-prompt-oauth-device">
+       <term>
+        <symbol>PQAUTHDATA_PROMPT_OAUTH_DEVICE</symbol>
+        <indexterm><primary>PQAUTHDATA_PROMPT_OAUTH_DEVICE</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the default user prompt during the builtin device
+         authorization client flow. <replaceable>data</replaceable> points to
+         an instance of <symbol>PGpromptOAuthDevice</symbol>:
+<synopsis>
+typedef struct _PGpromptOAuthDevice
+{
+    const char *verification_uri;   /* verification URI to visit */
+    const char *user_code;          /* user code to enter */
+    const char *verification_uri_complete;  /* optional combination of URI and
+                                             * code, or NULL */
+    int         expires_in;         /* seconds until user code expires */
+} PGpromptOAuthDevice;
+</synopsis>
+        </para>
+        <para>
+         The OAuth Device Authorization flow included in <application>libpq</application>
+         requires the end user to visit a URL with a browser, then enter a code
+         which permits <application>libpq</application> to connect to the server
+         on their behalf. The default prompt simply prints the
+         <literal>verification_uri</literal> and <literal>user_code</literal>
+         on standard error. Replacement implementations may display this
+         information using any preferred method, for example with a GUI.
+        </para>
+        <para>
+         This callback is only invoked during the builtin device
+         authorization flow. If the application installs a
+         <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
+         flow</link>, this authdata type will not be used.
+        </para>
+        <para>
+         If a non-NULL <structfield>verification_uri_complete</structfield> is
+         provided, it may optionally be used for non-textual verification (for
+         example, by displaying a QR code). The URL and user code should still
+         be displayed to the end user in this case, because the code will be
+         manually confirmed by the provider, and the URL lets users continue
+         even if they can't use the non-textual method. For more information,
+         see section 3.3.1 in
+         <ulink url="https://datatracker.ietf.org/doc/html/rfc8628#section-3.3.1">RFC 8628</ulink>.
+        </para>
+       </listitem>
+      </varlistentry>
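A replacement prompt might format its message from the struct fields as sketched below. The struct is reproduced from the synopsis above so the sketch stands alone; a real hook would receive it through the `data` pointer of a `PQAUTHDATA_PROMPT_OAUTH_DEVICE` call, and `format_device_prompt` is an illustrative helper, not part of the API.

```c
#include <stdio.h>

/* Reproduced from the synopsis above. */
typedef struct _PGpromptOAuthDevice
{
    const char *verification_uri;
    const char *user_code;
    const char *verification_uri_complete;
    int         expires_in;
} PGpromptOAuthDevice;

/*
 * Formats the message a GUI (or QR-code dialog) would display; returns the
 * number of characters written, as snprintf does.
 */
static int
format_device_prompt(const PGpromptOAuthDevice *p, char *buf, size_t buflen)
{
    if (p->verification_uri_complete)
        /* Non-textual verification is possible, but still show the code. */
        return snprintf(buf, buflen,
                        "Scan or visit %s (code %s, expires in %d seconds)",
                        p->verification_uri_complete, p->user_code,
                        p->expires_in);

    return snprintf(buf, buflen, "Visit %s and enter the code: %s",
                    p->verification_uri, p->user_code);
}
```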
+
+      <varlistentry id="libpq-oauth-authdata-oauth-bearer-token">
+       <term>
+        <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol>
+        <indexterm><primary>PQAUTHDATA_OAUTH_BEARER_TOKEN</primary></indexterm>
+       </term>
+       <listitem>
+        <para>
+         Replaces the entire OAuth flow with a custom implementation. The hook
+         should either directly return a Bearer token for the current
+         user/issuer/scope combination, if one is available without blocking, or
+         else set up an asynchronous callback to retrieve one.
+        </para>
+        <para>
+         <replaceable>data</replaceable> points to an instance
+         of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
+         by the implementation:
+<synopsis>
+typedef struct _PGoauthBearerRequest
+{
+    /* Hook inputs (constant across all calls) */
+    const char *const openid_configuration; /* OIDC discovery URL */
+    const char *const scope;                /* required scope(s), or NULL */
+
+    /* Hook outputs */
+
+    /* Callback implementing a custom asynchronous OAuth flow. */
+    PostgresPollingStatusType (*async) (PGconn *conn,
+                                        struct _PGoauthBearerRequest *request,
+                                        SOCKTYPE *altsock);
+
+    /* Callback to clean up custom allocations. */
+    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+    char       *token;   /* acquired Bearer token */
+    void       *user;    /* hook-defined allocated data */
+} PGoauthBearerRequest;
+</synopsis>
+        </para>
+        <para>
+         Two pieces of information are provided to the hook by
+         <application>libpq</application>:
+         <replaceable>openid_configuration</replaceable> contains the URL of an
+         OAuth discovery document describing the authorization server's
+         supported flows, and <replaceable>scope</replaceable> contains a
+         (possibly empty) space-separated list of OAuth scopes which are
+         required to access the server. Either or both may be
+         <literal>NULL</literal> to indicate that the information was not
+         discoverable. (In this case, implementations may be able to establish
+         the requirements using some other preconfigured knowledge, or they may
+         choose to fail.)
+        </para>
+        <para>
+         The final output of the hook is <replaceable>token</replaceable>, which
+         must point to a valid Bearer token for use on the connection. (This
+         token should be issued by the
+         <xref linkend="libpq-connect-oauth-issuer"/> and hold the requested
+         scopes, or the connection will be rejected by the server's validator
+         module.) The allocated token string must remain valid until
+         <application>libpq</application> is finished connecting; the hook
+         should set a <replaceable>cleanup</replaceable> callback which will be
+         called when <application>libpq</application> no longer requires it.
+        </para>
+        <para>
+         If an implementation cannot immediately produce a
+         <replaceable>token</replaceable> during the initial call to the hook,
+         it should set the <replaceable>async</replaceable> callback to handle
+         nonblocking communication with the authorization server.
+         <footnote>
+          <para>
+           Performing blocking operations during the
+           <symbol>PQAUTHDATA_OAUTH_BEARER_TOKEN</symbol> hook callback will
+           interfere with nonblocking connection APIs such as
+           <function>PQconnectPoll</function> and prevent concurrent connections
+           from making progress. Applications which only ever use the
+           synchronous connection primitives, such as
+           <function>PQconnectdb</function>, may synchronously retrieve a token
+           during the hook instead of implementing the
+           <replaceable>async</replaceable> callback, but they will necessarily
+           be limited to one connection at a time.
+          </para>
+         </footnote>
+         This will be called to begin the flow immediately upon return from the
+         hook. When the callback cannot make further progress without blocking,
+         it should return either <symbol>PGRES_POLLING_READING</symbol> or
+         <symbol>PGRES_POLLING_WRITING</symbol> after setting
+         <literal>*pgsocket</literal> to the file descriptor that will be marked
+         ready to read/write when progress can be made again. (This descriptor
+         is then provided to the top-level polling loop via
+         <function>PQsocket()</function>.) Return <symbol>PGRES_POLLING_OK</symbol>
+         after setting <replaceable>token</replaceable> when the flow is
+         complete, or <symbol>PGRES_POLLING_FAILED</symbol> to indicate failure.
+        </para>
+        <para>
+         Implementations may wish to store additional data for bookkeeping
+         across calls to the <replaceable>async</replaceable> and
+         <replaceable>cleanup</replaceable> callbacks. The
+         <replaceable>user</replaceable> pointer is provided for this purpose;
+         <application>libpq</application> will not touch its contents and the
+         application may use it at its convenience. (Remember to free any
+         allocations during token cleanup.)
+        </para>
+       </listitem>
+      </varlistentry>
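For the simplest case, a hook that already holds a usable token can fill in the request synchronously, as in this sketch. The struct and enum are reproduced from the synopses above so the sketch stands alone (`PGconn` and `SOCKTYPE` are opaque stand-ins), and `provide_cached_token` is a hypothetical helper the application's hook would call.

```c
#include <stdlib.h>
#include <string.h>

typedef struct PGconn PGconn;   /* opaque stand-in */
typedef int SOCKTYPE;           /* stand-in */
typedef enum
{
    PGRES_POLLING_FAILED,
    PGRES_POLLING_READING,
    PGRES_POLLING_WRITING,
    PGRES_POLLING_OK
} PostgresPollingStatusType;

/* Reproduced from the synopsis above. */
typedef struct _PGoauthBearerRequest
{
    const char *const openid_configuration;
    const char *const scope;
    PostgresPollingStatusType (*async) (PGconn *conn,
                                        struct _PGoauthBearerRequest *request,
                                        SOCKTYPE *altsock);
    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
    char       *token;
    void       *user;
} PGoauthBearerRequest;

/* Frees the token allocated below; installed as the cleanup callback. */
static void
token_cleanup(PGconn *conn, PGoauthBearerRequest *request)
{
    free(request->token);
    request->token = NULL;
}

/*
 * Fills in the request from a token the application already holds for this
 * issuer/scope combination, without blocking. Returns 1 (handled) or -1
 * (abandon the connection), per the hook contract.
 */
static int
provide_cached_token(PGoauthBearerRequest *request, const char *cached)
{
    size_t      len;

    if (cached == NULL)
        return -1;              /* nothing cached, no flow implemented */

    len = strlen(cached) + 1;
    request->token = malloc(len);
    if (request->token == NULL)
        return -1;
    memcpy(request->token, cached, len);

    request->cleanup = token_cleanup;
    return 1;
}
```

A hook that must instead contact the authorization server would leave `token` unset here and install the `async` callback, performing the network exchange nonblockingly as described above.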
+     </variablelist>
+    </para>
+   </sect3>
+  </sect2>
+
+  <sect2 id="libpq-oauth-debugging">
+   <title>Debugging and Developer Settings</title>
+
+   <para>
+    A "dangerous debugging mode" may be enabled by setting the environment
+    variable <envar>PGOAUTHDEBUG=UNSAFE</envar>. This functionality is provided
+    for ease of local development and testing only. It does several things that
+    you will not want a production system to do:
+
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       permits the use of unencrypted HTTP during the OAuth provider exchange
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       allows the system's trusted CA list to be completely replaced using the
+       <envar>PGOAUTHCAFILE</envar> environment variable
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       prints HTTP traffic (containing several critical secrets) to standard
+       error during the OAuth flow
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       permits the use of zero-second retry intervals, which can cause the
+       client to busy-loop and pointlessly consume CPU
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
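For local testing, the settings above might be combined as follows; the CA file path, localhost issuer, and client ID are placeholders for a local test provider, and the psql invocation is shown commented out.

```shell
# Local development/testing only; never set these in production.
export PGOAUTHDEBUG=UNSAFE          # permit HTTP, zero-second retries, and
                                    # HTTP traffic dumps on standard error
export PGOAUTHCAFILE=./test-ca.crt  # replace the system CA list entirely
# psql 'host=localhost oauth_issuer=http://127.0.0.1:8080 oauth_client_id=test'
```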
+   <warning>
+    <para>
+     Do not share the traffic logged by the OAuth flow with third parties. It
+     contains secrets that can be used to attack your clients and servers.
+    </para>
+   </warning>
+  </sect2>
+ </sect1>
+
 
  <sect1 id="libpq-threading">
   <title>Behavior in Threaded Programs</title>
@@ -10092,6 +10525,18 @@ int PQisthreadsafe();
    <application>libpq</application> source code for a way to do cooperative
    locking between <application>libpq</application> and your application.
   </para>
+
+  <para>
+   Similarly, if you are using <productname>Curl</productname> inside your application,
+   <emphasis>and</emphasis> you do not already
+   <ulink url="https://curl.se/libcurl/c/curl_global_init.html">initialize
+   libcurl globally</ulink> before starting new threads, you will need to
+   cooperatively lock (again via <function>PQregisterThreadLock</function>)
+   around any code that may initialize libcurl. This restriction is lifted for
+   more recent versions of <productname>Curl</productname> that are built to support thread-safe
+   initialization; those builds can be identified by the advertisement of a
+   <literal>threadsafe</literal> feature in their version metadata.
+  </para>
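The cooperative locking described above might be sketched as follows. The handler matches libpq's `pgthreadlock_t` signature; the `PQregisterThreadLock` and `curl_global_init` call sites are shown in comments, since this sketch does not link against libpq or libcurl.

```c
#include <pthread.h>

static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

/* Matches pgthreadlock_t: acquire != 0 locks, acquire == 0 unlocks. */
static void
thread_lock(int acquire)
{
    if (acquire)
        pthread_mutex_lock(&init_lock);
    else
        pthread_mutex_unlock(&init_lock);
}

/*
 * At startup, before spawning any threads:
 *
 *     PQregisterThreadLock(thread_lock);
 *
 * Later, any application code that might initialize libcurl takes the same
 * lock around it, so the two libraries cannot race on global init:
 *
 *     thread_lock(1);
 *     curl_global_init(CURL_GLOBAL_DEFAULT);
 *     thread_lock(0);
 */
```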
  </sect1>
 
 
diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
new file mode 100644
index 00000000000..356f11d3bd8
--- /dev/null
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -0,0 +1,414 @@
+<!-- doc/src/sgml/oauth-validators.sgml -->
+
+<chapter id="oauth-validators">
+ <title>OAuth Validator Modules</title>
+ <indexterm zone="oauth-validators">
+  <primary>OAuth Validators</primary>
+ </indexterm>
+ <para>
+  <productname>PostgreSQL</productname> provides infrastructure for creating
+  custom modules to perform server-side validation of OAuth bearer tokens.
+  Because OAuth implementations vary so wildly, and bearer token validation is
+  heavily dependent on the issuing party, the server cannot check the token
+  itself; validator modules provide the integration layer between the server
+  and the OAuth provider in use.
+ </para>
+ <para>
+  OAuth validator modules must at least consist of an initialization function
+  (see <xref linkend="oauth-validator-init"/>) and the required callback for
+  performing validation (see <xref linkend="oauth-validator-callback-validate"/>).
+ </para>
+ <warning>
+  <para>
+   Since a misbehaving validator might let unauthorized users into the database,
+   correct implementation is crucial for server safety. See
+   <xref linkend="oauth-validator-design"/> for design considerations.
+  </para>
+ </warning>
+
+ <sect1 id="oauth-validator-design">
+  <title>Safely Designing a Validator Module</title>
+  <warning>
+   <para>
+    Read and understand the entirety of this section before implementing a
+    validator module. A malfunctioning validator is potentially worse than no
+    authentication at all, both because of the false sense of security it
+    provides, and because it may contribute to attacks against other pieces of
+    an OAuth ecosystem.
+   </para>
+  </warning>
+
+  <sect2 id="oauth-validator-design-responsibilities">
+   <title>Validator Responsibilities</title>
+   <para>
+    Although different modules may take very different approaches to token
+    validation, implementations generally need to perform three separate
+    actions:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Validate the Token</term>
+     <listitem>
+      <para>
+       The validator must first ensure that the presented token is in fact a
+       valid Bearer token for use in client authentication. The correct way to
+       do this depends on the provider, but it generally involves either
+       cryptographic operations to prove that the token was created by a trusted
+       party (offline validation), or the presentation of the token to that
+       trusted party so that it can perform validation for you (online
+       validation).
+      </para>
+      <para>
+       Online validation, usually implemented via
+       <ulink url="https://datatracker.ietf.org/doc/html/rfc7662">OAuth Token
+       Introspection</ulink>, requires fewer steps of a validator module and
+       allows central revocation of a token in the event that it is stolen
+       or misissued. However, it does require the module to make at least one
+       network call per authentication attempt (all of which must complete
+       within the configured <xref linkend="guc-authentication-timeout"/>).
+       Additionally, your provider may not provide introspection endpoints for
+       use by external resource servers.
+      </para>
+      <para>
+       Offline validation is much more involved, typically requiring a validator
+       to maintain a list of trusted signing keys for a provider and then
+       check the token's cryptographic signature along with its contents.
+       Implementations must follow the provider's instructions to the letter,
+       including any verification of issuer ("where is this token from?"),
+       audience ("who is this token for?"), and validity period ("when can this
+       token be used?"). Since there is no communication between the module and
+       the provider, tokens cannot be centrally revoked using this method;
+       offline validator implementations may wish to place restrictions on the
+       maximum length of a token's validity period.
+      </para>
+      <para>
+       If the token cannot be validated, the module should immediately fail.
+       Further authentication/authorization is pointless if the bearer token
+       wasn't issued by a trusted party.
+      </para>
+     </listitem>
+    </varlistentry>
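The offline issuer/audience/validity checks described above might look roughly like this, operating on a token whose cryptographic signature has already been verified. The struct and field names are illustrative only; a real validator would obtain these claims from its JWT or provider library.

```c
#include <stdbool.h>
#include <string.h>
#include <time.h>

/* Illustrative claims extracted by a (hypothetical) token parser. */
typedef struct
{
    const char *issuer;         /* "iss": where is this token from? */
    const char *audience;       /* "aud": who is this token for? */
    time_t      not_before;     /* "nbf": start of validity period */
    time_t      expires_at;     /* "exp": end of validity period */
} ParsedToken;

static bool
check_claims(const ParsedToken *tok, const char *trusted_issuer,
             const char *my_audience, time_t now)
{
    if (strcmp(tok->issuer, trusted_issuer) != 0)
        return false;           /* issued by the wrong provider */
    if (strcmp(tok->audience, my_audience) != 0)
        return false;           /* issued for some other resource server */
    if (now < tok->not_before || now >= tok->expires_at)
        return false;           /* outside the validity period */
    return true;
}
```

Every one of these checks must pass before the module proceeds to authorization; failing any of them should fail the authentication attempt immediately.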
+    <varlistentry>
+     <term>Authorize the Client</term>
+     <listitem>
+      <para>
+       Next the validator must ensure that the end user has given the client
+       permission to access the server on their behalf. This generally involves
+       checking the scopes that have been assigned to the token, to make sure
+       that they cover database access for the current HBA parameters.
+      </para>
+      <para>
+       The purpose of this step is to prevent an OAuth client from obtaining a
+       token under false pretenses. If the validator requires all tokens to
+       carry scopes that cover database access, the provider should then explicitly
+       prompt the user to grant that access during the flow. This gives them the
+       opportunity to reject the request if the client isn't supposed to be
+       using their credentials to connect to databases.
+      </para>
+      <para>
+       While it is possible to establish client authorization without explicit
+       scopes by using out-of-band knowledge of the deployed architecture, doing
+       so removes the user from the loop, which prevents them from catching
+       deployment mistakes and allows any such mistakes to be exploited
+       silently. Access to the database must be tightly restricted to only
+       trusted clients
+       <footnote>
+        <para>
+         That is, "trusted" in the sense that the OAuth client and the
+         <productname>PostgreSQL</productname> server are controlled by the same
+         entity. Notably, the Device Authorization client flow supported by
+         libpq does not usually meet this bar, since it's designed for use by
+         public/untrusted clients.
+        </para>
+       </footnote>
+       if users are not prompted for additional scopes.
+      </para>
+      <para>
+       Even if authorization fails, a module may choose to continue to pull
+       authentication information from the token for use in auditing and
+       debugging.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Authenticate the End User</term>
+     <listitem>
+      <para>
+       Finally, the validator should determine a user identifier for the token,
+       either by asking the provider for this information or by extracting it
+       from the token itself, and return that identifier to the server (which
+       will then make a final authorization decision using the HBA
+       configuration). This identifier will be available within the session via
+       <link linkend="functions-info-session-table"><function>system_user</function></link>
+       and recorded in the server logs if <xref linkend="guc-log-connections"/>
+       is enabled.
+      </para>
+      <para>
+       Different providers may record a variety of different authentication
+       information for an end user, typically referred to as
+       <emphasis>claims</emphasis>. Providers usually document which of these
+       claims are trustworthy enough to use for authorization decisions and
+       which are not. (For instance, it would probably not be wise to use an
+       end user's full name as the identifier for authentication, since many
+       providers allow users to change their display names arbitrarily.)
+       Ultimately, the choice of which claim (or combination of claims) to use
+       comes down to the provider implementation and application requirements.
+      </para>
+      <para>
+       Note that anonymous/pseudonymous login is possible as well, by enabling
+       usermap delegation; see
+       <xref linkend="oauth-validator-design-usermap-delegation"/>.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-guidelines">
+   <title>General Coding Guidelines</title>
+   <para>
+    Developers should keep the following in mind when implementing token
+    validation:
+   </para>
+   <variablelist>
+    <varlistentry>
+     <term>Token Confidentiality</term>
+     <listitem>
+      <para>
+       Modules should not write tokens, or pieces of tokens, into the server
+       log. This is true even if the module considers the token invalid; an
+       attacker who confuses a client into communicating with the wrong provider
+       should not be able to retrieve that (otherwise valid) token from the
+       disk.
+      </para>
+      <para>
+       Implementations that send tokens over the network (for example, to
+       perform online token validation with a provider) must authenticate the
+       peer and ensure that strong transport security is in use.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Logging</term>
+     <listitem>
+      <para>
+       Modules may use the same <link linkend="error-message-reporting">logging
+       facilities</link> as standard extensions; however, the rules for emitting
+       log entries to the client are subtly different during the authentication
+       phase of the connection. Generally speaking, modules should log
+       verification problems at the <symbol>COMMERROR</symbol> level and return
+       normally, instead of using <symbol>ERROR</symbol>/<symbol>FATAL</symbol>
+       to unwind the stack, to avoid leaking information to unauthenticated
+       clients.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Testing</term>
+     <listitem>
+      <para>
+       The breadth of testing an OAuth system is well beyond the scope of this
+       documentation, but at minimum, negative testing should be considered
+       mandatory. It's trivial to design a module that lets authorized users in;
+       the whole point of the system is to keep unauthorized users out.
+      </para>
+     </listitem>
+    </varlistentry>
+    <varlistentry>
+     <term>Documentation</term>
+     <listitem>
+      <para>
+       Validator implementations should document the contents and format of the
+       authenticated ID that is reported to the server for each end user, since
+       DBAs may need to use this information to construct
+       <filename>pg_ident.conf</filename> maps. (For instance, is it an
+       email address? an organizational ID number? a UUID?)
+       They should also document whether or not it is safe to use the module in
+       <symbol>delegate_ident_mapping=1</symbol> mode, and what additional
+       configuration is required in order to do so.
+      </para>
+     </listitem>
+    </varlistentry>
+   </variablelist>
+  </sect2>
+
+  <sect2 id="oauth-validator-design-usermap-delegation">
+   <title>Authorizing Users (Usermap Delegation)</title>
+   <para>
+    The standard deliverable of a validation module is the user identifier,
+    which the server will then compare to any configured
+    <link linkend="auth-username-maps"><filename>pg_ident.conf</filename>
+    mappings</link> and determine whether the end user is authorized to connect.
+    However, OAuth is itself an authorization framework, and tokens may carry
+    information about user privileges. For example, a token may be associated
+    with the organizational groups that a user belongs to, or list the roles
+    that a user may assume, and duplicating that knowledge into local usermaps
+    for every server may not be desirable.
+   </para>
+   <para>
+    To bypass username mapping entirely, and have the validator module assume
+    the additional responsibility of authorizing user connections, the HBA may
+    be configured with <xref linkend="auth-oauth-delegate-ident-mapping"/>.
+    The module may then use token scopes or an equivalent method to decide
+    whether the user is allowed to connect under their desired role. The user
+    identifier will still be recorded by the server, but it plays no part in
+    determining whether to continue the connection.
+   </para>
+   <para>
+    Using this scheme, authentication itself is optional. As long as the module
+    reports that the connection is authorized, login will continue even if there
+    is no recorded user identifier at all. This makes it possible to implement
+    anonymous or pseudonymous access to the database, where the third-party
+    provider performs all necessary authentication but does not provide any
+    user-identifying information to the server. (Some providers may create an
+    anonymized ID number that can be recorded instead, for later auditing.)
+   </para>
+   <para>
+    Usermap delegation provides the most architectural flexibility, but it turns
+    the validator module into a single point of failure for connection
+    authorization. Use with caution.
+   </para>
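   <para>
    As a concrete illustration, an HBA entry delegating authorization to the
    module might look like the following sketch. (The validator library name
    <literal>my_validator</literal> and the issuer URL are hypothetical, not
    shipped defaults.)
   </para>
```
# pg_hba.conf (hypothetical example): all authorization decisions are
# delegated to the "my_validator" module; no usermap is consulted.
host all all samehost oauth issuer="https://issuer.example.com" scope="openid" validator=my_validator delegate_ident_mapping=1
```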
+  </sect2>
+ </sect1>
+
+ <sect1 id="oauth-validator-init">
+  <title>Initialization Functions</title>
+  <indexterm zone="oauth-validator-init">
+   <primary>_PG_oauth_validator_module_init</primary>
+  </indexterm>
+  <para>
+   OAuth validator modules are dynamically loaded from the shared
+   libraries listed in <xref linkend="guc-oauth-validator-libraries"/>.
+   Modules are loaded on demand, when a login in progress requests them.
+   The normal library search path is used to locate the library. To
+   provide the validator callbacks and to indicate that the library is an
+   OAuth validator module, a function named
+   <function>_PG_oauth_validator_module_init</function> must be provided. The
+   return value of the function must be a pointer to a struct of type
+   <structname>OAuthValidatorCallbacks</structname>, which contains a magic
+   number and pointers to the module's token validation functions. The returned
+   pointer must be of server lifetime, which is typically achieved by defining
+   it as a <literal>static const</literal> variable in global scope.
+<programlisting>
+typedef struct OAuthValidatorCallbacks
+{
+    uint32        magic;            /* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+    ValidatorStartupCB startup_cb;
+    ValidatorShutdownCB shutdown_cb;
+    ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+</programlisting>
+
+   Only the <function>validate_cb</function> callback is required; the others
+   are optional.
+  </para>
+ </sect1>
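Putting the pieces above together, a minimal validator module skeleton might look like the following sketch. It is hypothetical (the module and function names are examples), must be built as a loadable library against the server headers, and rejects every token as a safe placeholder default:

```c
/* Sketch of a minimal OAuth validator module; requires the PostgreSQL
 * server headers and must be compiled as a loadable library (e.g. via
 * PGXS). Not a shipped example. */
#include "postgres.h"

#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static bool
validate_token(const ValidatorModuleState *state,
               const char *token, const char *role,
               ValidatorModuleResult *result)
{
    /* Reject everything: a safe default for a skeleton. */
    result->authorized = false;
    result->authn_id = NULL;
    return true;
}

/* Must have server lifetime: a static const at global scope. */
static const OAuthValidatorCallbacks callbacks = {
    .magic = PG_OAUTH_VALIDATOR_MAGIC,

    /* validate_cb is the only required callback. */
    .validate_cb = validate_token,
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
    return &callbacks;
}
```

The `startup_cb` and `shutdown_cb` members are simply left unset here, since only `validate_cb` is required.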
+
+ <sect1 id="oauth-validator-callbacks">
+  <title>OAuth Validator Callbacks</title>
+  <para>
+   OAuth validator modules implement their functionality by defining a set of
+   callbacks. The server will call them as required to process the
+   authentication request from the user.
+  </para>
+
+  <sect2 id="oauth-validator-callback-startup">
+   <title>Startup Callback</title>
+   <para>
+    The <function>startup_cb</function> callback is executed directly after
+    loading the module. This callback can be used to set up local state and
+    perform additional initialization if required. If the validator module
+    needs to keep state, it can use <structfield>state->private_data</structfield>
+    to store it.
+
+<programlisting>
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+  <sect2 id="oauth-validator-callback-validate">
+   <title>Validate Callback</title>
+   <para>
+    The <function>validate_cb</function> callback is executed during the OAuth
+    exchange when a user attempts to authenticate using OAuth.  Any state set in
+    previous calls will be available in <structfield>state->private_data</structfield>.
+
+<programlisting>
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+                                     const char *token, const char *role,
+                                     ValidatorModuleResult *result);
+</programlisting>
+
+    <replaceable>token</replaceable> will contain the bearer token to validate.
+    <productname>PostgreSQL</productname> has ensured that the token is
+    syntactically well-formed, but no
+    other validation has been performed.  <replaceable>role</replaceable> will
+    contain the role the user has requested to log in as.  The callback must
+    set output parameters in the <literal>result</literal> struct, which is
+    defined as below:
+
+<programlisting>
+typedef struct ValidatorModuleResult
+{
+    bool        authorized;
+    char       *authn_id;
+} ValidatorModuleResult;
+</programlisting>
+
+    The connection will only proceed if the module sets
+    <structfield>result->authorized</structfield> to <literal>true</literal>.  To
+    authenticate the user, the authenticated user name (as determined using the
+    token) shall be palloc'd and returned in the <structfield>result->authn_id</structfield>
+    field.  Alternatively, <structfield>result->authn_id</structfield> may be set to
+    NULL if the token is valid but the associated user identity cannot be
+    determined.
+   </para>
+   <para>
+    A validator may return <literal>false</literal> to signal an internal error,
+    in which case any result parameters are ignored and the connection fails.
+    Otherwise the validator should return <literal>true</literal> to indicate
+    that it has processed the token and made an authorization decision.
+   </para>
+   <para>
+    The behavior after <function>validate_cb</function> returns depends on the
+    specific HBA setup.  Normally, the <structfield>result->authn_id</structfield> user
+    name must exactly match the role that the user is logging in as.  (This
+    behavior may be modified with a usermap.)  But when authenticating against
+    an HBA rule with <literal>delegate_ident_mapping</literal> turned on,
+    <productname>PostgreSQL</productname> will not perform any checks on the value of
+    <structfield>result->authn_id</structfield> at all; in this case it is up to the
+    validator to ensure that the token carries enough privileges for the user to
+    log in under the indicated <replaceable>role</replaceable>.
+   </para>
+  </sect2>
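As a sketch of the contract described above, a `validate_cb` might be structured as follows. The introspection step is deliberately elided: `introspect_token()` is a hypothetical helper, not a real API, and a real module would contact its provider or verify the token's signature there.

```c
/* Hypothetical validate_cb. introspect_token() is a placeholder for
 * provider-specific verification; it is assumed to return false on
 * internal error, and on success to set *username to a palloc'd
 * identity (or NULL if the token could not be validated). */
static bool
validate_token(const ValidatorModuleState *state,
               const char *token, const char *role,
               ValidatorModuleResult *result)
{
    char       *username;

    if (!introspect_token(token, &username))
        return false;           /* internal error; connection fails */

    if (username == NULL)
    {
        /* Syntactically fine, but not a valid token for this provider. */
        result->authorized = false;
        result->authn_id = NULL;
        return true;
    }

    /* Report the authenticated identity; the server applies the HBA
     * configuration (usermaps, etc.) to make the final decision. */
    result->authorized = true;
    result->authn_id = username;    /* already palloc'd */
    return true;
}
```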
+
+  <sect2 id="oauth-validator-callback-shutdown">
+   <title>Shutdown Callback</title>
+   <para>
+    The <function>shutdown_cb</function> callback is executed when the backend
+    process associated with the connection exits. If the validator module has
+    any allocated state, this callback should free it to avoid resource leaks.
+<programlisting>
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+</programlisting>
+   </para>
+  </sect2>
+
+ </sect1>
+</chapter>
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 7be25c58507..af476c82fcc 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -229,6 +229,7 @@ break is not needed in a wider output rendering.
   &logicaldecoding;
   &replication-origins;
   &archive-modules;
+  &oauth-validators;
 
  </part>
 
diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml
index fb5dec1172e..3bd9e68e6ce 100644
--- a/doc/src/sgml/protocol.sgml
+++ b/doc/src/sgml/protocol.sgml
@@ -1688,11 +1688,11 @@ SELCT 1/0;<!-- this typo is intentional -->
 
   <para>
    <firstterm>SASL</firstterm> is a framework for authentication in connection-oriented
-   protocols. At the moment, <productname>PostgreSQL</productname> implements two SASL
-   authentication mechanisms, SCRAM-SHA-256 and SCRAM-SHA-256-PLUS. More
-   might be added in the future. The below steps illustrate how SASL
-   authentication is performed in general, while the next subsection gives
-   more details on SCRAM-SHA-256 and SCRAM-SHA-256-PLUS.
+   protocols. At the moment, <productname>PostgreSQL</productname> implements three
+   SASL authentication mechanisms: SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and
+   OAUTHBEARER. More might be added in the future. The below steps illustrate how SASL
+   authentication is performed in general, while the next subsections give
+   more details on particular mechanisms.
   </para>
 
   <procedure>
@@ -1727,7 +1727,7 @@ SELCT 1/0;<!-- this typo is intentional -->
    <step id="sasl-auth-end">
     <para>
      Finally, when the authentication exchange is completed successfully, the
-     server sends an AuthenticationSASLFinal message, followed
+     server sends an optional AuthenticationSASLFinal message, followed
      immediately by an AuthenticationOk message. The AuthenticationSASLFinal
      contains additional server-to-client data, whose content is particular to the
      selected authentication mechanism. If the authentication mechanism doesn't
@@ -1746,9 +1746,9 @@ SELCT 1/0;<!-- this typo is intentional -->
    <title>SCRAM-SHA-256 Authentication</title>
 
    <para>
-    The implemented SASL mechanisms at the moment
-    are <literal>SCRAM-SHA-256</literal> and its variant with channel
-    binding <literal>SCRAM-SHA-256-PLUS</literal>. They are described in
+    <literal>SCRAM-SHA-256</literal>, and its variant with channel
+    binding <literal>SCRAM-SHA-256-PLUS</literal>, are password-based
+    authentication mechanisms. They are described in
     detail in <ulink url="https://datatracker.ietf.org/doc/html/rfc7677">RFC 7677</ulink>
     and <ulink url="https://datatracker.ietf.org/doc/html/rfc5802">RFC 5802</ulink>.
    </para>
@@ -1850,6 +1850,121 @@ SELCT 1/0;<!-- this typo is intentional -->
     </step>
    </procedure>
   </sect2>
+
+  <sect2 id="sasl-oauthbearer">
+   <title>OAUTHBEARER Authentication</title>
+
+   <para>
+    <literal>OAUTHBEARER</literal> is a token-based mechanism for federated
+    authentication. It is described in detail in
+    <ulink url="https://datatracker.ietf.org/doc/html/rfc7628">RFC 7628</ulink>.
+   </para>
+
+   <para>
+    A typical exchange differs depending on whether or not the client already
+    has a bearer token cached for the current user. If it does not, the exchange
+    will take place over two connections: the first "discovery" connection to
+    obtain OAuth metadata from the server, and the second connection to send
+    the token after the client has obtained it. (libpq does not currently
+    implement a caching method as part of its builtin flow, so it uses the
+    two-connection exchange.)
+   </para>
+
+   <para>
+    This mechanism is client-initiated, like SCRAM. The client initial response
+    consists of the standard "GS2" header used by SCRAM, followed by a list of
+    <literal>key=value</literal> pairs. The only key currently supported by
+    the server is <literal>auth</literal>, which contains the bearer token.
+    <literal>OAUTHBEARER</literal> additionally specifies three optional
+    components of the client initial response (the <literal>authzid</literal> of
+    the GS2 header, and the <structfield>host</structfield> and
+    <structfield>port</structfield> keys) which are currently ignored by the
+    server.
+   </para>
+
+   <para>
+    <literal>OAUTHBEARER</literal> does not support channel binding, and there
+    is no "OAUTHBEARER-PLUS" mechanism. This mechanism does not make use of
+    server data during a successful authentication, so the
+    AuthenticationSASLFinal message is not used in the exchange.
+   </para>
+
+   <procedure>
+    <title>Example</title>
+    <step>
+     <para>
+      During the first exchange, the server sends an AuthenticationSASL message
+      with the <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message which
+      indicates the <literal>OAUTHBEARER</literal> mechanism. Assuming the
+      client does not already have a valid bearer token for the current user,
+      the <structfield>auth</structfield> field is empty, indicating a discovery
+      connection.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an AuthenticationSASLContinue message containing an
+      error <literal>status</literal> alongside a well-known URI and scopes
+      that the client should use to conduct an OAuth flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client sends a SASLResponse message containing an empty key/value
+      list (a single <literal>0x01</literal> kvsep byte) to finish its half
+      of the discovery exchange.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server sends an ErrorMessage to fail the first exchange.
+     </para>
+     <para>
+      At this point, the client conducts one of many possible OAuth flows to
+      obtain a bearer token, using any metadata that it has been configured with
+      in addition to that provided by the server. (This description is left
+      deliberately vague; <literal>OAUTHBEARER</literal> does not specify or
+      mandate any particular method for obtaining a token.)
+     </para>
+     <para>
+      Once it has a token, the client reconnects to the server for the final
+      exchange:
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server once again sends an AuthenticationSASL message with the
+      <literal>OAUTHBEARER</literal> mechanism advertised.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The client responds by sending a SASLInitialResponse message, but this
+      time the <structfield>auth</structfield> field in the message contains the
+      bearer token that was obtained during the client flow.
+     </para>
+    </step>
+
+    <step>
+     <para>
+      The server validates the token according to the instructions of the
+      token provider. If the client is authorized to connect, it sends an
+      AuthenticationOk message to end the SASL exchange.
+     </para>
+    </step>
+   </procedure>
+  </sect2>
  </sect1>
 
  <sect1 id="protocol-replication">
diff --git a/doc/src/sgml/regress.sgml b/doc/src/sgml/regress.sgml
index 7c474559bdf..0e5e8e8f309 100644
--- a/doc/src/sgml/regress.sgml
+++ b/doc/src/sgml/regress.sgml
@@ -347,6 +347,16 @@ make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
       </para>
      </listitem>
     </varlistentry>
+
+    <varlistentry>
+     <term><literal>oauth</literal></term>
+     <listitem>
+      <para>
+       Runs the test suite under <filename>src/test/modules/oauth_validator</filename>.
+       This opens TCP/IP listen sockets for a test server running HTTPS.
+      </para>
+     </listitem>
+    </varlistentry>
    </variablelist>
 
    Tests for features that are not supported by the current build
diff --git a/meson.build b/meson.build
index 7dd7110318d..574f992ed49 100644
--- a/meson.build
+++ b/meson.build
@@ -855,6 +855,101 @@ endif
 
 
 
+###############################################################
+# Library: libcurl
+###############################################################
+
+libcurlopt = get_option('libcurl')
+if not libcurlopt.disabled()
+  # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
+  # to explicitly set TLS 1.3 ciphersuites).
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  if libcurl.found()
+    cdata.set('USE_LIBCURL', 1)
+
+    # Check to see whether the current platform supports thread-safe Curl
+    # initialization.
+    libcurl_threadsafe_init = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+        #ifdef CURL_VERSION_THREADSAFE
+            if (info->features & CURL_VERSION_THREADSAFE)
+                return 0;
+        #endif
+
+            return 1;
+        }''',
+        name: 'test for curl_global_init thread safety',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_threadsafe_init = true
+        message('curl_global_init is thread-safe')
+      elif r.returncode() == 1
+        message('curl_global_init is not thread-safe')
+      else
+        message('curl_global_init failed; assuming not thread-safe')
+      endif
+    endif
+
+    if libcurl_threadsafe_init
+      cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
+    endif
+
+    # Warn if a thread-friendly DNS resolver isn't built.
+    libcurl_async_dns = false
+
+    if not meson.is_cross_build()
+      r = cc.run('''
+        #include <curl/curl.h>
+
+        int main(void)
+        {
+            curl_version_info_data *info;
+
+            if (curl_global_init(CURL_GLOBAL_ALL))
+                return -1;
+
+            info = curl_version_info(CURLVERSION_NOW);
+            return (info->features & CURL_VERSION_ASYNCHDNS) ? 0 : 1;
+        }''',
+        name: 'test for curl support for asynchronous DNS',
+        dependencies: libcurl,
+      )
+
+      assert(r.compiled())
+      if r.returncode() == 0
+        libcurl_async_dns = true
+      endif
+    endif
+
+    if not libcurl_async_dns
+      warning('''
+*** The installed version of libcurl does not support asynchronous DNS
+*** lookups. Connection timeouts will not be honored during DNS resolution,
+*** which may lead to hangs in client programs.''')
+    endif
+  endif
+
+else
+  libcurl = not_found_dep
+endif
+
+
+
 ###############################################################
 # Library: libxml
 ###############################################################
@@ -3045,6 +3140,10 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
+  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
+  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
+  # dependency on that platform?
+  libcurl,
   libintl,
   ssl,
 ]
@@ -3721,6 +3820,7 @@ if meson.version().version_compare('>=0.57')
       'gss': gssapi,
       'icu': icu,
       'ldap': ldap,
+      'libcurl': libcurl,
       'libxml': libxml,
       'libxslt': libxslt,
       'llvm': llvm,
diff --git a/meson_options.txt b/meson_options.txt
index d9c7ddccbc4..702c4517145 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,6 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+option('libcurl', type : 'feature', value: 'auto',
+  description: 'libcurl support')
+
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bbe11e75bf0..3b620bac5ac 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -190,6 +190,7 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
+with_libcurl	= @with_libcurl@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
diff --git a/src/backend/libpq/Makefile b/src/backend/libpq/Makefile
index 6d385fd6a45..98eb2a8242d 100644
--- a/src/backend/libpq/Makefile
+++ b/src/backend/libpq/Makefile
@@ -15,6 +15,7 @@ include $(top_builddir)/src/Makefile.global
 # be-fsstubs is here for historical reasons, probably belongs elsewhere
 
 OBJS = \
+	auth-oauth.o \
 	auth-sasl.o \
 	auth-scram.o \
 	auth.o \
diff --git a/src/backend/libpq/auth-oauth.c b/src/backend/libpq/auth-oauth.c
new file mode 100644
index 00000000000..27f7af7be00
--- /dev/null
+++ b/src/backend/libpq/auth-oauth.c
@@ -0,0 +1,894 @@
+/*-------------------------------------------------------------------------
+ *
+ * auth-oauth.c
+ *	  Server-side implementation of the SASL OAUTHBEARER mechanism.
+ *
+ * See the following RFC for more details:
+ * - RFC 7628: https://datatracker.ietf.org/doc/html/rfc7628
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/backend/libpq/auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+
+#include "common/oauth-common.h"
+#include "fmgr.h"
+#include "lib/stringinfo.h"
+#include "libpq/auth.h"
+#include "libpq/hba.h"
+#include "libpq/oauth.h"
+#include "libpq/sasl.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "utils/json.h"
+#include "utils/varlena.h"
+
+/* GUC */
+char	   *oauth_validator_libraries_string = NULL;
+
+static void oauth_get_mechanisms(Port *port, StringInfo buf);
+static void *oauth_init(Port *port, const char *selected_mech, const char *shadow_pass);
+static int	oauth_exchange(void *opaq, const char *input, int inputlen,
+						   char **output, int *outputlen, const char **logdetail);
+
+static void load_validator_library(const char *libname);
+static void shutdown_validator_library(void *arg);
+
+static ValidatorModuleState *validator_module_state;
+static const OAuthValidatorCallbacks *ValidatorCallbacks;
+
+/* Mechanism declaration */
+const pg_be_sasl_mech pg_be_oauth_mech = {
+	.get_mechanisms = oauth_get_mechanisms,
+	.init = oauth_init,
+	.exchange = oauth_exchange,
+
+	.max_message_length = PG_MAX_AUTH_TOKEN_LENGTH,
+};
+
+/* Valid states for the oauth_exchange() machine. */
+enum oauth_state
+{
+	OAUTH_STATE_INIT = 0,
+	OAUTH_STATE_ERROR,
+	OAUTH_STATE_FINISHED,
+};
+
+/* Mechanism callback state. */
+struct oauth_ctx
+{
+	enum oauth_state state;
+	Port	   *port;
+	const char *issuer;
+	const char *scope;
+};
+
+static char *sanitize_char(char c);
+static char *parse_kvpairs_for_auth(char **input);
+static void generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen);
+static bool validate(Port *port, const char *auth);
+
+/* Constants seen in an OAUTHBEARER client initial response. */
+#define KVSEP 0x01				/* separator byte for key/value pairs */
+#define AUTH_KEY "auth"			/* key containing the Authorization header */
+#define BEARER_SCHEME "Bearer " /* required header scheme (case-insensitive!) */
+
+/*
+ * Retrieves the OAUTHBEARER mechanism list (currently a single item).
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void
+oauth_get_mechanisms(Port *port, StringInfo buf)
+{
+	/* Only OAUTHBEARER is supported. */
+	appendStringInfoString(buf, OAUTHBEARER_NAME);
+	appendStringInfoChar(buf, '\0');
+}
+
+/*
+ * Initializes mechanism state and loads the configured validator module.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static void *
+oauth_init(Port *port, const char *selected_mech, const char *shadow_pass)
+{
+	struct oauth_ctx *ctx;
+
+	if (strcmp(selected_mech, OAUTHBEARER_NAME) != 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("client selected an invalid SASL authentication mechanism"));
+
+	ctx = palloc0(sizeof(*ctx));
+
+	ctx->state = OAUTH_STATE_INIT;
+	ctx->port = port;
+
+	Assert(port->hba);
+	ctx->issuer = port->hba->oauth_issuer;
+	ctx->scope = port->hba->oauth_scope;
+
+	load_validator_library(port->hba->oauth_validator);
+
+	return ctx;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2). This pulls
+ * apart the client initial response and validates the Bearer token. It also
+ * handles the dummy error response for a failed handshake, as described in
+ * Sec. 3.2.3.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static int
+oauth_exchange(void *opaq, const char *input, int inputlen,
+			   char **output, int *outputlen, const char **logdetail)
+{
+	char	   *input_copy;
+	char	   *p;
+	char		cbind_flag;
+	char	   *auth;
+	int			status;
+
+	struct oauth_ctx *ctx = opaq;
+
+	*output = NULL;
+	*outputlen = -1;
+
+	/*
+	 * If the client didn't include an "Initial Client Response" in the
+	 * SASLInitialResponse message, send an empty challenge, to which the
+	 * client will respond with the same data that usually comes in the
+	 * Initial Client Response.
+	 */
+	if (input == NULL)
+	{
+		Assert(ctx->state == OAUTH_STATE_INIT);
+
+		*output = pstrdup("");
+		*outputlen = 0;
+		return PG_SASL_EXCHANGE_CONTINUE;
+	}
+
+	/*
+	 * Check that the input length agrees with the string length of the input.
+	 */
+	if (inputlen == 0)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("The message is empty."));
+	if (inputlen != strlen(input))
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message length does not match input length."));
+
+	switch (ctx->state)
+	{
+		case OAUTH_STATE_INIT:
+			/* Handle this case below. */
+			break;
+
+		case OAUTH_STATE_ERROR:
+
+			/*
+			 * Only one response is valid for the client during authentication
+			 * failure: a single kvsep.
+			 */
+			if (inputlen != 1 || *input != KVSEP)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Client did not send a kvsep response."));
+
+			/* The (failed) handshake is now complete. */
+			ctx->state = OAUTH_STATE_FINISHED;
+			return PG_SASL_EXCHANGE_FAILURE;
+
+		default:
+			elog(ERROR, "invalid OAUTHBEARER exchange state");
+			return PG_SASL_EXCHANGE_FAILURE;
+	}
+
+	/* Handle the client's initial message. */
+	p = input_copy = pstrdup(input);
+
+	/*
+	 * OAUTHBEARER does not currently define a channel binding (so there is no
+	 * OAUTHBEARER-PLUS, and we do not accept a 'p' specifier). We accept a
+	 * 'y' specifier purely for the remote chance that a future specification
+	 * could define one; then future clients can still interoperate with this
+	 * server implementation. 'n' is the expected case.
+	 */
+	cbind_flag = *p;
+	switch (cbind_flag)
+	{
+		case 'p':
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("The server does not support channel binding for OAuth, but the client message includes channel binding data."));
+			break;
+
+		case 'y':				/* fall through */
+		case 'n':
+			p++;
+			if (*p != ',')
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Comma expected, but found character \"%s\".",
+								  sanitize_char(*p)));
+			p++;
+			break;
+
+		default:
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Unexpected channel-binding flag \"%s\".",
+							  sanitize_char(cbind_flag)));
+	}
+
+	/*
+	 * Forbid optional authzid (authorization identity).  We don't support it.
+	 */
+	if (*p == 'a')
+		ereport(ERROR,
+				errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("client uses authorization identity, but it is not supported"));
+	if (*p != ',')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Unexpected attribute \"%s\" in client-first-message.",
+						  sanitize_char(*p)));
+	p++;
+
+	/* All remaining fields are separated by the RFC's kvsep (\x01). */
+	if (*p != KVSEP)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Key-value separator expected, but found character \"%s\".",
+						  sanitize_char(*p)));
+	p++;
+
+	auth = parse_kvpairs_for_auth(&p);
+	if (!auth)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message does not contain an auth value."));
+
+	/* We should be at the end of our message. */
+	if (*p)
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains additional data after the final terminator."));
+
+	if (!validate(ctx->port, auth))
+	{
+		generate_error_response(ctx, output, outputlen);
+
+		ctx->state = OAUTH_STATE_ERROR;
+		status = PG_SASL_EXCHANGE_CONTINUE;
+	}
+	else
+	{
+		ctx->state = OAUTH_STATE_FINISHED;
+		status = PG_SASL_EXCHANGE_SUCCESS;
+	}
+
+	/* Don't let extra copies of the bearer token hang around. */
+	explicit_bzero(input_copy, inputlen);
+
+	return status;
+}
+
+/*
+ * Convert an arbitrary byte to printable form.  For error messages.
+ *
+ * If it's a printable ASCII character, print it as a single character;
+ * otherwise, print it in hex.
+ *
+ * The returned pointer points to a static buffer.
+ */
+static char *
+sanitize_char(char c)
+{
+	static char buf[5];
+
+	if (c >= 0x21 && c <= 0x7E)
+		snprintf(buf, sizeof(buf), "'%c'", c);
+	else
+		snprintf(buf, sizeof(buf), "0x%02x", (unsigned char) c);
+	return buf;
+}
+
+/*
+ * Performs syntactic validation of a key and value from the initial client
+ * response. (Semantic validation of interesting values must be performed
+ * later.)
+ */
+static void
+validate_kvpair(const char *key, const char *val)
+{
+	/*-----
+	 * From Sec 3.1:
+	 *     key            = 1*(ALPHA)
+	 */
+	static const char *key_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+	size_t		span;
+
+	if (!key[0])
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an empty key name."));
+
+	span = strspn(key, key_allowed_set);
+	if (key[span] != '\0')
+		ereport(ERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAUTHBEARER message"),
+				errdetail("Message contains an invalid key name."));
+
+	/*-----
+	 * From Sec 3.1:
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *
+	 * The VCHAR (visible character) class is large; a loop is more
+	 * straightforward than strspn().
+	 */
+	for (; *val; ++val)
+	{
+		if (0x21 <= *val && *val <= 0x7E)
+			continue;			/* VCHAR */
+
+		switch (*val)
+		{
+			case ' ':
+			case '\t':
+			case '\r':
+			case '\n':
+				continue;		/* SP, HTAB, CR, LF */
+
+			default:
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains an invalid value."));
+		}
+	}
+}
+
+/*
+ * Consumes all kvpairs in an OAUTHBEARER exchange message. If the "auth" key is
+ * found, its value is returned.
+ */
+static char *
+parse_kvpairs_for_auth(char **input)
+{
+	char	   *pos = *input;
+	char	   *auth = NULL;
+
+	/*----
+	 * The relevant ABNF, from Sec. 3.1:
+	 *
+	 *     kvsep          = %x01
+	 *     key            = 1*(ALPHA)
+	 *     value          = *(VCHAR / SP / HTAB / CR / LF )
+	 *     kvpair         = key "=" value kvsep
+	 *   ;;gs2-header     = See RFC 5801
+	 *     client-resp    = (gs2-header kvsep *kvpair kvsep) / kvsep
+	 *
+	 * By the time we reach this code, the gs2-header and initial kvsep have
+	 * already been validated. We start at the beginning of the first kvpair.
+	 */
+
+	while (*pos)
+	{
+		char	   *end;
+		char	   *sep;
+		char	   *key;
+		char	   *value;
+
+		/*
+		 * Find the end of this kvpair. Note that input is null-terminated by
+		 * the SASL code, so the strchr() is bounded.
+		 */
+		end = strchr(pos, KVSEP);
+		if (!end)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains an unterminated key/value pair."));
+		*end = '\0';
+
+		if (pos == end)
+		{
+			/* Empty kvpair, signifying the end of the list. */
+			*input = pos + 1;
+			return auth;
+		}
+
+		/*
+		 * Find the end of the key name.
+		 */
+		sep = strchr(pos, '=');
+		if (!sep)
+			ereport(ERROR,
+					errcode(ERRCODE_PROTOCOL_VIOLATION),
+					errmsg("malformed OAUTHBEARER message"),
+					errdetail("Message contains a key without a value."));
+		*sep = '\0';
+
+		/* Both key and value are now safely terminated. */
+		key = pos;
+		value = sep + 1;
+		validate_kvpair(key, value);
+
+		if (strcmp(key, AUTH_KEY) == 0)
+		{
+			if (auth)
+				ereport(ERROR,
+						errcode(ERRCODE_PROTOCOL_VIOLATION),
+						errmsg("malformed OAUTHBEARER message"),
+						errdetail("Message contains multiple auth values."));
+
+			auth = value;
+		}
+		else
+		{
+			/*
+			 * The RFC also defines the host and port keys, but they are not
+			 * required for OAUTHBEARER and we do not use them. Also, per Sec.
+			 * 3.1, any key/value pairs we don't recognize must be ignored.
+			 */
+		}
+
+		/* Move to the next pair. */
+		pos = end + 1;
+	}
+
+	ereport(ERROR,
+			errcode(ERRCODE_PROTOCOL_VIOLATION),
+			errmsg("malformed OAUTHBEARER message"),
+			errdetail("Message did not contain a final terminator."));
+
+	pg_unreachable();
+	return NULL;
+}
+
+/*
+ * Builds the JSON response for failed authentication (RFC 7628, Sec. 3.2.2).
+ * This contains the required scopes for entry and a pointer to the OAuth/OpenID
+ * discovery document, which the client may use to conduct its OAuth flow.
+ */
+static void
+generate_error_response(struct oauth_ctx *ctx, char **output, int *outputlen)
+{
+	StringInfoData buf;
+	StringInfoData issuer;
+
+	/*
+	 * The admin needs to set an issuer and scope for OAuth to work. There's
+	 * not really a way to hide this from the user, either, because we can't
+	 * choose a "default" issuer, so be honest in the failure message. (In
+	 * practice such configurations are rejected during HBA parsing.)
+	 */
+	if (!ctx->issuer || !ctx->scope)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("OAuth is not properly configured for this user"),
+				errdetail_log("The issuer and scope parameters must be set in pg_hba.conf."));
+
+	/*
+	 * Build a default .well-known URI based on our issuer, unless the HBA has
+	 * already provided one.
+	 */
+	initStringInfo(&issuer);
+	appendStringInfoString(&issuer, ctx->issuer);
+	if (strstr(ctx->issuer, "/.well-known/") == NULL)
+		appendStringInfoString(&issuer, "/.well-known/openid-configuration");
+
+	initStringInfo(&buf);
+
+	/*
+	 * Escaping the string here is belt-and-suspenders defensive programming
+	 * since escapable characters aren't valid in either the issuer URI or the
+	 * scope list, but the HBA doesn't enforce that yet.
+	 */
+	appendStringInfoString(&buf, "{ \"status\": \"invalid_token\", ");
+
+	appendStringInfoString(&buf, "\"openid-configuration\": ");
+	escape_json(&buf, issuer.data);
+	pfree(issuer.data);
+
+	appendStringInfoString(&buf, ", \"scope\": ");
+	escape_json(&buf, ctx->scope);
+
+	appendStringInfoString(&buf, " }");
+
+	*output = buf.data;
+	*outputlen = buf.len;
+}
+
+/*-----
+ * Validates the provided Authorization header and returns the token from
+ * within it. NULL is returned on validation failure.
+ *
+ * Only Bearer tokens are accepted. The ABNF is defined in RFC 6750, Sec.
+ * 2.1:
+ *
+ *      b64token    = 1*( ALPHA / DIGIT /
+ *                        "-" / "." / "_" / "~" / "+" / "/" ) *"="
+ *      credentials = "Bearer" 1*SP b64token
+ *
+ * The "credentials" construction is what we receive in our auth value.
+ *
+ * Since that spec is subordinate to HTTP (i.e. the HTTP Authorization
+ * header format; RFC 9110 Sec. 11), the "Bearer" scheme string must be
+ * compared case-insensitively. (This is not mentioned in RFC 6750, but the
+ * OAUTHBEARER spec points it out: RFC 7628 Sec. 4.)
+ *
+ * Invalid formats are technically a protocol violation, but we shouldn't
+ * reflect any information about the sensitive Bearer token back to the
+ * client; log at COMMERROR instead.
+ */
+static const char *
+validate_token_format(const char *header)
+{
+	size_t		span;
+	const char *token;
+	static const char *const b64token_allowed_set =
+		"abcdefghijklmnopqrstuvwxyz"
+		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+		"0123456789-._~+/";
+
+	/* Missing auth headers should be handled by the caller. */
+	Assert(header);
+
+	if (header[0] == '\0')
+	{
+		/*
+		 * A completely empty auth header represents a query for
+		 * authentication parameters. The client expects it to fail; there's
+		 * no need to make any extra noise in the logs.
+		 *
+		 * TODO: should we find a way to return STATUS_EOF at the top level,
+		 * to suppress the authentication error entirely?
+		 */
+		return NULL;
+	}
+
+	if (pg_strncasecmp(header, BEARER_SCHEME, strlen(BEARER_SCHEME)))
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Client response indicated a non-Bearer authentication scheme."));
+		return NULL;
+	}
+
+	/* Pull the bearer token out of the auth value. */
+	token = header + strlen(BEARER_SCHEME);
+
+	/* Swallow any additional spaces. */
+	while (*token == ' ')
+		token++;
+
+	/* Tokens must not be empty. */
+	if (!*token)
+	{
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is empty."));
+		return NULL;
+	}
+
+	/*
+	 * Make sure the token contains only allowed characters. Tokens may end
+	 * with any number of '=' characters.
+	 */
+	span = strspn(token, b64token_allowed_set);
+	while (token[span] == '=')
+		span++;
+
+	if (token[span] != '\0')
+	{
+		/*
+		 * This error message could be more helpful by printing the
+		 * problematic character(s), but that'd be a bit like printing a piece
+		 * of someone's password into the logs.
+		 */
+		ereport(COMMERROR,
+				errcode(ERRCODE_PROTOCOL_VIOLATION),
+				errmsg("malformed OAuth bearer token"),
+				errdetail_log("Bearer token is not in the correct format."));
+		return NULL;
+	}
+
+	return token;
+}
+
+/*
+ * Checks that the "auth" kvpair in the client response contains a syntactically
+ * valid Bearer token, then passes it along to the loaded validator module for
+ * authorization. Returns true if validation succeeds.
+ */
+static bool
+validate(Port *port, const char *auth)
+{
+	int			map_status;
+	ValidatorModuleResult *ret;
+	const char *token;
+	bool		status;
+
+	/* Ensure that we have a correctly formatted token to validate */
+	if (!(token = validate_token_format(auth)))
+		return false;
+
+	/*
+	 * Ensure that we have a validation library loaded; this should always
+	 * be the case, and an error here indicates a bug.
+	 */
+	if (!ValidatorCallbacks || !ValidatorCallbacks->validate_cb)
+		ereport(FATAL,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("validation of OAuth token requested without a validator loaded"));
+
+	/* Call the validation function from the validator module */
+	ret = palloc0(sizeof(ValidatorModuleResult));
+	if (!ValidatorCallbacks->validate_cb(validator_module_state, token,
+										 port->user_name, ret))
+	{
+		ereport(WARNING,
+				errcode(ERRCODE_INTERNAL_ERROR),
+				errmsg("internal error in OAuth validator module"));
+		return false;
+	}
+
+	/*
+	 * Log any authentication results even if the token isn't authorized; it
+	 * might be useful for auditing or troubleshooting.
+	 */
+	if (ret->authn_id)
+		set_authn_id(port, ret->authn_id);
+
+	if (!ret->authorized)
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator failed to authorize the provided token."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	if (port->hba->oauth_skip_usermap)
+	{
+		/*
+		 * If the validator is our authorization authority, we're done.
+		 * Authentication may or may not have been performed depending on the
+		 * validator implementation; all that matters is that the validator
+		 * says the user can log in with the target role.
+		 */
+		status = true;
+		goto cleanup;
+	}
+
+	/* Make sure the validator authenticated the user. */
+	if (ret->authn_id == NULL || ret->authn_id[0] == '\0')
+	{
+		ereport(LOG,
+				errmsg("OAuth bearer authentication failed for user \"%s\"",
+					   port->user_name),
+				errdetail_log("Validator provided no identity."));
+
+		status = false;
+		goto cleanup;
+	}
+
+	/* Finally, check the user map. */
+	map_status = check_usermap(port->hba->usermap, port->user_name,
+							   MyClientConnectionInfo.authn_id, false);
+	status = (map_status == STATUS_OK);
+
+cleanup:
+
+	/*
+	 * Free the validation result from the validator module once we're done
+	 * with it.
+	 */
+	if (ret->authn_id != NULL)
+		pfree(ret->authn_id);
+	pfree(ret);
+
+	return status;
+}
+
+/*
+ * load_validator_library
+ *
+ * Load the configured validator library in order to perform token validation.
+ * There is no built-in fallback since validation is implementation specific. If
+ * no validator library is configured, or if it fails to load, then error out
+ * since token validation won't be possible.
+ */
+static void
+load_validator_library(const char *libname)
+{
+	OAuthValidatorModuleInit validator_init;
+	MemoryContextCallback *mcb;
+
+	/*
+	 * The presence and validity of libname have already been established by
+	 * check_oauth_validator, so we don't need to perform more than
+	 * Assert-level checking here.
+	 */
+	Assert(libname && *libname);
+
+	validator_init = (OAuthValidatorModuleInit)
+		load_external_function(libname, "_PG_oauth_validator_module_init",
+							   false, NULL);
+
+	/*
+	 * The validator init function is required since it will set the callbacks
+	 * for the validator library.
+	 */
+	if (validator_init == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must define the symbol %s",
+					   "OAuth validator", libname, "_PG_oauth_validator_module_init"));
+
+	ValidatorCallbacks = (*validator_init) ();
+	Assert(ValidatorCallbacks);
+
+	/*
+	 * Check the magic number, to protect against break-glass scenarios where
+	 * the ABI must change within a major version. load_external_function()
+	 * already checks for compatibility across major versions.
+	 */
+	if (ValidatorCallbacks->magic != PG_OAUTH_VALIDATOR_MAGIC)
+		ereport(ERROR,
+				errmsg("%s module \"%s\": magic number mismatch",
+					   "OAuth validator", libname),
+				errdetail("Server has magic number 0x%08X, module has 0x%08X.",
+						  PG_OAUTH_VALIDATOR_MAGIC, ValidatorCallbacks->magic));
+
+	/*
+	 * Make sure all required callbacks are present in the ValidatorCallbacks
+	 * structure. Right now only the validation callback is required.
+	 */
+	if (ValidatorCallbacks->validate_cb == NULL)
+		ereport(ERROR,
+				errmsg("%s module \"%s\" must provide a %s callback",
+					   "OAuth validator", libname, "validate_cb"));
+
+	/* Allocate memory for validator library private state data */
+	validator_module_state = (ValidatorModuleState *) palloc0(sizeof(ValidatorModuleState));
+	validator_module_state->sversion = PG_VERSION_NUM;
+
+	if (ValidatorCallbacks->startup_cb != NULL)
+		ValidatorCallbacks->startup_cb(validator_module_state);
+
+	/* Shut down the library before cleaning up its state. */
+	mcb = palloc0(sizeof(*mcb));
+	mcb->func = shutdown_validator_library;
+
+	MemoryContextRegisterResetCallback(CurrentMemoryContext, mcb);
+}
+
+/*
+ * Call the validator module's shutdown callback, if one is provided. This is
+ * invoked during memory context reset.
+ */
+static void
+shutdown_validator_library(void *arg)
+{
+	if (ValidatorCallbacks->shutdown_cb != NULL)
+		ValidatorCallbacks->shutdown_cb(validator_module_state);
+}
+
+/*
+ * Ensure an OAuth validator named in the HBA is permitted by the configuration.
+ *
+ * If the validator is currently unset and exactly one library is declared in
+ * oauth_validator_libraries, then that library will be used as the validator.
+ * Otherwise the name must be present in the list of oauth_validator_libraries.
+ */
+bool
+check_oauth_validator(HbaLine *hbaline, int elevel, char **err_msg)
+{
+	int			line_num = hbaline->linenumber;
+	const char *file_name = hbaline->sourcefile;
+	char	   *rawstring;
+	List	   *elemlist = NIL;
+
+	*err_msg = NULL;
+
+	if (oauth_validator_libraries_string[0] == '\0')
+	{
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("oauth_validator_libraries must be set for authentication method %s",
+					   "oauth"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = psprintf("oauth_validator_libraries must be set for authentication method %s",
+							"oauth");
+		return false;
+	}
+
+	/* SplitDirectoriesString needs a modifiable copy */
+	rawstring = pstrdup(oauth_validator_libraries_string);
+
+	if (!SplitDirectoriesString(rawstring, ',', &elemlist))
+	{
+		/* syntax error in list */
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("invalid list syntax in parameter \"%s\"",
+					   "oauth_validator_libraries"));
+		*err_msg = psprintf("invalid list syntax in parameter \"%s\"",
+							"oauth_validator_libraries");
+		goto done;
+	}
+
+	if (!hbaline->oauth_validator)
+	{
+		if (elemlist->length == 1)
+		{
+			hbaline->oauth_validator = pstrdup(linitial(elemlist));
+			goto done;
+		}
+
+		ereport(elevel,
+				errcode(ERRCODE_CONFIG_FILE_ERROR),
+				errmsg("authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options"),
+				errcontext("line %d of configuration file \"%s\"",
+						   line_num, file_name));
+		*err_msg = "authentication method \"oauth\" requires argument \"validator\" to be set when oauth_validator_libraries contains multiple options";
+		goto done;
+	}
+
+	foreach_ptr(char, allowed, elemlist)
+	{
+		if (strcmp(allowed, hbaline->oauth_validator) == 0)
+			goto done;
+	}
+
+	ereport(elevel,
+			errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+			errmsg("validator \"%s\" is not permitted by %s",
+				   hbaline->oauth_validator, "oauth_validator_libraries"),
+			errcontext("line %d of configuration file \"%s\"",
+					   line_num, file_name));
+	*err_msg = psprintf("validator \"%s\" is not permitted by %s",
+						hbaline->oauth_validator, "oauth_validator_libraries");
+
+done:
+	list_free_deep(elemlist);
+	pfree(rawstring);
+
+	return (*err_msg == NULL);
+}
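(Reviewer note: for anyone trying this out, a minimal validator module against the callbacks API declared in libpq/oauth.h might look roughly like the following. This is a sketch only: the module name, the hard-coded "trusted-token" comparison, and the decision to map the token bearer to the requested role are all made up for illustration; a real validator must verify the token against its issuer.)

```c
/* my_validator.c -- illustrative sketch, not part of this patch */
#include "postgres.h"

#include "fmgr.h"
#include "libpq/oauth.h"

PG_MODULE_MAGIC;

static bool
validate_token(const ValidatorModuleState *state, const char *token,
			   const char *role, ValidatorModuleResult *res)
{
	/* A real module would introspect/verify the token with the issuer. */
	res->authorized = false;
	res->authn_id = NULL;

	if (strcmp(token, "trusted-token") == 0)
	{
		res->authorized = true;
		res->authn_id = pstrdup(role);	/* must be palloc'd */
	}

	return true;				/* returning false signals an internal error */
}

static const OAuthValidatorCallbacks callbacks = {
	PG_OAUTH_VALIDATOR_MAGIC,
	.validate_cb = validate_token,	/* startup_cb/shutdown_cb are optional */
};

const OAuthValidatorCallbacks *
_PG_oauth_validator_module_init(void)
{
	return &callbacks;
}
```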
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index d6ef32cc823..0f65014e64f 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -29,6 +29,7 @@
 #include "libpq/auth.h"
 #include "libpq/crypt.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/pqformat.h"
 #include "libpq/sasl.h"
 #include "libpq/scram.h"
@@ -45,7 +46,6 @@
  */
 static void auth_failed(Port *port, int status, const char *logdetail);
 static char *recv_password_packet(Port *port);
-static void set_authn_id(Port *port, const char *id);
 
 
 /*----------------------------------------------------------------
@@ -289,6 +289,9 @@ auth_failed(Port *port, int status, const char *logdetail)
 		case uaRADIUS:
 			errstr = gettext_noop("RADIUS authentication failed for user \"%s\"");
 			break;
+		case uaOAuth:
+			errstr = gettext_noop("OAuth bearer authentication failed for user \"%s\"");
+			break;
 		default:
 			errstr = gettext_noop("authentication failed for user \"%s\": invalid authentication method");
 			break;
@@ -324,7 +327,7 @@ auth_failed(Port *port, int status, const char *logdetail)
  * lifetime of MyClientConnectionInfo, so it is safe to pass a string that is
  * managed by an external library.
  */
-static void
+void
 set_authn_id(Port *port, const char *id)
 {
 	Assert(id);
@@ -611,6 +614,9 @@ ClientAuthentication(Port *port)
 		case uaTrust:
 			status = STATUS_OK;
 			break;
+		case uaOAuth:
+			status = CheckSASLAuth(&pg_be_oauth_mech, port, NULL, NULL);
+			break;
 	}
 
 	if ((status == STATUS_OK && port->hba->clientcert == clientCertFull)
diff --git a/src/backend/libpq/hba.c b/src/backend/libpq/hba.c
index 510c9ffc6d7..332fad27835 100644
--- a/src/backend/libpq/hba.c
+++ b/src/backend/libpq/hba.c
@@ -32,6 +32,7 @@
 #include "libpq/hba.h"
 #include "libpq/ifaddr.h"
 #include "libpq/libpq-be.h"
+#include "libpq/oauth.h"
 #include "postmaster/postmaster.h"
 #include "regex/regex.h"
 #include "replication/walsender.h"
@@ -114,7 +115,8 @@ static const char *const UserAuthName[] =
 	"ldap",
 	"cert",
 	"radius",
-	"peer"
+	"peer",
+	"oauth",
 };
 
 /*
@@ -1747,6 +1749,8 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 #endif
 	else if (strcmp(token->string, "radius") == 0)
 		parsedline->auth_method = uaRADIUS;
+	else if (strcmp(token->string, "oauth") == 0)
+		parsedline->auth_method = uaOAuth;
 	else
 	{
 		ereport(elevel,
@@ -2039,6 +2043,36 @@ parse_hba_line(TokenizedAuthLine *tok_line, int elevel)
 		parsedline->clientcert = clientCertFull;
 	}
 
+	/*
+	 * Enforce proper configuration of OAuth authentication.
+	 */
+	if (parsedline->auth_method == uaOAuth)
+	{
+		MANDATORY_AUTH_ARG(parsedline->oauth_scope, "scope", "oauth");
+		MANDATORY_AUTH_ARG(parsedline->oauth_issuer, "issuer", "oauth");
+
+		/* Ensure a validator library is set and permitted by the config. */
+		if (!check_oauth_validator(parsedline, elevel, err_msg))
+			return NULL;
+
+		/*
+		 * Supplying a usermap combined with the option to skip usermapping is
+		 * nonsensical and indicates a configuration error.
+		 */
+		if (parsedline->oauth_skip_usermap && parsedline->usermap != NULL)
+		{
+			ereport(elevel,
+					errcode(ERRCODE_CONFIG_FILE_ERROR),
+			/* translator: strings are replaced with hba options */
+					errmsg("%s cannot be used in combination with %s",
+						   "map", "delegate_ident_mapping"),
+					errcontext("line %d of configuration file \"%s\"",
+							   line_num, file_name));
+			*err_msg = "map cannot be used in combination with delegate_ident_mapping";
+			return NULL;
+		}
+	}
+
 	return parsedline;
 }
 
@@ -2066,8 +2100,9 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 			hbaline->auth_method != uaPeer &&
 			hbaline->auth_method != uaGSS &&
 			hbaline->auth_method != uaSSPI &&
-			hbaline->auth_method != uaCert)
-			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, and cert"));
+			hbaline->auth_method != uaCert &&
+			hbaline->auth_method != uaOAuth)
+			INVALID_AUTH_OPTION("map", gettext_noop("ident, peer, gssapi, sspi, cert, and oauth"));
 		hbaline->usermap = pstrdup(val);
 	}
 	else if (strcmp(name, "clientcert") == 0)
@@ -2450,6 +2485,29 @@ parse_hba_auth_opt(char *name, char *val, HbaLine *hbaline,
 		hbaline->radiusidentifiers = parsed_identifiers;
 		hbaline->radiusidentifiers_s = pstrdup(val);
 	}
+	else if (strcmp(name, "issuer") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "issuer", "oauth");
+		hbaline->oauth_issuer = pstrdup(val);
+	}
+	else if (strcmp(name, "scope") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "scope", "oauth");
+		hbaline->oauth_scope = pstrdup(val);
+	}
+	else if (strcmp(name, "validator") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "validator", "oauth");
+		hbaline->oauth_validator = pstrdup(val);
+	}
+	else if (strcmp(name, "delegate_ident_mapping") == 0)
+	{
+		REQUIRE_AUTH_OPTION(uaOAuth, "delegate_ident_mapping", "oauth");
+		if (strcmp(val, "1") == 0)
+			hbaline->oauth_skip_usermap = true;
+		else
+			hbaline->oauth_skip_usermap = false;
+	}
 	else
 	{
 		ereport(elevel,
diff --git a/src/backend/libpq/meson.build b/src/backend/libpq/meson.build
index 0f0421037e4..31aa2faae1e 100644
--- a/src/backend/libpq/meson.build
+++ b/src/backend/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 backend_sources += files(
+  'auth-oauth.c',
   'auth-sasl.c',
   'auth-scram.c',
   'auth.c',
diff --git a/src/backend/libpq/pg_hba.conf.sample b/src/backend/libpq/pg_hba.conf.sample
index bad13497a34..b64c8dea97c 100644
--- a/src/backend/libpq/pg_hba.conf.sample
+++ b/src/backend/libpq/pg_hba.conf.sample
@@ -53,8 +53,8 @@
 # directly connected to.
 #
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",
-# "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".
-# Note that "password" sends passwords in clear text; "md5" or
+# "gss", "sspi", "ident", "peer", "pam", "oauth", "ldap", "radius" or
+# "cert".  Note that "password" sends passwords in clear text; "md5" or
 # "scram-sha-256" are preferred since they send encrypted passwords.
 #
 # OPTIONS are a set of options for the authentication in the format
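(Reviewer note: putting the HBA options and the new GUC together, a working configuration might look roughly like this. The issuer URL, scope, and module name are placeholders; since only one library is listed in oauth_validator_libraries, the "validator" HBA option may be omitted and check_oauth_validator falls back to it.)

```
# postgresql.conf
oauth_validator_libraries = 'my_validator'

# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   samehost  oauth  issuer="https://oauth.example.org" scope="openid"
```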
diff --git a/src/backend/utils/adt/hbafuncs.c b/src/backend/utils/adt/hbafuncs.c
index 03c38e8c451..b62c3d944cf 100644
--- a/src/backend/utils/adt/hbafuncs.c
+++ b/src/backend/utils/adt/hbafuncs.c
@@ -152,6 +152,25 @@ get_hba_options(HbaLine *hba)
 				CStringGetTextDatum(psprintf("radiusports=%s", hba->radiusports_s));
 	}
 
+	if (hba->auth_method == uaOAuth)
+	{
+		if (hba->oauth_issuer)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("issuer=%s", hba->oauth_issuer));
+
+		if (hba->oauth_scope)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("scope=%s", hba->oauth_scope));
+
+		if (hba->oauth_validator)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("validator=%s", hba->oauth_validator));
+
+		if (hba->oauth_skip_usermap)
+			options[noptions++] =
+				CStringGetTextDatum(psprintf("delegate_ident_mapping=true"));
+	}
+
 	/* If you add more options, consider increasing MAX_HBA_OPTIONS. */
 	Assert(noptions <= MAX_HBA_OPTIONS);
 
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 3cde94a1759..03a6dd49154 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -49,6 +49,7 @@
 #include "jit/jit.h"
 #include "libpq/auth.h"
 #include "libpq/libpq.h"
+#include "libpq/oauth.h"
 #include "libpq/scram.h"
 #include "nodes/queryjumble.h"
 #include "optimizer/cost.h"
@@ -4873,6 +4874,17 @@ struct config_string ConfigureNamesString[] =
 		check_restrict_nonsystem_relation_kind, assign_restrict_nonsystem_relation_kind, NULL
 	},
 
+	{
+		{"oauth_validator_libraries", PGC_SIGHUP, CONN_AUTH_AUTH,
+			gettext_noop("Lists libraries that may be called to validate OAuth v2 bearer tokens."),
+			NULL,
+			GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_SUPERUSER_ONLY
+		},
+		&oauth_validator_libraries_string,
+		"",
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, NULL, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 415f253096c..5362ff80519 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -121,6 +121,9 @@
 #ssl_passphrase_command = ''
 #ssl_passphrase_command_supports_reload = off
 
+# OAuth
+#oauth_validator_libraries = ''	# comma-separated list of trusted validator modules
+
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
diff --git a/src/include/common/oauth-common.h b/src/include/common/oauth-common.h
new file mode 100644
index 00000000000..5fb559d84b2
--- /dev/null
+++ b/src/include/common/oauth-common.h
@@ -0,0 +1,19 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-common.h
+ *		Declarations for helper functions used for OAuth/OIDC authentication
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/common/oauth-common.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef OAUTH_COMMON_H
+#define OAUTH_COMMON_H
+
+/* Name of SASL mechanism per IANA */
+#define OAUTHBEARER_NAME "OAUTHBEARER"
+
+#endif							/* OAUTH_COMMON_H */
diff --git a/src/include/libpq/auth.h b/src/include/libpq/auth.h
index 902c5f6de32..25b5742068f 100644
--- a/src/include/libpq/auth.h
+++ b/src/include/libpq/auth.h
@@ -39,6 +39,7 @@ extern PGDLLIMPORT bool pg_gss_accept_delegation;
 extern void ClientAuthentication(Port *port);
 extern void sendAuthRequest(Port *port, AuthRequest areq, const char *extradata,
 							int extralen);
+extern void set_authn_id(Port *port, const char *id);
 
 /* Hook for plugins to get control in ClientAuthentication() */
 typedef void (*ClientAuthentication_hook_type) (Port *, int);
diff --git a/src/include/libpq/hba.h b/src/include/libpq/hba.h
index b20d0051f7d..3657f182db3 100644
--- a/src/include/libpq/hba.h
+++ b/src/include/libpq/hba.h
@@ -39,7 +39,8 @@ typedef enum UserAuth
 	uaCert,
 	uaRADIUS,
 	uaPeer,
-#define USER_AUTH_LAST uaPeer	/* Must be last value of this enum */
+	uaOAuth,
+#define USER_AUTH_LAST uaOAuth	/* Must be last value of this enum */
 } UserAuth;
 
 /*
@@ -135,6 +136,10 @@ typedef struct HbaLine
 	char	   *radiusidentifiers_s;
 	List	   *radiusports;
 	char	   *radiusports_s;
+	char	   *oauth_issuer;
+	char	   *oauth_scope;
+	char	   *oauth_validator;
+	bool		oauth_skip_usermap;
 } HbaLine;
 
 typedef struct IdentLine
diff --git a/src/include/libpq/oauth.h b/src/include/libpq/oauth.h
new file mode 100644
index 00000000000..2c6892ffba4
--- /dev/null
+++ b/src/include/libpq/oauth.h
@@ -0,0 +1,101 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth.h
+ *	  Interface to libpq/auth-oauth.c
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/include/libpq/oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef PG_OAUTH_H
+#define PG_OAUTH_H
+
+#include "libpq/libpq-be.h"
+#include "libpq/sasl.h"
+
+extern PGDLLIMPORT char *oauth_validator_libraries_string;
+
+typedef struct ValidatorModuleState
+{
+	/* Holds the server's PG_VERSION_NUM. Reserved for future extensibility. */
+	int			sversion;
+
+	/*
+	 * Private data pointer for use by a validator module. This can be used to
+	 * store state for the module that will be passed to each of its
+	 * callbacks.
+	 */
+	void	   *private_data;
+} ValidatorModuleState;
+
+typedef struct ValidatorModuleResult
+{
+	/*
+	 * Should be set to true if the token carries sufficient permissions for
+	 * the bearer to connect.
+	 */
+	bool		authorized;
+
+	/*
+	 * If the token authenticates the user, this should be set to a palloc'd
+	 * string containing the SYSTEM_USER to use for HBA mapping. Consider
+	 * setting this even if result->authorized is false so that DBAs may use
+	 * the logs to match end users to token failures.
+	 *
+	 * This is required if the module is not configured for ident mapping
+	 * delegation. See the validator module documentation for details.
+	 */
+	char	   *authn_id;
+} ValidatorModuleResult;
+
+/*
+ * Validator module callbacks
+ *
+ * These callback functions should be defined by validator modules and returned
+ * via _PG_oauth_validator_module_init().  ValidatorValidateCB is the only
+ * required callback. For more information about the purpose of each callback,
+ * refer to the OAuth validator modules documentation.
+ */
+typedef void (*ValidatorStartupCB) (ValidatorModuleState *state);
+typedef void (*ValidatorShutdownCB) (ValidatorModuleState *state);
+typedef bool (*ValidatorValidateCB) (const ValidatorModuleState *state,
+									 const char *token, const char *role,
+									 ValidatorModuleResult *result);
+
+/*
+ * Identifies the compiled ABI version of the validator module. Since the server
+ * already enforces the PG_MODULE_MAGIC number for modules across major
+ * versions, this is reserved for emergency use within a stable release line.
+ * May it never need to change.
+ */
+#define PG_OAUTH_VALIDATOR_MAGIC 0x20250220
+
+typedef struct OAuthValidatorCallbacks
+{
+	uint32		magic;			/* must be set to PG_OAUTH_VALIDATOR_MAGIC */
+
+	ValidatorStartupCB startup_cb;
+	ValidatorShutdownCB shutdown_cb;
+	ValidatorValidateCB validate_cb;
+} OAuthValidatorCallbacks;
+
+/*
+ * Type of the shared library symbol _PG_oauth_validator_module_init, which is
+ * required for all validator modules.  This function will be invoked during
+ * module loading.
+ */
+typedef const OAuthValidatorCallbacks *(*OAuthValidatorModuleInit) (void);
+extern PGDLLEXPORT const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void);
+
+/* Implementation */
+extern const pg_be_sasl_mech pg_be_oauth_mech;
+
+/*
+ * Ensure a validator named in the HBA is permitted by the configuration.
+ */
+extern bool check_oauth_validator(HbaLine *hba, int elevel, char **err_msg);
+
+#endif							/* PG_OAUTH_H */
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 07b2f798abd..db6454090d2 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -229,6 +229,9 @@
 /* Define to 1 if you have the `crypto' library (-lcrypto). */
 #undef HAVE_LIBCRYPTO
 
+/* Define to 1 if you have the `curl' library (-lcurl). */
+#undef HAVE_LIBCURL
+
 /* Define to 1 if you have the `ldap' library (-lldap). */
 #undef HAVE_LIBLDAP
 
@@ -442,6 +445,9 @@
 /* Define to 1 if you have the <termios.h> header file. */
 #undef HAVE_TERMIOS_H
 
+/* Define to 1 if curl_global_init() is guaranteed to be thread-safe. */
+#undef HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
 /* Define to 1 if your compiler understands `typeof' or something similar. */
 #undef HAVE_TYPEOF
 
@@ -663,6 +669,9 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
+/* Define to 1 to build with libcurl support. (--with-libcurl) */
+#undef USE_LIBCURL
+
 /* Define to 1 to build with XML support. (--with-libxml) */
 #undef USE_LIBXML
 
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 701810a272a..90b0b65db6f 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,6 +31,7 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
+	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -63,6 +64,10 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
+ifeq ($(with_libcurl),yes)
+OBJS += fe-auth-oauth-curl.o
+endif
+
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -81,7 +86,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -110,6 +115,8 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
+# libcurl registers an exit handler in the memory debugging code when running
+# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -117,7 +124,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index 2ad2cbf5ca3..9b789cbec0b 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -206,3 +206,6 @@ PQsocketPoll              203
 PQsetChunkedRowsMode      204
 PQgetCurrentTimeUSec      205
 PQservice                 206
+PQsetAuthDataHook         207
+PQgetAuthDataHook         208
+PQdefaultAuthDataHook     209
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
new file mode 100644
index 00000000000..a80e2047bb7
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -0,0 +1,2883 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.c
+ *	   The libcurl implementation of OAuth/OIDC authentication, using the
+ *	   OAuth Device Authorization Grant (RFC 8628).
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <curl/curl.h>
+#include <math.h>
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#include <sys/timerfd.h>
+#endif
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+#include <unistd.h>
+
+#include "common/jsonapi.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "libpq-int.h"
+#include "mb/pg_wchar.h"
+
+/*
+ * It's generally prudent to set a maximum response size to buffer in memory,
+ * but it's less clear what size to choose. The biggest of our expected
+ * responses is the server metadata JSON, which will only continue to grow in
+ * size; the number of IANA-registered parameters in that document is up to 78
+ * as of February 2025.
+ *
+ * Even if every single parameter were to take up 2k on average (a previously
+ * common limit on the size of a URL), 256k gives us 128 parameter values before
+ * we give up. (That's almost certainly complete overkill in practice; 2-4k
+ * appears to be common among popular providers at the moment.)
+ */
+#define MAX_OAUTH_RESPONSE_SIZE (256 * 1024)
+
+/*
+ * Parsed JSON Representations
+ *
+ * As a general rule, we parse and cache only the fields we're currently using.
+ * When adding new fields, ensure the corresponding free_*() function is updated
+ * too.
+ */
+
+/*
+ * The OpenID Provider configuration (alternatively named "authorization server
+ * metadata") jointly described by OpenID Connect Discovery 1.0 and RFC 8414:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.2
+ */
+struct provider
+{
+	char	   *issuer;
+	char	   *token_endpoint;
+	char	   *device_authorization_endpoint;
+	struct curl_slist *grant_types_supported;
+};
+
+static void
+free_provider(struct provider *provider)
+{
+	free(provider->issuer);
+	free(provider->token_endpoint);
+	free(provider->device_authorization_endpoint);
+	curl_slist_free_all(provider->grant_types_supported);
+}
+
+/*
+ * The Device Authorization response, described by RFC 8628:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.2
+ */
+struct device_authz
+{
+	char	   *device_code;
+	char	   *user_code;
+	char	   *verification_uri;
+	char	   *verification_uri_complete;
+	char	   *expires_in_str;
+	char	   *interval_str;
+
+	/* Fields below are parsed from the corresponding string above. */
+	int			expires_in;
+	int			interval;
+};
+
+static void
+free_device_authz(struct device_authz *authz)
+{
+	free(authz->device_code);
+	free(authz->user_code);
+	free(authz->verification_uri);
+	free(authz->verification_uri_complete);
+	free(authz->expires_in_str);
+	free(authz->interval_str);
+}
+
+/*
+ * The Token Endpoint error response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+ *
+ * Note that this response type can also be returned from the Device
+ * Authorization Endpoint.
+ */
+struct token_error
+{
+	char	   *error;
+	char	   *error_description;
+};
+
+static void
+free_token_error(struct token_error *err)
+{
+	free(err->error);
+	free(err->error_description);
+}
+
+/*
+ * The Access Token response, as described by RFC 6749:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc6749#section-4.1.4
+ *
+ * During the Device Authorization flow, several temporary errors are expected
+ * as part of normal operation. To make it easy to handle these in the happy
+ * path, this contains an embedded token_error that is filled in if needed.
+ */
+struct token
+{
+	/* for successful responses */
+	char	   *access_token;
+	char	   *token_type;
+
+	/* for error responses */
+	struct token_error err;
+};
+
+static void
+free_token(struct token *tok)
+{
+	free(tok->access_token);
+	free(tok->token_type);
+	free_token_error(&tok->err);
+}
+
+/*
+ * Asynchronous State
+ */
+
+/* States for the overall async machine. */
+enum OAuthStep
+{
+	OAUTH_STEP_INIT = 0,
+	OAUTH_STEP_DISCOVERY,
+	OAUTH_STEP_DEVICE_AUTHORIZATION,
+	OAUTH_STEP_TOKEN_REQUEST,
+	OAUTH_STEP_WAIT_INTERVAL,
+};
+
+/*
+ * The async_ctx holds onto state that needs to persist across multiple calls
+ * to pg_fe_run_oauth_flow(). Almost everything interacts with this in some
+ * way.
+ */
+struct async_ctx
+{
+	enum OAuthStep step;		/* where are we in the flow? */
+
+	int			timerfd;		/* descriptor for signaling async timeouts */
+	pgsocket	mux;			/* the multiplexer socket containing all
+								 * descriptors tracked by libcurl, plus the
+								 * timerfd */
+	CURLM	   *curlm;			/* top-level multi handle for libcurl
+								 * operations */
+	CURL	   *curl;			/* the (single) easy handle for serial
+								 * requests */
+
+	struct curl_slist *headers; /* common headers for all requests */
+	PQExpBufferData work_data;	/* scratch buffer for general use (remember to
+								 * clear out prior contents first!) */
+
+	/*------
+	 * Since a single logical operation may stretch across multiple calls to
+	 * our entry point, errors have three parts:
+	 *
+	 * - errctx:	an optional static string, describing the global operation
+	 *				currently in progress. It'll be translated for you.
+	 *
+	 * - errbuf:	contains the actual error message. Generally speaking, use
+	 *				actx_error[_str] to manipulate this. This must be filled
+	 *				with something useful on an error.
+	 *
+	 * - curl_err:	an optional static error buffer used by libcurl to put
+	 *				detailed information about failures. Unfortunately
+	 *				untranslatable.
+	 *
+	 * These pieces will be combined into a single error message looking
+	 * something like the following, with errctx and/or curl_err omitted when
+	 * absent:
+	 *
+	 *     connection to server ... failed: errctx: errbuf (libcurl: curl_err)
+	 */
+	const char *errctx;			/* not freed; must point to static allocation */
+	PQExpBufferData errbuf;
+	char		curl_err[CURL_ERROR_SIZE];
+
+	/*
+	 * These documents need to survive over multiple calls, and are therefore
+	 * cached directly in the async_ctx.
+	 */
+	struct provider provider;
+	struct device_authz authz;
+
+	int			running;		/* is asynchronous work in progress? */
+	bool		user_prompted;	/* have we already sent the authz prompt? */
+	bool		used_basic_auth;	/* did we send a client secret? */
+	bool		debugging;		/* can we give unsafe developer assistance? */
+};
+
+/*
+ * Tears down the Curl handles and frees the async_ctx.
+ */
+static void
+free_async_ctx(PGconn *conn, struct async_ctx *actx)
+{
+	/*
+	 * In general, none of the error cases below should ever happen if we have
+	 * no bugs above. But if we do hit them, surfacing those errors somehow
+	 * might be the only way to have a chance to debug them.
+	 *
+	 * TODO: At some point it'd be nice to have a standard way to warn about
+	 * teardown failures. Appending to the connection's error message only
+	 * helps if the bug caused a connection failure; otherwise it'll be
+	 * buried...
+	 */
+
+	if (actx->curlm && actx->curl)
+	{
+		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl easy handle removal failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	if (actx->curl)
+	{
+		/*
+		 * curl_multi_cleanup() doesn't free any associated easy handles; we
+		 * need to do that separately. We only ever have one easy handle per
+		 * multi handle.
+		 */
+		curl_easy_cleanup(actx->curl);
+	}
+
+	if (actx->curlm)
+	{
+		CURLMcode	err = curl_multi_cleanup(actx->curlm);
+
+		if (err)
+			libpq_append_conn_error(conn,
+									"libcurl multi handle cleanup failed: %s",
+									curl_multi_strerror(err));
+	}
+
+	free_provider(&actx->provider);
+	free_device_authz(&actx->authz);
+
+	curl_slist_free_all(actx->headers);
+	termPQExpBuffer(&actx->work_data);
+	termPQExpBuffer(&actx->errbuf);
+
+	if (actx->mux != PGINVALID_SOCKET)
+		close(actx->mux);
+	if (actx->timerfd >= 0)
+		close(actx->timerfd);
+
+	free(actx);
+}
+
+/*
+ * Release resources used for the asynchronous exchange and disconnect the
+ * altsock.
+ *
+ * This is called either at the end of a successful authentication, or during
+ * pqDropConnection(), so we won't leak resources even if PQconnectPoll() never
+ * calls us back.
+ */
+void
+pg_fe_cleanup_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+
+	if (state->async_ctx)
+	{
+		free_async_ctx(conn, state->async_ctx);
+		state->async_ctx = NULL;
+	}
+
+	conn->altsock = PGINVALID_SOCKET;
+}
+
+/*
+ * Macros for manipulating actx->errbuf. actx_error() translates and formats a
+ * string for you; actx_error_str() appends a string directly without
+ * translation.
+ */
+
+#define actx_error(ACTX, FMT, ...) \
+	appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
+
+#define actx_error_str(ACTX, S) \
+	appendPQExpBufferStr(&(ACTX)->errbuf, S)
+
+/*
+ * Macros for getting and setting state for the connection's two libcurl
+ * handles, so you don't have to write out the error handling every time.
+ */
+
+#define CHECK_MSETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLMcode	_setopterr = curl_multi_setopt(_actx->curlm, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_multi_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_SETOPT(ACTX, OPT, VAL, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_setopterr = curl_easy_setopt(_actx->curl, OPT, VAL); \
+		if (_setopterr) { \
+			actx_error(_actx, "failed to set %s on OAuth connection: %s",\
+					   #OPT, curl_easy_strerror(_setopterr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+#define CHECK_GETINFO(ACTX, INFO, OUT, FAILACTION) \
+	do { \
+		struct async_ctx *_actx = (ACTX); \
+		CURLcode	_getinfoerr = curl_easy_getinfo(_actx->curl, INFO, OUT); \
+		if (_getinfoerr) { \
+			actx_error(_actx, "failed to get %s from OAuth response: %s",\
+					   #INFO, curl_easy_strerror(_getinfoerr)); \
+			FAILACTION; \
+		} \
+	} while (0)
+
+/*
+ * General JSON Parsing for OAuth Responses
+ */
+
+/*
+ * Represents a single name/value pair in a JSON object. This is the primary
+ * interface to parse_oauth_json().
+ *
+ * All fields are stored internally as strings or lists of strings, so clients
+ * have to explicitly parse other scalar types (though they will have gone
+ * through basic lexical validation). Storing nested objects is not currently
+ * supported, nor is parsing arrays of anything other than strings.
+ */
+struct json_field
+{
+	const char *name;			/* name (key) of the member */
+
+	JsonTokenType type;			/* currently supports JSON_TOKEN_STRING,
+								 * JSON_TOKEN_NUMBER, and
+								 * JSON_TOKEN_ARRAY_START */
+	union
+	{
+		char	  **scalar;		/* for all scalar types */
+		struct curl_slist **array;	/* for type == JSON_TOKEN_ARRAY_START */
+	}			target;
+
+	bool		required;		/* REQUIRED field, or just OPTIONAL? */
+};
+
+/* Documentation macros for json_field.required. */
+#define REQUIRED true
+#define OPTIONAL false
+
+/* Parse state for parse_oauth_json(). */
+struct oauth_parse
+{
+	PQExpBuffer errbuf;			/* detail message for JSON_SEM_ACTION_FAILED */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const struct json_field *fields;	/* field definition array */
+	const struct json_field *active;	/* points inside the fields array */
+};
+
+#define oauth_parse_set_error(ctx, fmt, ...) \
+	appendPQExpBuffer((ctx)->errbuf, libpq_gettext(fmt), ##__VA_ARGS__)
+
+static void
+report_type_mismatch(struct oauth_parse *ctx)
+{
+	char	   *msgfmt;
+
+	Assert(ctx->active);
+
+	/*
+	 * At the moment, the only fields we're interested in are strings,
+	 * numbers, and arrays of strings.
+	 */
+	switch (ctx->active->type)
+	{
+		case JSON_TOKEN_STRING:
+			msgfmt = "field \"%s\" must be a string";
+			break;
+
+		case JSON_TOKEN_NUMBER:
+			msgfmt = "field \"%s\" must be a number";
+			break;
+
+		case JSON_TOKEN_ARRAY_START:
+			msgfmt = "field \"%s\" must be an array of strings";
+			break;
+
+		default:
+			Assert(false);
+			msgfmt = "field \"%s\" has unexpected type";
+	}
+
+	oauth_parse_set_error(ctx, msgfmt, ctx->active->name);
+}
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Currently, none of the fields we're interested in can be or contain
+		 * objects, so we can reject this case outright.
+		 */
+		report_type_mismatch(ctx);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct oauth_parse *ctx = state;
+
+	/* We care only about the top-level fields. */
+	if (ctx->nested == 1)
+	{
+		const struct json_field *field = ctx->fields;
+
+		/*
+		 * We should never start parsing a new field while a previous one is
+		 * still active.
+		 */
+		if (ctx->active)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: started field '%s' before field '%s' was finished",
+								  name, ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		while (field->name)
+		{
+			if (strcmp(name, field->name) == 0)
+			{
+				ctx->active = field;
+				break;
+			}
+
+			++field;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (ctx->active)
+		{
+			field = ctx->active;
+
+			if ((field->type == JSON_TOKEN_ARRAY_START && *field->target.array)
+				|| (field->type != JSON_TOKEN_ARRAY_START && *field->target.scalar))
+			{
+				oauth_parse_set_error(ctx, "field \"%s\" is duplicated",
+									  field->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	--ctx->nested;
+
+	/*
+	 * All fields should be fully processed by the end of the top-level
+	 * object.
+	 */
+	if (!ctx->nested && ctx->active)
+	{
+		Assert(false);
+		oauth_parse_set_error(ctx,
+							  "internal error: field '%s' still active at end of object",
+							  ctx->active->name);
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		if (ctx->active->type != JSON_TOKEN_ARRAY_START
+		/* The arrays we care about must not have arrays as values. */
+			|| ctx->nested > 1)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+	}
+
+	++ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_end(void *state)
+{
+	struct oauth_parse *ctx = state;
+
+	if (ctx->active)
+	{
+		/*
+		 * Clear the target (which should be an array inside the top-level
+		 * object). For this to be safe, no target arrays can contain other
+		 * arrays; we check for that in the array_start callback.
+		 */
+		if (ctx->nested != 2 || ctx->active->type != JSON_TOKEN_ARRAY_START)
+		{
+			Assert(false);
+			oauth_parse_set_error(ctx,
+								  "internal error: found unexpected array end while parsing field '%s'",
+								  ctx->active->name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		ctx->active = NULL;
+	}
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct oauth_parse *ctx = state;
+
+	if (!ctx->nested)
+	{
+		oauth_parse_set_error(ctx, "top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->active)
+	{
+		const struct json_field *field = ctx->active;
+		JsonTokenType expected = field->type;
+
+		/* Make sure this matches what the active field expects. */
+		if (expected == JSON_TOKEN_ARRAY_START)
+		{
+			/* Are we actually inside an array? */
+			if (ctx->nested < 2)
+			{
+				report_type_mismatch(ctx);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Currently, arrays can only contain strings. */
+			expected = JSON_TOKEN_STRING;
+		}
+
+		if (type != expected)
+		{
+			report_type_mismatch(ctx);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		if (field->type != JSON_TOKEN_ARRAY_START)
+		{
+			/* Ensure that we're parsing the top-level keys... */
+			if (ctx->nested != 1)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar target found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* ...and that a result has not already been set. */
+			if (*field->target.scalar)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: scalar field '%s' would be assigned twice",
+									  ctx->active->name);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			*field->target.scalar = strdup(token);
+			if (!*field->target.scalar)
+				return JSON_OUT_OF_MEMORY;
+
+			ctx->active = NULL;
+
+			return JSON_SUCCESS;
+		}
+		else
+		{
+			struct curl_slist *temp;
+
+			/* The target array should be inside the top-level object. */
+			if (ctx->nested != 2)
+			{
+				Assert(false);
+				oauth_parse_set_error(ctx,
+									  "internal error: array member found at nesting level %d",
+									  ctx->nested);
+				return JSON_SEM_ACTION_FAILED;
+			}
+
+			/* Note that curl_slist_append() makes a copy of the token. */
+			temp = curl_slist_append(*field->target.array, token);
+			if (!temp)
+				return JSON_OUT_OF_MEMORY;
+
+			*field->target.array = temp;
+		}
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+/*
+ * Checks the Content-Type header against the expected type. Parameters are
+ * allowed but ignored.
+ */
+static bool
+check_content_type(struct async_ctx *actx, const char *type)
+{
+	const size_t type_len = strlen(type);
+	char	   *content_type;
+
+	CHECK_GETINFO(actx, CURLINFO_CONTENT_TYPE, &content_type, return false);
+
+	if (!content_type)
+	{
+		actx_error(actx, "no content type was provided");
+		return false;
+	}
+
+	/*
+	 * We need to perform a length-limited comparison rather than comparing
+	 * the whole string.
+	 */
+	if (pg_strncasecmp(content_type, type, type_len) != 0)
+		goto fail;
+
+	/* On an exact match, we're done. */
+	Assert(strlen(content_type) >= type_len);
+	if (content_type[type_len] == '\0')
+		return true;
+
+	/*
+	 * Only a semicolon (optionally preceded by HTTP optional whitespace) is
+	 * acceptable after the prefix we checked. This marks the start of media
+	 * type parameters, which we currently have no use for.
+	 */
+	for (size_t i = type_len; content_type[i]; ++i)
+	{
+		switch (content_type[i])
+		{
+			case ';':
+				return true;	/* success! */
+
+			case ' ':
+			case '\t':
+				/* HTTP optional whitespace allows only spaces and htabs. */
+				break;
+
+			default:
+				goto fail;
+		}
+	}
+
+fail:
+	actx_error(actx, "unexpected content type: \"%s\"", content_type);
+	return false;
+}
+
+/*
+ * A helper function for general JSON parsing. fields is the array of field
+ * definitions with their backing pointers. The response will be parsed from
+ * actx->curl and actx->work_data (as set up by start_request()), and any
+ * parsing errors will be placed into actx->errbuf.
+ */
+static bool
+parse_oauth_json(struct async_ctx *actx, const struct json_field *fields)
+{
+	PQExpBuffer resp = &actx->work_data;
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct oauth_parse ctx = {0};
+	bool		success = false;
+
+	if (!check_content_type(actx, "application/json"))
+		return false;
+
+	if (strlen(resp->data) != resp->len)
+	{
+		actx_error(actx, "response contains embedded NULLs");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, resp->data, resp->len) != resp->len)
+	{
+		actx_error(actx, "response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, resp->data, resp->len, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	ctx.errbuf = &actx->errbuf;
+	ctx.fields = fields;
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.object_end = oauth_json_object_end;
+	sem.array_start = oauth_json_array_start;
+	sem.array_end = oauth_json_array_end;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err != JSON_SUCCESS)
+	{
+		/*
+		 * For JSON_SEM_ACTION_FAILED, we've already written the error
+		 * message. Other errors come directly from pg_parse_json(), already
+		 * translated.
+		 */
+		if (err != JSON_SEM_ACTION_FAILED)
+			actx_error_str(actx, json_errdetail(err, &lex));
+
+		goto cleanup;
+	}
+
+	/* Check all required fields. */
+	while (fields->name)
+	{
+		if (fields->required
+			&& !*fields->target.scalar
+			&& !*fields->target.array)
+		{
+			actx_error(actx, "field \"%s\" is missing", fields->name);
+			goto cleanup;
+		}
+
+		fields++;
+	}
+
+	success = true;
+
+cleanup:
+	freeJsonLexContext(&lex);
+	return success;
+}
+
+/*
+ * JSON Parser Definitions
+ */
+
+/*
+ * Parses authorization server metadata. Fields are defined by OIDC Discovery
+ * 1.0 and RFC 8414.
+ */
+static bool
+parse_provider(struct async_ctx *actx, struct provider *provider)
+{
+	struct json_field fields[] = {
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+
+		/*----
+		 * The following fields are technically REQUIRED, but we don't use
+		 * them anywhere yet:
+		 *
+		 * - jwks_uri
+		 * - response_types_supported
+		 * - subject_types_supported
+		 * - id_token_signing_alg_values_supported
+		 */
+
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * Parses a valid JSON number into a double. The input must have come from
+ * pg_parse_json(), so that we know the lexer has validated it; there's no
+ * in-band signal for invalid formats.
+ */
+static double
+parse_json_number(const char *s)
+{
+	double		parsed;
+	int			cnt;
+
+	/*
+	 * The JSON lexer has already validated the number, which is stricter than
+	 * the %f format, so we should be good to use sscanf().
+	 */
+	cnt = sscanf(s, "%lf", &parsed);
+
+	if (cnt != 1)
+	{
+		/*
+		 * Either the lexer screwed up or our assumption above isn't true, and
+		 * either way a developer needs to take a look.
+		 */
+		Assert(false);
+		return 0;
+	}
+
+	return parsed;
+}
+
+/*
+ * Parses the "interval" JSON number, corresponding to the number of seconds to
+ * wait between token endpoint requests.
+ *
+ * RFC 8628 is pretty silent on sanity checks for the interval. As a matter of
+ * practicality, round any fractional intervals up to the next second, and clamp
+ * the result at a minimum of one. (Zero-second intervals would result in an
+ * expensive network polling loop.) Tests may remove the lower bound with
+ * PGOAUTHDEBUG, for improved performance.
+ */
+static int
+parse_interval(struct async_ctx *actx, const char *interval_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(interval_str);
+	parsed = ceil(parsed);
+
+	if (parsed < 1)
+		return actx->debugging ? 0 : 1;
+
+	else if (parsed >= INT_MAX)
+		return INT_MAX;
+
+	return parsed;
+}
+
+/*
+ * Parses the "expires_in" JSON number, corresponding to the number of seconds
+ * remaining in the lifetime of the device code request.
+ *
+ * Similar to parse_interval, but we have even fewer requirements for reasonable
+ * values since we don't use the expiration time directly (it's passed to the
+ * PQAUTHDATA_PROMPT_OAUTH_DEVICE hook, in case the application wants to do
+ * something with it). We simply round down and clamp to int range.
+ */
+static int
+parse_expires_in(struct async_ctx *actx, const char *expires_in_str)
+{
+	double		parsed;
+
+	parsed = parse_json_number(expires_in_str);
+	parsed = floor(parsed);
+
+	if (parsed >= INT_MAX)
+		return INT_MAX;
+	else if (parsed <= INT_MIN)
+		return INT_MIN;
+
+	return parsed;
+}
+
+/*
+ * Parses the Device Authorization Response (RFC 8628, Sec. 3.2).
+ */
+static bool
+parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
+{
+	struct json_field fields[] = {
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+
+		/*
+		 * Some services (Google, Azure) spell verification_uri differently.
+		 * We accept either.
+		 */
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+
+		/*
+		 * There is no evidence of verification_uri_complete being spelled
+		 * with "url" instead with any service provider, so only support
+		 * "uri".
+		 */
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+
+		{0},
+	};
+
+	if (!parse_oauth_json(actx, fields))
+		return false;
+
+	/*
+	 * Parse our numeric fields. Lexing has already completed by this time, so
+	 * we at least know they're valid JSON numbers.
+	 */
+	if (authz->interval_str)
+		authz->interval = parse_interval(actx, authz->interval_str);
+	else
+	{
+		/*
+		 * RFC 8628 specifies 5 seconds as the default value if the server
+		 * doesn't provide an interval.
+		 */
+		authz->interval = 5;
+	}
+
+	Assert(authz->expires_in_str);	/* ensured by parse_oauth_json() */
+	authz->expires_in = parse_expires_in(actx, authz->expires_in_str);
+
+	return true;
+}
+
+/*
+ * Parses the device access token error response (RFC 8628, Sec. 3.5, which
+ * uses the error response defined in RFC 6749, Sec. 5.2).
+ */
+static bool
+parse_token_error(struct async_ctx *actx, struct token_error *err)
+{
+	bool		result;
+	struct json_field fields[] = {
+		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+
+		{0},
+	};
+
+	result = parse_oauth_json(actx, fields);
+
+	/*
+	 * Since token errors are parsed during other active error paths, only
+	 * override the errctx if parsing explicitly fails.
+	 */
+	if (!result)
+		actx->errctx = "failed to parse token error response";
+
+	return result;
+}
+
+/*
+ * Constructs a message from the token error response and puts it into
+ * actx->errbuf.
+ */
+static void
+record_token_error(struct async_ctx *actx, const struct token_error *err)
+{
+	if (err->error_description)
+		appendPQExpBuffer(&actx->errbuf, "%s ", err->error_description);
+	else
+	{
+		/*
+		 * Try to get some more helpful detail into the error string. A 401
+		 * status in particular implies that the oauth_client_secret is
+		 * missing or wrong.
+		 */
+		long		response_code;
+
+		CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, response_code = 0);
+
+		if (response_code == 401)
+		{
+			actx_error(actx, actx->used_basic_auth
+					   ? "provider rejected the oauth_client_secret"
+					   : "provider requires client authentication, and no oauth_client_secret is set");
+			actx_error_str(actx, " ");
+		}
+	}
+
+	appendPQExpBuffer(&actx->errbuf, "(%s)", err->error);
+}
+
+/*
+ * Parses the device access token response (RFC 8628, Sec. 3.5, which uses the
+ * success response defined in RFC 6749, Sec. 5.1).
+ */
+static bool
+parse_access_token(struct async_ctx *actx, struct token *tok)
+{
+	struct json_field fields[] = {
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+
+		/*---
+		 * We currently have no use for the following OPTIONAL fields:
+		 *
+		 * - expires_in: This will be important for maintaining a token cache,
+		 *               but we do not yet implement one.
+		 *
+		 * - refresh_token: Ditto.
+		 *
+		 * - scope: This is only sent when the authorization server sees fit to
+		 *          change our scope request. It's not clear what we should do
+		 *          about this; either it's been done as a matter of policy, or
+		 *          the user has explicitly denied part of the authorization,
+		 *          and either way the server-side validator is in a better
+		 *          place to complain if the change isn't acceptable.
+		 */
+
+		{0},
+	};
+
+	return parse_oauth_json(actx, fields);
+}
+
+/*
+ * libcurl Multi Setup/Callbacks
+ */
+
+/*
+ * Sets up the actx->mux, which is the altsock that PQconnectPoll clients will
+ * select() on instead of the Postgres socket during OAuth negotiation.
+ *
+ * This is just an epoll set or kqueue abstracting multiple other descriptors.
+ * For epoll, the timerfd is always part of the set; it's just disabled when
+ * we're not using it. For kqueue, the "timerfd" is actually a second kqueue
+ * instance which is only added to the set when needed.
+ */
+static bool
+setup_multiplexer(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct epoll_event ev = {.events = EPOLLIN};
+
+	actx->mux = epoll_create1(EPOLL_CLOEXEC);
+	if (actx->mux < 0)
+	{
+		actx_error(actx, "failed to create epoll set: %m");
+		return false;
+	}
+
+	actx->timerfd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timerfd: %m");
+		return false;
+	}
+
+	if (epoll_ctl(actx->mux, EPOLL_CTL_ADD, actx->timerfd, &ev) < 0)
+	{
+		actx_error(actx, "failed to add timerfd to epoll set: %m");
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	actx->mux = kqueue();
+	if (actx->mux < 0)
+	{
+		/*- translator: the term "kqueue" (kernel queue) should not be translated */
+		actx_error(actx, "failed to create kqueue: %m");
+		return false;
+	}
+
+	/*
+	 * Originally, we set EVFILT_TIMER directly on the top-level multiplexer.
+	 * This makes it difficult to implement timer_expired(), though, so now we
+	 * set EVFILT_TIMER on a separate actx->timerfd, which is chained to
+	 * actx->mux while the timer is active.
+	 */
+	actx->timerfd = kqueue();
+	if (actx->timerfd < 0)
+	{
+		actx_error(actx, "failed to create timer kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
+	return false;
+}
+
+/*
+ * Adds and removes sockets from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
+				void *socketp)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct async_ctx *actx = ctx;
+	struct epoll_event ev = {0};
+	int			res;
+	int			op = EPOLL_CTL_ADD;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			ev.events = EPOLLIN;
+			break;
+
+		case CURL_POLL_OUT:
+			ev.events = EPOLLOUT;
+			break;
+
+		case CURL_POLL_INOUT:
+			ev.events = EPOLLIN | EPOLLOUT;
+			break;
+
+		case CURL_POLL_REMOVE:
+			op = EPOLL_CTL_DEL;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = epoll_ctl(actx->mux, op, socket, &ev);
+	if (res < 0 && errno == EEXIST)
+	{
+		/* We already had this socket in the pollset. */
+		op = EPOLL_CTL_MOD;
+		res = epoll_ctl(actx->mux, op, socket, &ev);
+	}
+
+	if (res < 0)
+	{
+		switch (op)
+		{
+			case EPOLL_CTL_ADD:
+				actx_error(actx, "could not add to epoll set: %m");
+				break;
+
+			case EPOLL_CTL_DEL:
+				actx_error(actx, "could not delete from epoll set: %m");
+				break;
+
+			default:
+				actx_error(actx, "could not update epoll set: %m");
+		}
+
+		return -1;
+	}
+
+	return 0;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct async_ctx *actx = ctx;
+	struct kevent ev[2] = {{0}};
+	struct kevent ev_out[2];
+	struct timespec timeout = {0};
+	int			nev = 0;
+	int			res;
+
+	switch (what)
+	{
+		case CURL_POLL_IN:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_OUT:
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_INOUT:
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_ADD | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		case CURL_POLL_REMOVE:
+
+			/*
+			 * We don't know which of these is currently registered, perhaps
+			 * both, so we try to remove both.  This means we need to tolerate
+			 * ENOENT below.
+			 */
+			EV_SET(&ev[nev], socket, EVFILT_READ, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			EV_SET(&ev[nev], socket, EVFILT_WRITE, EV_DELETE | EV_RECEIPT, 0, 0, 0);
+			nev++;
+			break;
+
+		default:
+			actx_error(actx, "unknown libcurl socket operation: %d", what);
+			return -1;
+	}
+
+	res = kevent(actx->mux, ev, nev, ev_out, lengthof(ev_out), &timeout);
+	if (res < 0)
+	{
+		actx_error(actx, "could not modify kqueue: %m");
+		return -1;
+	}
+
+	/*
+	 * We can't use the simple errno version of kevent, because we need to
+	 * skip over ENOENT while still allowing a second change to be processed.
+	 * So we need a longer-form error checking loop.
+	 */
+	for (int i = 0; i < res; ++i)
+	{
+		/*
+		 * EV_RECEIPT should guarantee one EV_ERROR result for every change,
+		 * whether successful or not. Failed entries contain a non-zero errno
+		 * in the data field.
+		 */
+		Assert(ev_out[i].flags & EV_ERROR);
+
+		errno = ev_out[i].data;
+		if (errno && errno != ENOENT)
+		{
+			switch (what)
+			{
+				case CURL_POLL_REMOVE:
+					actx_error(actx, "could not delete from kqueue: %m");
+					break;
+				default:
+					actx_error(actx, "could not add to kqueue: %m");
+			}
+			return -1;
+		}
+	}
+
+	return 0;
+#endif
+
+	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
+	return -1;
+}
+
+/*
+ * Enables or disables the timer in the multiplexer set. The timeout value is
+ * in milliseconds (negative values disable the timer).
+ *
+ * For epoll, rather than continually adding and removing the timer, we keep it
+ * in the set at all times and just disarm it when it's not needed. For kqueue,
+ * the timer is removed completely when disabled to prevent stale timeouts from
+ * remaining in the queue.
+ */
+static bool
+set_timer(struct async_ctx *actx, long timeout)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timeout < 0)
+	{
+		/* the zero itimerspec will disarm the timer below */
+	}
+	else if (timeout == 0)
+	{
+		/*
+		 * A zero timeout means libcurl wants us to call back immediately.
+		 * That's not technically an option for timerfd, but we can make the
+		 * timeout ridiculously short.
+		 */
+		spec.it_value.tv_nsec = 1;
+	}
+	else
+	{
+		spec.it_value.tv_sec = timeout / 1000;
+		spec.it_value.tv_nsec = (timeout % 1000) * 1000000;
+	}
+
+	if (timerfd_settime(actx->timerfd, 0 /* no flags */ , &spec, NULL) < 0)
+	{
+		actx_error(actx, "setting timerfd to %ld: %m", timeout);
+		return false;
+	}
+
+	return true;
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	struct kevent ev;
+
+	/* Enable/disable the timer itself. */
+	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
+		   0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	/*
+	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
+	 * allowed the timer to remain registered here after being disabled, the
+	 * mux queue would retain any previous stale timeout notifications and
+	 * remain readable.)
+	 */
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
+		   0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
+	{
+		actx_error(actx, "could not update timer on kqueue: %m");
+		return false;
+	}
+
+	return true;
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return false;
+}
+
+/*
+ * Returns 1 if the timeout in the multiplexer set has expired since the last
+ * call to set_timer(), 0 if the timer is still running, or -1 (with an
+ * actx_error() report) if the timer cannot be queried.
+ */
+static int
+timer_expired(struct async_ctx *actx)
+{
+#ifdef HAVE_SYS_EPOLL_H
+	struct itimerspec spec = {0};
+
+	if (timerfd_gettime(actx->timerfd, &spec) < 0)
+	{
+		actx_error(actx, "getting timerfd value: %m");
+		return -1;
+	}
+
+	/*
+	 * This implementation assumes we're using single-shot timers. If you
+	 * change to using intervals, you'll need to reimplement this function
+	 * too, possibly with the read() or select() interfaces for timerfd.
+	 */
+	Assert(spec.it_interval.tv_sec == 0
+		   && spec.it_interval.tv_nsec == 0);
+
+	/* If the remaining time to expiration is zero, we're done. */
+	return (spec.it_value.tv_sec == 0
+			&& spec.it_value.tv_nsec == 0);
+#endif
+#ifdef HAVE_SYS_EVENT_H
+	int			res;
+
+	/* Is the timer queue ready? */
+	res = PQsocketPoll(actx->timerfd, 1 /* forRead */ , 0, 0);
+	if (res < 0)
+	{
+		actx_error(actx, "checking kqueue for timeout: %m");
+		return -1;
+	}
+
+	return (res > 0);
+#endif
+
+	actx_error(actx, "libpq does not support timers on this platform");
+	return -1;
+}
+
+/*
+ * Adds or removes timeouts from the multiplexer set, as directed by the
+ * libcurl multi handle.
+ */
+static int
+register_timer(CURLM *curlm, long timeout, void *ctx)
+{
+	struct async_ctx *actx = ctx;
+
+	/*
+	 * There might be an optimization opportunity here: if timeout == 0, we
+	 * could signal drive_request to immediately call
+	 * curl_multi_socket_action, rather than returning all the way up the
+	 * stack only to come right back. But it's not clear that the additional
+	 * code complexity is worth it.
+	 */
+	if (!set_timer(actx, timeout))
+		return -1;				/* actx_error already called */
+
+	return 0;
+}
+
+/*
+ * Prints Curl request debugging information to stderr.
+ *
+ * Note that this will expose a number of critical secrets, so users have to opt
+ * into this (see PGOAUTHDEBUG).
+ */
+static int
+debug_callback(CURL *handle, curl_infotype type, char *data, size_t size,
+			   void *clientp)
+{
+	const char *prefix;
+	bool		printed_prefix = false;
+	PQExpBufferData buf;
+
+	/* Prefixes are modeled on the default libcurl debug output. */
+	switch (type)
+	{
+		case CURLINFO_TEXT:
+			prefix = "*";
+			break;
+
+		case CURLINFO_HEADER_IN:	/* fall through */
+		case CURLINFO_DATA_IN:
+			prefix = "<";
+			break;
+
+		case CURLINFO_HEADER_OUT:	/* fall through */
+		case CURLINFO_DATA_OUT:
+			prefix = ">";
+			break;
+
+		default:
+			return 0;
+	}
+
+	initPQExpBuffer(&buf);
+
+	/*
+	 * Split the output into lines for readability; sometimes multiple headers
+	 * are included in a single call. We also don't allow unprintable ASCII
+	 * through without a basic <XX> escape.
+	 */
+	for (int i = 0; i < size; i++)
+	{
+		char		c = data[i];
+
+		if (!printed_prefix)
+		{
+			appendPQExpBuffer(&buf, "[libcurl] %s ", prefix);
+			printed_prefix = true;
+		}
+
+		if (c >= 0x20 && c <= 0x7E)
+			appendPQExpBufferChar(&buf, c);
+		else if ((type == CURLINFO_HEADER_IN
+				  || type == CURLINFO_HEADER_OUT
+				  || type == CURLINFO_TEXT)
+				 && (c == '\r' || c == '\n'))
+		{
+			/*
+			 * Don't bother emitting <0D><0A> for headers and text; it's not
+			 * helpful noise.
+			 */
+		}
+		else
+			appendPQExpBuffer(&buf, "<%02X>", c);
+
+		if (c == '\n')
+		{
+			appendPQExpBufferChar(&buf, c);
+			printed_prefix = false;
+		}
+	}
+
+	if (printed_prefix)
+		appendPQExpBufferChar(&buf, '\n');	/* finish the line */
+
+	fprintf(stderr, "%s", buf.data);
+	termPQExpBuffer(&buf);
+	return 0;
+}
+
+/*
+ * Initializes the two libcurl handles in the async_ctx. The multi handle,
+ * actx->curlm, is what drives the asynchronous engine and tells us what to do
+ * next. The easy handle, actx->curl, encapsulates the state for a single
+ * request/response. It's added to the multi handle as needed, during
+ * start_request().
+ */
+static bool
+setup_curl_handles(struct async_ctx *actx)
+{
+	/*
+	 * Create our multi handle. This encapsulates the entire conversation with
+	 * libcurl for this connection.
+	 */
+	actx->curlm = curl_multi_init();
+	if (!actx->curlm)
+	{
+		/* We don't get a lot of feedback on the failure reason. */
+		actx_error(actx, "failed to create libcurl multi handle");
+		return false;
+	}
+
+	/*
+	 * The multi handle tells us what to wait on using two callbacks. These
+	 * will manipulate actx->mux as needed.
+	 */
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETFUNCTION, register_socket, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_SOCKETDATA, actx, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERFUNCTION, register_timer, return false);
+	CHECK_MSETOPT(actx, CURLMOPT_TIMERDATA, actx, return false);
+
+	/*
+	 * Set up an easy handle. All of our requests are made serially, so we
+	 * only ever need to keep track of one.
+	 */
+	actx->curl = curl_easy_init();
+	if (!actx->curl)
+	{
+		actx_error(actx, "failed to create libcurl handle");
+		return false;
+	}
+
+	/*
+	 * Multi-threaded applications must set CURLOPT_NOSIGNAL. This requires us
+	 * to handle the possibility of SIGPIPE ourselves using pq_block_sigpipe;
+	 * see pg_fe_run_oauth_flow().
+	 *
+	 * NB: If libcurl is not built against a friendly DNS resolver (c-ares or
+	 * threaded), setting this option prevents DNS lookups from timing out
+	 * correctly. We warn about this situation at configure time.
+	 *
+	 * TODO: Perhaps there's a clever way to warn the user about synchronous
+	 * DNS at runtime too? It's not immediately clear how to do that in a
+	 * helpful way: for many standard single-threaded use cases, the user
+	 * might not care at all, so spraying warnings to stderr would probably do
+	 * more harm than good.
+	 */
+	CHECK_SETOPT(actx, CURLOPT_NOSIGNAL, 1L, return false);
+
+	if (actx->debugging)
+	{
+		/*
+		 * Set a callback for retrieving debug information from libcurl. The
+		 * callback only takes effect once CURLOPT_VERBOSE has been set, so
+		 * keep these two calls in this order.
+		 */
+		CHECK_SETOPT(actx, CURLOPT_DEBUGFUNCTION, debug_callback, return false);
+		CHECK_SETOPT(actx, CURLOPT_VERBOSE, 1L, return false);
+	}
+
+	CHECK_SETOPT(actx, CURLOPT_ERRORBUFFER, actx->curl_err, return false);
+
+	/*
+	 * Only HTTPS is allowed. (Debug mode additionally allows HTTP; this is
+	 * intended for testing only.)
+	 *
+	 * There's a bit of unfortunate complexity around the choice of
+	 * CURLoption. CURLOPT_PROTOCOLS is deprecated in modern Curls, but its
+	 * replacement didn't show up until relatively recently.
+	 */
+	{
+#if CURL_AT_LEAST_VERSION(7, 85, 0)
+		const CURLoption popt = CURLOPT_PROTOCOLS_STR;
+		const char *protos = "https";
+		const char *const unsafe = "https,http";
+#else
+		const CURLoption popt = CURLOPT_PROTOCOLS;
+		long		protos = CURLPROTO_HTTPS;
+		const long	unsafe = CURLPROTO_HTTPS | CURLPROTO_HTTP;
+#endif
+
+		if (actx->debugging)
+			protos = unsafe;
+
+		CHECK_SETOPT(actx, popt, protos, return false);
+	}
+
+	/*
+	 * If we're in debug mode, allow the developer to change the trusted CA
+	 * list. For now, this is not something we expose outside of the UNSAFE
+	 * mode, because it's not clear that it's useful in production: both libpq
+	 * and the user's browser must trust the same authorization servers for
+	 * the flow to work at all, so any changes to the roots are likely to be
+	 * done system-wide.
+	 */
+	if (actx->debugging)
+	{
+		const char *env;
+
+		if ((env = getenv("PGOAUTHCAFILE")) != NULL)
+			CHECK_SETOPT(actx, CURLOPT_CAINFO, env, return false);
+	}
+
+	/*
+	 * Suppress the Accept header to make our request as minimal as possible.
+	 * (Ideally we would set it to "application/json" instead, but OpenID is
+	 * pretty strict when it comes to provider behavior, so we have to check
+	 * what comes back anyway.)
+	 */
+	actx->headers = curl_slist_append(actx->headers, "Accept:");
+	if (actx->headers == NULL)
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+	CHECK_SETOPT(actx, CURLOPT_HTTPHEADER, actx->headers, return false);
+
+	return true;
+}
+
+/*
+ * Generic HTTP Request Handlers
+ */
+
+/*
+ * Response callback from libcurl which appends the response body into
+ * actx->work_data (see start_request()). The maximum size of each chunk of
+ * data is defined by CURL_MAX_WRITE_SIZE, which is 16kB by default (and can
+ * only be changed by recompiling libcurl).
+ */
+static size_t
+append_data(char *buf, size_t size, size_t nmemb, void *userdata)
+{
+	struct async_ctx *actx = userdata;
+	PQExpBuffer resp = &actx->work_data;
+	size_t		len = size * nmemb;
+
+	/* If this chunk would push the response over our size limit, abort */
+	if ((resp->len + len) > MAX_OAUTH_RESPONSE_SIZE)
+	{
+		actx_error(actx, "response is too large");
+		return 0;
+	}
+
+	/* The data passed from libcurl is not null-terminated */
+	appendBinaryPQExpBuffer(resp, buf, len);
+
+	/*
+	 * If we ran out of memory while accepting the data, signal an error to
+	 * abort the transfer.
+	 */
+	if (PQExpBufferBroken(resp))
+	{
+		actx_error(actx, "out of memory");
+		return 0;
+	}
+
+	return len;
+}
+
+/*
+ * Begins an HTTP request on the multi handle. The caller should have set up all
+ * request-specific options on actx->curl first. The server's response body will
+ * be accumulated in actx->work_data (which will be reset, so don't store
+ * anything important there across this call).
+ *
+ * Once a request is queued, it can be driven to completion via drive_request().
+ * If actx->running is zero upon return, the request has already finished and
+ * drive_request() can be called without returning control to the client.
+ */
+static bool
+start_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+
+	resetPQExpBuffer(&actx->work_data);
+	CHECK_SETOPT(actx, CURLOPT_WRITEFUNCTION, append_data, return false);
+	CHECK_SETOPT(actx, CURLOPT_WRITEDATA, actx, return false);
+
+	err = curl_multi_add_handle(actx->curlm, actx->curl);
+	if (err)
+	{
+		actx_error(actx, "failed to queue HTTP request: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	/*
+	 * actx->running tracks the number of running handles, so we can
+	 * immediately call back if no waiting is needed.
+	 *
+	 * Even though this is nominally an asynchronous process, there are some
+	 * operations that can synchronously fail by this point (e.g. connections
+	 * to closed local ports) or even synchronously succeed if the stars align
+	 * (all the libcurl connection caches hit and the server is fast).
+	 */
+	err = curl_multi_socket_action(actx->curlm, CURL_SOCKET_TIMEOUT, 0, &actx->running);
+	if (err)
+	{
+		actx_error(actx, "asynchronous HTTP request failed: %s",
+				   curl_multi_strerror(err));
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * CURL_IGNORE_DEPRECATION was added in 7.87.0. If it's not defined, we can make
+ * it a no-op.
+ */
+#ifndef CURL_IGNORE_DEPRECATION
+#define CURL_IGNORE_DEPRECATION(x) x
+#endif
+
+/*
+ * Drives the multi handle towards completion. The caller should have already
+ * set up an asynchronous request via start_request().
+ */
+static PostgresPollingStatusType
+drive_request(struct async_ctx *actx)
+{
+	CURLMcode	err;
+	CURLMsg    *msg;
+	int			msgs_left;
+	bool		done;
+
+	if (actx->running)
+	{
+		/*---
+		 * There's an async request in progress. Pump the multi handle.
+		 *
+		 * curl_multi_socket_all() is officially deprecated, because it's
+		 * inefficient and pointless if your event loop has already handed you
+		 * the exact sockets that are ready. But that's not our use case --
+		 * our client has no way to tell us which sockets are ready. (They
+		 * don't even know there are sockets to begin with.)
+		 *
+		 * We can grab the list of triggered events from the multiplexer
+		 * ourselves, but that's effectively what curl_multi_socket_all() is
+		 * going to do. And there are currently no plans for the Curl project
+		 * to remove or break this API, so ignore the deprecation. See
+		 *
+		 *    https://curl.se/mail/lib-2024-11/0028.html
+		 *
+		 */
+		CURL_IGNORE_DEPRECATION(
+			err = curl_multi_socket_all(actx->curlm, &actx->running);
+		)
+
+		if (err)
+		{
+			actx_error(actx, "asynchronous HTTP request failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		if (actx->running)
+		{
+			/* We'll come back again. */
+			return PGRES_POLLING_READING;
+		}
+	}
+
+	done = false;
+	while ((msg = curl_multi_info_read(actx->curlm, &msgs_left)) != NULL)
+	{
+		if (msg->msg != CURLMSG_DONE)
+		{
+			/*
+			 * Future libcurl versions may define new message types; we don't
+			 * know how to handle them, so we'll ignore them.
+			 */
+			continue;
+		}
+
+		/* First check the status of the request itself. */
+		if (msg->data.result != CURLE_OK)
+		{
+			/*
+			 * If a more specific error hasn't already been reported, use
+			 * libcurl's description.
+			 */
+			if (actx->errbuf.len == 0)
+				actx_error_str(actx, curl_easy_strerror(msg->data.result));
+
+			return PGRES_POLLING_FAILED;
+		}
+
+		/* Now remove the finished handle; we'll add it back later if needed. */
+		err = curl_multi_remove_handle(actx->curlm, msg->easy_handle);
+		if (err)
+		{
+			actx_error(actx, "libcurl easy handle removal failed: %s",
+					   curl_multi_strerror(err));
+			return PGRES_POLLING_FAILED;
+		}
+
+		done = true;
+	}
+
+	/* Sanity check. */
+	if (!done)
+	{
+		actx_error(actx, "no result was retrieved for the finished handle");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return PGRES_POLLING_OK;
+}
+
+/*
+ * URL-Encoding Helpers
+ */
+
+/*
+ * Encodes a string using the application/x-www-form-urlencoded format, and
+ * appends it to the given buffer.
+ */
+static void
+append_urlencoded(PQExpBuffer buf, const char *s)
+{
+	char	   *escaped;
+	char	   *haystack;
+	char	   *match;
+
+	/* The first parameter to curl_easy_escape is deprecated by Curl */
+	escaped = curl_easy_escape(NULL, s, 0);
+	if (!escaped)
+	{
+		termPQExpBuffer(buf);	/* mark the buffer broken */
+		return;
+	}
+
+	/*
+	 * curl_easy_escape() almost does what we want, but we need the
+	 * query-specific flavor which uses '+' instead of '%20' for spaces. The
+	 * Curl command-line tool does this with a simple search-and-replace, so
+	 * follow its lead.
+	 */
+	haystack = escaped;
+
+	while ((match = strstr(haystack, "%20")) != NULL)
+	{
+		/* Append the unmatched portion, followed by the plus sign. */
+		appendBinaryPQExpBuffer(buf, haystack, match - haystack);
+		appendPQExpBufferChar(buf, '+');
+
+		/* Keep searching after the match. */
+		haystack = match + 3 /* strlen("%20") */ ;
+	}
+
+	/* Push the remainder of the string onto the buffer. */
+	appendPQExpBufferStr(buf, haystack);
+
+	curl_free(escaped);
+}
+
+/*
+ * Convenience wrapper for encoding a single string. Returns NULL on allocation
+ * failure.
+ */
+static char *
+urlencode(const char *s)
+{
+	PQExpBufferData buf;
+
+	initPQExpBuffer(&buf);
+	append_urlencoded(&buf, s);
+
+	return PQExpBufferDataBroken(buf) ? NULL : buf.data;
+}
+
+/*
+ * Appends a key/value pair to the end of an application/x-www-form-urlencoded
+ * list.
+ */
+static void
+build_urlencoded(PQExpBuffer buf, const char *key, const char *value)
+{
+	if (buf->len)
+		appendPQExpBufferChar(buf, '&');
+
+	append_urlencoded(buf, key);
+	appendPQExpBufferChar(buf, '=');
+	append_urlencoded(buf, value);
+}
+
+/*
+ * Specific HTTP Request Handlers
+ *
+ * This is finally the beginning of the actual application logic. Generally
+ * speaking, a single request consists of a start_* and a finish_* step, with
+ * drive_request() pumping the machine in between.
+ */
+
+/*
+ * Queue an OpenID Provider Configuration Request:
+ *
+ *     https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfigurationRequest
+ *     https://www.rfc-editor.org/rfc/rfc8414#section-3.1
+ *
+ * This is done first to get the endpoint URIs we need to contact and to make
+ * sure the provider provides a device authorization flow. finish_discovery()
+ * will fill in actx->provider.
+ */
+static bool
+start_discovery(struct async_ctx *actx, const char *discovery_uri)
+{
+	CHECK_SETOPT(actx, CURLOPT_HTTPGET, 1L, return false);
+	CHECK_SETOPT(actx, CURLOPT_URL, discovery_uri, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_discovery(struct async_ctx *actx)
+{
+	long		response_code;
+
+	/*----
+	 * Now check the response. OIDC Discovery 1.0 is pretty strict:
+	 *
+	 *     A successful response MUST use the 200 OK HTTP status code and
+	 *     return a JSON object using the application/json content type that
+	 *     contains a set of Claims as its members that are a subset of the
+	 *     Metadata values defined in Section 3.
+	 *
+	 * Compared to standard HTTP semantics, this makes life easy -- we don't
+	 * need to worry about redirections (which would call the Issuer host
+	 * validation into question), or non-authoritative responses, or any other
+	 * complications.
+	 */
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	if (response_code != 200)
+	{
+		actx_error(actx, "unexpected response code %ld", response_code);
+		return false;
+	}
+
+	/*
+	 * Pull the fields we care about from the document.
+	 */
+	actx->errctx = "failed to parse OpenID discovery document";
+	if (!parse_provider(actx, &actx->provider))
+		return false;			/* error message already set */
+
+	/*
+	 * Fill in any defaults for OPTIONAL/RECOMMENDED fields we care about.
+	 */
+	if (!actx->provider.grant_types_supported)
+	{
+		/*
+		 * Per Section 3, the default is ["authorization_code", "implicit"].
+		 */
+		struct curl_slist *temp = actx->provider.grant_types_supported;
+
+		temp = curl_slist_append(temp, "authorization_code");
+		if (temp)
+		{
+			temp = curl_slist_append(temp, "implicit");
+		}
+
+		if (!temp)
+		{
+			actx_error(actx, "out of memory");
+			return false;
+		}
+
+		actx->provider.grant_types_supported = temp;
+	}
+
+	return true;
+}
+
+/*
+ * Ensure that the discovery document is provided by the expected issuer.
+ * Currently, issuers are statically configured in the connection string.
+ */
+static bool
+check_issuer(struct async_ctx *actx, PGconn *conn)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+
+	/*---
+	 * We require strict equality for issuer identifiers -- no path or case
+	 * normalization, no substitution of default ports and schemes, etc. This
+	 * is done to match the rules in OIDC Discovery Sec. 4.3 for config
+	 * validation:
+	 *
+	 *    The issuer value returned MUST be identical to the Issuer URL that
+	 *    was used as the prefix to /.well-known/openid-configuration to
+	 *    retrieve the configuration information.
+	 *
+	 * as well as the rules set out in RFC 9207 for avoiding mix-up attacks:
+	 *
+	 *    Clients MUST then [...] compare the result to the issuer identifier
+	 *    of the authorization server where the authorization request was
+	 *    sent to. This comparison MUST use simple string comparison as defined
+	 *    in Section 6.2.1 of [RFC3986].
+	 */
+	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	{
+		actx_error(actx,
+				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
+				   provider->issuer, conn->oauth_issuer_id);
+		return false;
+	}
+
+	return true;
+}
+
+#define HTTPS_SCHEME "https://"
+#define OAUTH_GRANT_TYPE_DEVICE_CODE "urn:ietf:params:oauth:grant-type:device_code"
+
+/*
+ * Ensure that the provider supports the Device Authorization flow (i.e. it
+ * provides an authorization endpoint, and both the token and authorization
+ * endpoint URLs seem reasonable).
+ */
+static bool
+check_for_device_flow(struct async_ctx *actx)
+{
+	const struct provider *provider = &actx->provider;
+
+	Assert(provider->issuer);	/* ensured by parse_provider() */
+	Assert(provider->token_endpoint);	/* ensured by parse_provider() */
+
+	if (!provider->device_authorization_endpoint)
+	{
+		actx_error(actx,
+				   "issuer \"%s\" does not provide a device authorization endpoint",
+				   provider->issuer);
+		return false;
+	}
+
+	/*
+	 * The original implementation checked that OAUTH_GRANT_TYPE_DEVICE_CODE
+	 * was present in the discovery document's grant_types_supported list. MS
+	 * Entra does not advertise this grant type, though, and since it doesn't
+	 * make sense to stand up a device_authorization_endpoint without also
+	 * accepting device codes at the token_endpoint, that's the only thing we
+	 * currently require.
+	 */
+
+	/*
+	 * Although libcurl will fail later if the URL contains an unsupported
+	 * scheme, that error message is going to be a bit opaque. This is a
+	 * decent time to bail out if we're not using HTTPS for the endpoints
+	 * we'll use for the flow.
+	 */
+	if (!actx->debugging)
+	{
+		if (pg_strncasecmp(provider->device_authorization_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "device authorization endpoint \"%s\" must use HTTPS",
+					   provider->device_authorization_endpoint);
+			return false;
+		}
+
+		if (pg_strncasecmp(provider->token_endpoint,
+						   HTTPS_SCHEME, strlen(HTTPS_SCHEME)) != 0)
+		{
+			actx_error(actx,
+					   "token endpoint \"%s\" must use HTTPS",
+					   provider->token_endpoint);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Adds the client ID (and secret, if provided) to the current request, using
+ * either HTTP headers or the request body.
+ */
+static bool
+add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
+{
+	bool		success = false;
+	char	   *username = NULL;
+	char	   *password = NULL;
+
+	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	{
+		/*----
+		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
+		 * Sec. 2.3.1,
+		 *
+		 *   Including the client credentials in the request-body using the
+		 *   two parameters is NOT RECOMMENDED and SHOULD be limited to
+		 *   clients unable to directly utilize the HTTP Basic authentication
+		 *   scheme (or other password-based HTTP authentication schemes).
+		 *
+		 * Additionally:
+		 *
+		 *   The client identifier is encoded using the
+		 *   "application/x-www-form-urlencoded" encoding algorithm per Appendix
+		 *   B, and the encoded value is used as the username; the client
+		 *   password is encoded using the same algorithm and used as the
+		 *   password.
+		 *
+		 * (Appendix B modifies application/x-www-form-urlencoded by requiring
+		 * an initial UTF-8 encoding step. Since the client ID and secret must
+		 * both be 7-bit ASCII -- RFC 6749 Appendix A -- we don't worry about
+		 * that in this function.)
+		 *
+		 * client_id is not added to the request body in this case. Not only
+		 * would it be redundant, but some providers in the wild (e.g. Okta)
+		 * refuse to accept it.
+		 */
+		username = urlencode(conn->oauth_client_id);
+		password = urlencode(conn->oauth_client_secret);
+
+		if (!username || !password)
+		{
+			actx_error(actx, "out of memory");
+			goto cleanup;
+		}
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
+		CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
+
+		actx->used_basic_auth = true;
+	}
+	else
+	{
+		/*
+		 * If we're not otherwise authenticating, client_id is REQUIRED in the
+		 * request body.
+		 */
+		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+
+		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
+		actx->used_basic_auth = false;
+	}
+
+	success = true;
+
+cleanup:
+	free(username);
+	free(password);
+
+	return success;
+}
+
+/*
+ * Queue a Device Authorization Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.1
+ *
+ * This is the second step. We ask the provider to verify the end user out of
+ * band and authorize us to act on their behalf; it will give us the required
+ * nonces for us to later poll the request status, which we'll grab in
+ * finish_device_authz().
+ */
+static bool
+start_device_authz(struct async_ctx *actx, PGconn *conn)
+{
+	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	if (conn->oauth_scope && conn->oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, device_authz_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_device_authz(struct async_ctx *actx)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 8628, Section 3.2, a successful device authorization response
+	 * uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse device authorization";
+		if (!parse_device_authz(actx, &actx->authz))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * The device authorization endpoint uses the same error response as the
+	 * token endpoint, so the error handling roughly follows
+	 * finish_token_request(). The key difference is that an error here is
+	 * immediately fatal.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		struct token_error err = {0};
+
+		if (!parse_token_error(actx, &err))
+		{
+			free_token_error(&err);
+			return false;
+		}
+
+		/* Copy the token error into the context error buffer */
+		record_token_error(actx, &err);
+
+		free_token_error(&err);
+		return false;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Queue a Device Access Token Request:
+ *
+ *     https://www.rfc-editor.org/rfc/rfc8628#section-3.4
+ *
+ * This is the final step. We continually poll the token endpoint to see if the
+ * user has authorized us yet. finish_token_request() will pull either the token
+ * or an (ideally temporary) error status from the provider.
+ */
+static bool
+start_token_request(struct async_ctx *actx, PGconn *conn)
+{
+	const char *token_uri = actx->provider.token_endpoint;
+	const char *device_code = actx->authz.device_code;
+	PQExpBuffer work_buffer = &actx->work_data;
+
+	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(token_uri);			/* ensured by parse_provider() */
+	Assert(device_code);		/* ensured by parse_device_authz() */
+
+	/* Construct our request body. */
+	resetPQExpBuffer(work_buffer);
+	build_urlencoded(work_buffer, "device_code", device_code);
+	build_urlencoded(work_buffer, "grant_type", OAUTH_GRANT_TYPE_DEVICE_CODE);
+
+	if (!add_client_identification(actx, work_buffer, conn))
+		return false;
+
+	if (PQExpBufferBroken(work_buffer))
+	{
+		actx_error(actx, "out of memory");
+		return false;
+	}
+
+	/* Make our request. */
+	CHECK_SETOPT(actx, CURLOPT_URL, token_uri, return false);
+	CHECK_SETOPT(actx, CURLOPT_COPYPOSTFIELDS, work_buffer->data, return false);
+
+	return start_request(actx);
+}
+
+static bool
+finish_token_request(struct async_ctx *actx, struct token *tok)
+{
+	long		response_code;
+
+	CHECK_GETINFO(actx, CURLINFO_RESPONSE_CODE, &response_code, return false);
+
+	/*
+	 * Per RFC 6749, Section 5, a successful response uses 200 OK.
+	 */
+	if (response_code == 200)
+	{
+		actx->errctx = "failed to parse access token response";
+		if (!parse_access_token(actx, tok))
+			return false;		/* error message already set */
+
+		return true;
+	}
+
+	/*
+	 * An error response uses either 400 Bad Request or 401 Unauthorized.
+	 * There are reports of implementations in the wild that return 403 for
+	 * errors, which would violate the specification. For now we stick to the
+	 * specification, but we may have to revisit this.
+	 */
+	if (response_code == 400 || response_code == 401)
+	{
+		if (!parse_token_error(actx, &tok->err))
+			return false;
+
+		return true;
+	}
+
+	/* Any other response codes are considered invalid */
+	actx_error(actx, "unexpected response code %ld", response_code);
+	return false;
+}
+
+/*
+ * Finishes the token request and examines the response. If the flow has
+ * completed, a valid token will be returned via the parameter list. Otherwise,
+ * the token parameter remains unchanged, and the caller needs to wait for
+ * another interval (which may have been increased in response to a slow_down
+ * message from the server) before starting a new token request.
+ *
+ * False is returned only for permanent error conditions.
+ */
+static bool
+handle_token_response(struct async_ctx *actx, char **token)
+{
+	bool		success = false;
+	struct token tok = {0};
+	const struct token_error *err;
+
+	if (!finish_token_request(actx, &tok))
+		goto token_cleanup;
+
+	/* A successful token request gives either a token or an in-band error. */
+	Assert(tok.access_token || tok.err.error);
+
+	if (tok.access_token)
+	{
+		*token = tok.access_token;
+		tok.access_token = NULL;
+
+		success = true;
+		goto token_cleanup;
+	}
+
+	/*
+	 * authorization_pending and slow_down are the only acceptable errors;
+	 * anything else and we bail. These are defined in RFC 8628, Sec. 3.5.
+	 */
+	err = &tok.err;
+	if (strcmp(err->error, "authorization_pending") != 0 &&
+		strcmp(err->error, "slow_down") != 0)
+	{
+		record_token_error(actx, err);
+		goto token_cleanup;
+	}
+
+	/*
+	 * A slow_down error requires us to permanently increase our retry
+	 * interval by five seconds.
+	 */
+	if (strcmp(err->error, "slow_down") == 0)
+	{
+		int			prev_interval = actx->authz.interval;
+
+		actx->authz.interval += 5;
+		if (actx->authz.interval < prev_interval)
+		{
+			actx_error(actx, "slow_down interval overflow");
+			goto token_cleanup;
+		}
+	}
+
+	success = true;
+
+token_cleanup:
+	free_token(&tok);
+	return success;
+}
+
+/*
+ * Displays a device authorization prompt for action by the end user, either via
+ * the PQauthDataHook, or by a message on standard error if no hook is set.
+ */
+static bool
+prompt_user(struct async_ctx *actx, PGconn *conn)
+{
+	int			res;
+	PGpromptOAuthDevice prompt = {
+		.verification_uri = actx->authz.verification_uri,
+		.user_code = actx->authz.user_code,
+		.verification_uri_complete = actx->authz.verification_uri_complete,
+		.expires_in = actx->authz.expires_in,
+	};
+
+	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+
+	if (!res)
+	{
+		/*
+		 * translator: The first %s is a URL for the user to visit in a
+		 * browser, and the second %s is a code to be copy-pasted there.
+		 */
+		fprintf(stderr, libpq_gettext("Visit %s and enter the code: %s\n"),
+				prompt.verification_uri, prompt.user_code);
+	}
+	else if (res < 0)
+	{
+		actx_error(actx, "device prompt failed");
+		return false;
+	}
+
+	return true;
+}
+
+/*
+ * Calls curl_global_init() in a thread-safe way.
+ *
+ * libcurl has stringent requirements for the thread context in which you call
+ * curl_global_init(), because it's going to try initializing a bunch of other
+ * libraries (OpenSSL, Winsock, etc). Recent versions of libcurl have improved
+ * the thread-safety situation, but there's a chicken-and-egg problem at
+ * runtime: you can't check the thread safety until you've initialized libcurl,
+ * which you can't do from within a thread unless you know it's thread-safe...
+ *
+ * Returns true if initialization was successful. Successful or not, this
+ * function will not try to reinitialize Curl on successive calls.
+ */
+static bool
+initialize_curl(PGconn *conn)
+{
+	/*
+	 * Don't let the compiler play tricks with this variable. In the
+	 * HAVE_THREADSAFE_CURL_GLOBAL_INIT case, we don't care if two threads
+	 * enter simultaneously, but we do care if this gets set transiently to
+	 * PG_BOOL_YES/NO in cases where that's not the final answer.
+	 */
+	static volatile PGTernaryBool init_successful = PG_BOOL_UNKNOWN;
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	curl_version_info_data *info;
+#endif
+
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * Lock around the whole function. If a libpq client performs its own work
+	 * with libcurl, it must either ensure that Curl is initialized safely
+	 * before calling us (in which case our call will be a no-op), or else it
+	 * must guard its own calls to curl_global_init() with a registered
+	 * threadlock handler. See PQregisterThreadLock().
+	 */
+	pglock_thread();
+#endif
+
+	/*
+	 * Skip initialization if we've already done it. (Curl tracks the number
+	 * of calls; there's no point in incrementing the counter every time we
+	 * connect.)
+	 */
+	if (init_successful == PG_BOOL_YES)
+		goto done;
+	else if (init_successful == PG_BOOL_NO)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init previously failed during OAuth setup");
+		goto done;
+	}
+
+	/*
+	 * We know we've already initialized Winsock by this point (see
+	 * pqMakeEmptyPGconn()), so we should be able to safely skip that bit. But
+	 * we have to tell libcurl to initialize everything else, because other
+	 * pieces of our client executable may already be using libcurl for their
+	 * own purposes. If we initialize libcurl with only a subset of its
+	 * features, we could break those other clients nondeterministically, and
+	 * that would probably be a nightmare to debug.
+	 *
+	 * If some other part of the program has already called this, it's a
+	 * no-op.
+	 */
+	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
+	{
+		libpq_append_conn_error(conn,
+								"curl_global_init failed during OAuth setup");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+
+#if HAVE_THREADSAFE_CURL_GLOBAL_INIT
+
+	/*
+	 * If we determined at configure time that the Curl installation is
+	 * thread-safe, our job here is much easier. We simply initialize above
+	 * without any locking (concurrent or duplicated calls are fine in that
+	 * situation), then double-check to make sure the runtime setting agrees,
+	 * to try to catch silent downgrades.
+	 */
+	info = curl_version_info(CURLVERSION_NOW);
+	if (!(info->features & CURL_VERSION_THREADSAFE))
+	{
+		/*
+		 * In a downgrade situation, the damage is already done. Curl global
+		 * state may be corrupted. Be noisy.
+		 */
+		libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
+								"\tCurl initialization was reported thread-safe when libpq\n"
+								"\twas compiled, but the currently installed version of\n"
+								"\tlibcurl reports that it is not. Recompile libpq against\n"
+								"\tthe installed version of libcurl.");
+		init_successful = PG_BOOL_NO;
+		goto done;
+	}
+#endif
+
+	init_successful = PG_BOOL_YES;
+
+done:
+#if !HAVE_THREADSAFE_CURL_GLOBAL_INIT
+	pgunlock_thread();
+#endif
+	return (init_successful == PG_BOOL_YES);
+}
+
+/*
+ * The core nonblocking libcurl implementation. This will be called several
+ * times to pump the async engine.
+ *
+ * The architecture is based on PQconnectPoll(). The first half drives the
+ * connection state forward as necessary, returning if we're not ready to
+ * proceed to the next step yet. The second half performs the actual transition
+ * between states.
+ *
+ * You can trace the overall OAuth flow through the second half. It's linear
+ * until we get to the end, where we flip back and forth between
+ * OAUTH_STEP_TOKEN_REQUEST and OAUTH_STEP_WAIT_INTERVAL to regularly ping the
+ * provider.
+ */
+static PostgresPollingStatusType
+pg_fe_run_oauth_flow_impl(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	struct async_ctx *actx;
+
+	if (!initialize_curl(conn))
+		return PGRES_POLLING_FAILED;
+
+	if (!state->async_ctx)
+	{
+		/*
+		 * Create our asynchronous state, and hook it into the upper-level
+		 * OAuth state immediately, so any failures below won't leak the
+		 * context allocation.
+		 */
+		actx = calloc(1, sizeof(*actx));
+		if (!actx)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		actx->mux = PGINVALID_SOCKET;
+		actx->timerfd = -1;
+
+		/* Should we enable unsafe features? */
+		actx->debugging = oauth_unsafe_debugging_enabled();
+
+		state->async_ctx = actx;
+
+		initPQExpBuffer(&actx->work_data);
+		initPQExpBuffer(&actx->errbuf);
+
+		if (!setup_multiplexer(actx))
+			goto error_return;
+
+		if (!setup_curl_handles(actx))
+			goto error_return;
+	}
+
+	actx = state->async_ctx;
+
+	do
+	{
+		/* By default, the multiplexer is the altsock. Reassign as desired. */
+		conn->altsock = actx->mux;
+
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+			case OAUTH_STEP_TOKEN_REQUEST:
+				{
+					PostgresPollingStatusType status;
+
+					status = drive_request(actx);
+
+					if (status == PGRES_POLLING_FAILED)
+						goto error_return;
+					else if (status != PGRES_POLLING_OK)
+					{
+						/* not done yet */
+						return status;
+					}
+
+					break;
+				}
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+
+				/*
+				 * The client application is supposed to wait until our timer
+				 * expires before calling PQconnectPoll() again, but that
+				 * might not happen. To avoid sending a token request early,
+				 * check the timer before continuing.
+				 */
+				if (!timer_expired(actx))
+				{
+					conn->altsock = actx->timerfd;
+					return PGRES_POLLING_READING;
+				}
+
+				/* Disable the expired timer. */
+				if (!set_timer(actx, -1))
+					goto error_return;
+
+				break;
+		}
+
+		/*
+		 * Each case here must ensure that actx->running is set while we're
+		 * waiting on some asynchronous work. Most cases rely on
+		 * start_request() to do that for them.
+		 */
+		switch (actx->step)
+		{
+			case OAUTH_STEP_INIT:
+				actx->errctx = "failed to fetch OpenID discovery document";
+				if (!start_discovery(actx, conn->oauth_discovery_uri))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DISCOVERY;
+				break;
+
+			case OAUTH_STEP_DISCOVERY:
+				if (!finish_discovery(actx))
+					goto error_return;
+
+				if (!check_issuer(actx, conn))
+					goto error_return;
+
+				actx->errctx = "cannot run OAuth device authorization";
+				if (!check_for_device_flow(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain device authorization";
+				if (!start_device_authz(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_DEVICE_AUTHORIZATION;
+				break;
+
+			case OAUTH_STEP_DEVICE_AUTHORIZATION:
+				if (!finish_device_authz(actx))
+					goto error_return;
+
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+
+			case OAUTH_STEP_TOKEN_REQUEST:
+				if (!handle_token_response(actx, &conn->oauth_token))
+					goto error_return;
+
+				if (!actx->user_prompted)
+				{
+					/*
+					 * Now that we know the token endpoint isn't broken, give
+					 * the user the login instructions.
+					 */
+					if (!prompt_user(actx, conn))
+						goto error_return;
+
+					actx->user_prompted = true;
+				}
+
+				if (conn->oauth_token)
+					break;		/* done! */
+
+				/*
+				 * Wait for the required interval before issuing the next
+				 * request.
+				 */
+				if (!set_timer(actx, actx->authz.interval * 1000))
+					goto error_return;
+
+				/*
+				 * No Curl requests are running, so we can simplify by having
+				 * the client wait directly on the timerfd rather than the
+				 * multiplexer.
+				 */
+				conn->altsock = actx->timerfd;
+
+				actx->step = OAUTH_STEP_WAIT_INTERVAL;
+				actx->running = 1;
+				break;
+
+			case OAUTH_STEP_WAIT_INTERVAL:
+				actx->errctx = "failed to obtain access token";
+				if (!start_token_request(actx, conn))
+					goto error_return;
+
+				actx->step = OAUTH_STEP_TOKEN_REQUEST;
+				break;
+		}
+
+		/*
+		 * The vast majority of the time, if we don't have a token at this
+		 * point, actx->running will be set. But there are some corner cases
+		 * where we can immediately loop back around; see start_request().
+		 */
+	} while (!conn->oauth_token && !actx->running);
+
+	/* If we've stored a token, we're done. Otherwise come back later. */
+	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+
+error_return:
+
+	/*
+	 * Assemble the three parts of our error: context, body, and detail. See
+	 * also the documentation for struct async_ctx.
+	 */
+	if (actx->errctx)
+	{
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(&conn->errorMessage, ": ");
+	}
+
+	if (PQExpBufferDataBroken(actx->errbuf))
+		appendPQExpBufferStr(&conn->errorMessage,
+							 libpq_gettext("out of memory"));
+	else
+		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+
+	if (actx->curl_err[0])
+	{
+		size_t		len;
+
+		appendPQExpBuffer(&conn->errorMessage,
+						  " (libcurl: %s)", actx->curl_err);
+
+		/* Sometimes libcurl adds a newline to the error buffer. :( */
+		len = conn->errorMessage.len;
+		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		{
+			conn->errorMessage.data[len - 2] = ')';
+			conn->errorMessage.data[len - 1] = '\0';
+			conn->errorMessage.len--;
+		}
+	}
+
+	appendPQExpBufferStr(&conn->errorMessage, "\n");
+
+	return PGRES_POLLING_FAILED;
+}
+
+/*
+ * The top-level entry point. This is a convenient place to put necessary
+ * wrapper logic before handing off to the true implementation, above.
+ */
+PostgresPollingStatusType
+pg_fe_run_oauth_flow(PGconn *conn)
+{
+	PostgresPollingStatusType result;
+#ifndef WIN32
+	sigset_t	osigset;
+	bool		sigpipe_pending;
+	bool		masked;
+
+	/*---
+	 * Ignore SIGPIPE on this thread during all Curl processing.
+	 *
+	 * Because we support multiple threads, we have to set up libcurl with
+	 * CURLOPT_NOSIGNAL, which disables its default global handling of
+	 * SIGPIPE. From the Curl docs:
+	 *
+	 *     libcurl makes an effort to never cause such SIGPIPE signals to
+	 *     trigger, but some operating systems have no way to avoid them and
+	 *     even on those that have there are some corner cases when they may
+	 *     still happen, contrary to our desire.
+	 *
+	 * Note that libcurl is also at the mercy of its DNS resolution and SSL
+	 * libraries; if any of them forget a MSG_NOSIGNAL then we're in trouble.
+	 * Modern platforms and libraries seem to get it right, so this is a
+	 * difficult corner case to exercise in practice, and unfortunately it's
+	 * not really clear whether it's necessary in all cases.
+	 */
+	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+#endif
+
+	result = pg_fe_run_oauth_flow_impl(conn);
+
+#ifndef WIN32
+	if (masked)
+	{
+		/*
+		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
+		 * way of knowing at this level).
+		 */
+		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+	}
+#endif
+
+	return result;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
new file mode 100644
index 00000000000..fb1e9a1a8aa
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -0,0 +1,1163 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.c
+ *	   The front-end (client) implementation of OAuth/OIDC authentication
+ *	   using the SASL OAUTHBEARER mechanism.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "common/base64.h"
+#include "common/hmac.h"
+#include "common/jsonapi.h"
+#include "common/oauth-common.h"
+#include "fe-auth.h"
+#include "fe-auth-oauth.h"
+#include "mb/pg_wchar.h"
+
+/* The exported OAuth callback mechanism. */
+static void *oauth_init(PGconn *conn, const char *password,
+						const char *sasl_mechanism);
+static SASLStatus oauth_exchange(void *opaq, bool final,
+								 char *input, int inputlen,
+								 char **output, int *outputlen);
+static bool oauth_channel_bound(void *opaq);
+static void oauth_free(void *opaq);
+
+const pg_fe_sasl_mech pg_oauth_mech = {
+	oauth_init,
+	oauth_exchange,
+	oauth_channel_bound,
+	oauth_free,
+};
+
+/*
+ * Initializes mechanism state for OAUTHBEARER.
+ *
+ * For a full description of the API, see libpq/fe-auth-sasl.h.
+ */
+static void *
+oauth_init(PGconn *conn, const char *password,
+		   const char *sasl_mechanism)
+{
+	fe_oauth_state *state;
+
+	/*
+	 * We only support one SASL mechanism here; anything else is programmer
+	 * error.
+	 */
+	Assert(sasl_mechanism != NULL);
+	Assert(strcmp(sasl_mechanism, OAUTHBEARER_NAME) == 0);
+
+	state = calloc(1, sizeof(*state));
+	if (!state)
+		return NULL;
+
+	state->step = FE_OAUTH_INIT;
+	state->conn = conn;
+
+	return state;
+}
+
+/*
+ * Frees the state allocated by oauth_init().
+ *
+ * This handles only mechanism state tied to the connection lifetime; state
+ * stored in state->async_ctx is freed up either immediately after the
+ * authentication handshake succeeds, or before the mechanism is cleaned up on
+ * failure. See pg_fe_cleanup_oauth_flow() and cleanup_user_oauth_flow().
+ */
+static void
+oauth_free(void *opaq)
+{
+	fe_oauth_state *state = opaq;
+
+	/* Any async authentication state should have been cleaned up already. */
+	Assert(!state->async_ctx);
+
+	free(state);
+}
+
+#define kvsep "\x01"
+
+/*
+ * Constructs an OAUTHBEARER client initial response (RFC 7628, Sec. 3.1).
+ *
+ * If discover is true, the initial response will contain a request for the
+ * server's required OAuth parameters (Sec. 4.3). Otherwise, conn->oauth_token
+ * must be set; it will be sent as the connection's bearer token.
+ *
+ * Returns the response as a null-terminated string, or NULL on error.
+ */
+static char *
+client_initial_response(PGconn *conn, bool discover)
+{
+	static const char *const resp_format = "n,," kvsep "auth=%s%s" kvsep kvsep;
+
+	PQExpBufferData buf;
+	const char *authn_scheme;
+	char	   *response = NULL;
+	const char *token = conn->oauth_token;
+
+	if (discover)
+	{
+		/* Parameter discovery uses a completely empty auth value. */
+		authn_scheme = token = "";
+	}
+	else
+	{
+		/*
+		 * Use a Bearer authentication scheme (RFC 6750, Sec. 2.1). A trailing
+		 * space is used as a separator.
+		 */
+		authn_scheme = "Bearer ";
+
+		/* conn->oauth_token must have been set in this case. */
+		if (!token)
+		{
+			Assert(false);
+			libpq_append_conn_error(conn,
+									"internal error: no OAuth token was set for the connection");
+			return NULL;
+		}
+	}
+
+	initPQExpBuffer(&buf);
+	appendPQExpBuffer(&buf, resp_format, authn_scheme, token);
+
+	if (!PQExpBufferDataBroken(buf))
+		response = strdup(buf.data);
+	termPQExpBuffer(&buf);
+
+	if (!response)
+		libpq_append_conn_error(conn, "out of memory");
+
+	return response;
+}
+
+/*
+ * JSON Parser (for the OAUTHBEARER error result)
+ */
+
+/* Relevant JSON fields in the error result object. */
+#define ERROR_STATUS_FIELD "status"
+#define ERROR_SCOPE_FIELD "scope"
+#define ERROR_OPENID_CONFIGURATION_FIELD "openid-configuration"
+
+struct json_ctx
+{
+	char	   *errmsg;			/* any non-NULL value stops all processing */
+	PQExpBufferData errbuf;		/* backing memory for errmsg */
+	int			nested;			/* nesting level (zero is the top) */
+
+	const char *target_field_name;	/* points to a static allocation */
+	char	  **target_field;	/* see below */
+
+	/* target_field, if set, points to one of the following: */
+	char	   *status;
+	char	   *scope;
+	char	   *discovery_uri;
+};
+
+#define oauth_json_has_error(ctx) \
+	(PQExpBufferDataBroken((ctx)->errbuf) || (ctx)->errmsg)
+
+#define oauth_json_set_error(ctx, ...) \
+	do { \
+		appendPQExpBuffer(&(ctx)->errbuf, __VA_ARGS__); \
+		(ctx)->errmsg = (ctx)->errbuf.data; \
+	} while (0)
+
+static JsonParseErrorType
+oauth_json_object_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	++ctx->nested;
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_end(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	--ctx->nested;
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_object_field_start(void *state, char *name, bool isnull)
+{
+	struct json_ctx *ctx = state;
+
+	/* Only top-level keys are considered. */
+	if (ctx->nested == 1)
+	{
+		if (strcmp(name, ERROR_STATUS_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_STATUS_FIELD;
+			ctx->target_field = &ctx->status;
+		}
+		else if (strcmp(name, ERROR_SCOPE_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_SCOPE_FIELD;
+			ctx->target_field = &ctx->scope;
+		}
+		else if (strcmp(name, ERROR_OPENID_CONFIGURATION_FIELD) == 0)
+		{
+			ctx->target_field_name = ERROR_OPENID_CONFIGURATION_FIELD;
+			ctx->target_field = &ctx->discovery_uri;
+		}
+	}
+
+	return JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_array_start(void *state)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+	}
+	else if (ctx->target_field)
+	{
+		Assert(ctx->nested == 1);
+
+		oauth_json_set_error(ctx,
+							 libpq_gettext("field \"%s\" must be a string"),
+							 ctx->target_field_name);
+	}
+
+	return oauth_json_has_error(ctx) ? JSON_SEM_ACTION_FAILED : JSON_SUCCESS;
+}
+
+static JsonParseErrorType
+oauth_json_scalar(void *state, char *token, JsonTokenType type)
+{
+	struct json_ctx *ctx = state;
+
+	if (!ctx->nested)
+	{
+		ctx->errmsg = libpq_gettext("top-level element must be an object");
+		return JSON_SEM_ACTION_FAILED;
+	}
+
+	if (ctx->target_field)
+	{
+		if (ctx->nested != 1)
+		{
+			/*
+			 * ctx->target_field should not have been set for nested keys.
+			 * Assert, and don't continue any further in production builds.
+			 */
+			Assert(false);
+			oauth_json_set_error(ctx,
+								 "internal error: target scalar found at nesting level %d during OAUTHBEARER parsing",
+								 ctx->nested);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/*
+		 * We don't allow duplicate field names; error out if the target has
+		 * already been set.
+		 */
+		if (*ctx->target_field)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" is duplicated"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		/* The only fields we support are strings. */
+		if (type != JSON_TOKEN_STRING)
+		{
+			oauth_json_set_error(ctx,
+								 libpq_gettext("field \"%s\" must be a string"),
+								 ctx->target_field_name);
+			return JSON_SEM_ACTION_FAILED;
+		}
+
+		*ctx->target_field = strdup(token);
+		if (!*ctx->target_field)
+			return JSON_OUT_OF_MEMORY;
+
+		ctx->target_field = NULL;
+		ctx->target_field_name = NULL;
+	}
+	else
+	{
+		/* otherwise we just ignore it */
+	}
+
+	return JSON_SUCCESS;
+}
+
+#define HTTPS_SCHEME "https://"
+#define HTTP_SCHEME "http://"
+
+/*
+ * We support the two well-known suffixes defined by OIDC Discovery 1.0 and
+ * RFC 8414.
+ */
+#define WK_PREFIX "/.well-known/"
+#define OPENID_WK_SUFFIX "openid-configuration"
+#define OAUTH_WK_SUFFIX "oauth-authorization-server"
+
+/*
+ * Derives an issuer identifier from one of our recognized .well-known URIs,
+ * using the rules in RFC 8414.
+ */
+static char *
+issuer_from_well_known_uri(PGconn *conn, const char *wkuri)
+{
+	const char *authority_start = NULL;
+	const char *wk_start;
+	const char *wk_end;
+	char	   *issuer;
+	ptrdiff_t	start_offset,
+				end_offset;
+	size_t		end_len;
+
+	/*
+	 * https:// is required for issuer identifiers (RFC 8414, Sec. 2; OIDC
+	 * Discovery 1.0, Sec. 3). This is a case-insensitive comparison at this
+	 * level (but issuer identifier comparison at the level above this is
+	 * case-sensitive, so in practice it's probably moot).
+	 */
+	if (pg_strncasecmp(wkuri, HTTPS_SCHEME, strlen(HTTPS_SCHEME)) == 0)
+		authority_start = wkuri + strlen(HTTPS_SCHEME);
+
+	if (!authority_start
+		&& oauth_unsafe_debugging_enabled()
+		&& pg_strncasecmp(wkuri, HTTP_SCHEME, strlen(HTTP_SCHEME)) == 0)
+	{
+		/* Allow http:// for testing only. */
+		authority_start = wkuri + strlen(HTTP_SCHEME);
+	}
+
+	if (!authority_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must use HTTPS",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Well-known URIs in general may support queries and fragments, but the
+	 * two types we support here do not. (They must be constructed from the
+	 * components of issuer identifiers, which themselves may not contain any
+	 * queries or fragments.)
+	 *
+	 * It's important to check this first, to avoid getting tricked later by a
+	 * prefix buried inside a query or fragment.
+	 */
+	if (strpbrk(authority_start, "?#") != NULL)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" must not contain query or fragment components",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Find the start of the .well-known prefix. IETF rules (RFC 8615) state
+	 * this must be at the beginning of the path component, but OIDC defined
+	 * it at the end instead (OIDC Discovery 1.0, Sec. 4), so we have to
+	 * search for it anywhere.
+	 */
+	wk_start = strstr(authority_start, WK_PREFIX);
+	if (!wk_start)
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" is not a .well-known URI",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Now find the suffix type. We only support the two defined in OIDC
+	 * Discovery 1.0 and RFC 8414.
+	 */
+	wk_end = wk_start + strlen(WK_PREFIX);
+
+	if (strncmp(wk_end, OPENID_WK_SUFFIX, strlen(OPENID_WK_SUFFIX)) == 0)
+		wk_end += strlen(OPENID_WK_SUFFIX);
+	else if (strncmp(wk_end, OAUTH_WK_SUFFIX, strlen(OAUTH_WK_SUFFIX)) == 0)
+		wk_end += strlen(OAUTH_WK_SUFFIX);
+	else
+		wk_end = NULL;
+
+	/*
+	 * Even if there's a match, we still need to check to make sure the suffix
+	 * takes up the entire path segment, to weed out constructions like
+	 * "/.well-known/openid-configuration-bad".
+	 */
+	if (!wk_end || (*wk_end != '/' && *wk_end != '\0'))
+	{
+		libpq_append_conn_error(conn,
+								"OAuth discovery URI \"%s\" uses an unsupported .well-known suffix",
+								wkuri);
+		return NULL;
+	}
+
+	/*
+	 * Finally, make sure the .well-known components are provided either as a
+	 * prefix (IETF style) or as a postfix (OIDC style). In other words,
+	 * "https://localhost/a/.well-known/openid-configuration/b" is not allowed
+	 * to claim association with "https://localhost/a/b".
+	 */
+	if (*wk_end != '\0')
+	{
+		/*
+		 * It's not at the end, so it's required to be at the beginning of the
+		 * path. Find the starting slash.
+		 */
+		const char *path_start;
+
+		path_start = strchr(authority_start, '/');
+		Assert(path_start);		/* otherwise we wouldn't have found WK_PREFIX */
+
+		if (wk_start != path_start)
+		{
+			libpq_append_conn_error(conn,
+									"OAuth discovery URI \"%s\" uses an invalid format",
+									wkuri);
+			return NULL;
+		}
+	}
+
+	/* Checks passed! Now build the issuer. */
+	issuer = strdup(wkuri);
+	if (!issuer)
+	{
+		libpq_append_conn_error(conn, "out of memory");
+		return NULL;
+	}
+
+	/*
+	 * The .well-known components are from [wk_start, wk_end). Remove those to
+	 * form the issuer ID, by shifting the path suffix (which may be empty)
+	 * leftwards.
+	 */
+	start_offset = wk_start - wkuri;
+	end_offset = wk_end - wkuri;
+	end_len = strlen(wk_end) + 1;	/* move the NUL terminator too */
+
+	memmove(issuer + start_offset, issuer + end_offset, end_len);
+
+	return issuer;
+}
+
+/*
+ * Parses the server error result (RFC 7628, Sec. 3.2.2) contained in msg and
+ * stores any discovered openid_configuration and scope settings for the
+ * connection.
+ */
+static bool
+handle_oauth_sasl_error(PGconn *conn, const char *msg, int msglen)
+{
+	JsonLexContext lex = {0};
+	JsonSemAction sem = {0};
+	JsonParseErrorType err;
+	struct json_ctx ctx = {0};
+	char	   *errmsg = NULL;
+	bool		success = false;
+
+	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+
+	/* Sanity check. */
+	if (strlen(msg) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error message contained an embedded NULL, and was discarded");
+		return false;
+	}
+
+	/*
+	 * pg_parse_json doesn't validate the incoming UTF-8, so we have to check
+	 * that up front.
+	 */
+	if (pg_encoding_verifymbstr(PG_UTF8, msg, msglen) != msglen)
+	{
+		libpq_append_conn_error(conn,
+								"server's error response is not valid UTF-8");
+		return false;
+	}
+
+	makeJsonLexContextCstringLen(&lex, msg, msglen, PG_UTF8, true);
+	setJsonLexContextOwnsTokens(&lex, true);	/* must not leak on error */
+
+	initPQExpBuffer(&ctx.errbuf);
+	sem.semstate = &ctx;
+
+	sem.object_start = oauth_json_object_start;
+	sem.object_end = oauth_json_object_end;
+	sem.object_field_start = oauth_json_object_field_start;
+	sem.array_start = oauth_json_array_start;
+	sem.scalar = oauth_json_scalar;
+
+	err = pg_parse_json(&lex, &sem);
+
+	if (err == JSON_SEM_ACTION_FAILED)
+	{
+		if (PQExpBufferDataBroken(ctx.errbuf))
+			errmsg = libpq_gettext("out of memory");
+		else if (ctx.errmsg)
+			errmsg = ctx.errmsg;
+		else
+		{
+			/*
+			 * Developer error: one of the action callbacks didn't call
+			 * oauth_json_set_error() before erroring out.
+			 */
+			Assert(oauth_json_has_error(&ctx));
+			errmsg = "<unexpected empty error>";
+		}
+	}
+	else if (err != JSON_SUCCESS)
+		errmsg = json_errdetail(err, &lex);
+
+	if (errmsg)
+		libpq_append_conn_error(conn,
+								"failed to parse server's error response: %s",
+								errmsg);
+
+	/* Don't need the error buffer or the JSON lexer anymore. */
+	termPQExpBuffer(&ctx.errbuf);
+	freeJsonLexContext(&lex);
+
+	if (errmsg)
+		goto cleanup;
+
+	if (ctx.discovery_uri)
+	{
+		char	   *discovery_issuer;
+
+		/*
+		 * The URI MUST correspond to our existing issuer, to avoid mix-ups.
+		 *
+		 * Issuer comparison is done byte-wise, rather than performing any URL
+		 * normalization; this follows the suggestions for issuer comparison
+		 * in RFC 9207 Sec. 2.4 (which requires simple string comparison) and
+		 * vastly simplifies things. Since this is the key protection against
+		 * a rogue server sending the client to an untrustworthy location,
+		 * simpler is better.
+		 */
+		discovery_issuer = issuer_from_well_known_uri(conn, ctx.discovery_uri);
+		if (!discovery_issuer)
+			goto cleanup;		/* error message already set */
+
+		if (strcmp(conn->oauth_issuer_id, discovery_issuer) != 0)
+		{
+			libpq_append_conn_error(conn,
+									"server's discovery document at %s (issuer \"%s\") is incompatible with oauth_issuer (%s)",
+									ctx.discovery_uri, discovery_issuer,
+									conn->oauth_issuer_id);
+
+			free(discovery_issuer);
+			goto cleanup;
+		}
+
+		free(discovery_issuer);
+
+		if (!conn->oauth_discovery_uri)
+		{
+			conn->oauth_discovery_uri = ctx.discovery_uri;
+			ctx.discovery_uri = NULL;
+		}
+		else
+		{
+			/* This must match the URI we'd previously determined. */
+			if (strcmp(conn->oauth_discovery_uri, ctx.discovery_uri) != 0)
+			{
+				libpq_append_conn_error(conn,
+										"server's discovery document has moved to %s (previous location was %s)",
+										ctx.discovery_uri,
+										conn->oauth_discovery_uri);
+				goto cleanup;
+			}
+		}
+	}
+
+	if (ctx.scope)
+	{
+		/* Servers may not override a previously set oauth_scope. */
+		if (!conn->oauth_scope)
+		{
+			conn->oauth_scope = ctx.scope;
+			ctx.scope = NULL;
+		}
+	}
+
+	if (!ctx.status)
+	{
+		libpq_append_conn_error(conn,
+								"server sent error response without a status");
+		goto cleanup;
+	}
+
+	if (strcmp(ctx.status, "invalid_token") != 0)
+	{
+		/*
+		 * invalid_token is the only error code we'll automatically retry for;
+		 * otherwise, just bail out now.
+		 */
+		libpq_append_conn_error(conn,
+								"server rejected OAuth bearer token: %s",
+								ctx.status);
+		goto cleanup;
+	}
+
+	success = true;
+
+cleanup:
+	free(ctx.status);
+	free(ctx.scope);
+	free(ctx.discovery_uri);
+
+	return success;
+}
+
+/*
+ * Callback implementation of conn->async_auth() for a user-defined OAuth flow.
+ * Delegates the retrieval of the token to the application's async callback.
+ *
+ * This will be called multiple times as needed; the application is responsible
+ * for setting an altsock for libpq to monitor and for returning the correct
+ * PGRES_POLLING_* statuses for use by PQconnectPoll().
+ */
+static PostgresPollingStatusType
+run_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+	PostgresPollingStatusType status;
+
+	if (!request->async)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow provided neither a token nor an async callback");
+		return PGRES_POLLING_FAILED;
+	}
+
+	status = request->async(conn, request, &conn->altsock);
+	if (status == PGRES_POLLING_FAILED)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		return status;
+	}
+	else if (status == PGRES_POLLING_OK)
+	{
+		/*
+		 * We already have a token, so copy it into the conn. (We can't hold
+		 * onto the original string, since it may not be safe for us to free()
+		 * it.)
+		 */
+		if (!request->token)
+		{
+			libpq_append_conn_error(conn,
+									"user-defined OAuth flow did not provide a token");
+			return PGRES_POLLING_FAILED;
+		}
+
+		conn->oauth_token = strdup(request->token);
+		if (!conn->oauth_token)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return PGRES_POLLING_FAILED;
+		}
+
+		return PGRES_POLLING_OK;
+	}
+
+	/* The hook wants the client to poll the altsock. Make sure it set one. */
+	if (conn->altsock == PGINVALID_SOCKET)
+	{
+		libpq_append_conn_error(conn,
+								"user-defined OAuth flow did not provide a socket for polling");
+		return PGRES_POLLING_FAILED;
+	}
+
+	return status;
+}
+
+/*
+ * Cleanup callback for the async user flow. Delegates most of its job to the
+ * user-provided cleanup implementation, then disconnects the altsock.
+ */
+static void
+cleanup_user_oauth_flow(PGconn *conn)
+{
+	fe_oauth_state *state = conn->sasl_state;
+	PGoauthBearerRequest *request = state->async_ctx;
+
+	Assert(request);
+
+	if (request->cleanup)
+		request->cleanup(conn, request);
+	conn->altsock = PGINVALID_SOCKET;
+
+	free(request);
+	state->async_ctx = NULL;
+}
+
+/*
+ * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
+ * token for presentation to the server.
+ *
+ * If the application has registered a custom flow handler using
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN, it may either return a token immediately (e.g.
+ * if it has one cached for immediate use), or set up a series of asynchronous
+ * callbacks which will be managed by run_user_oauth_flow().
+ *
+ * If the default handler is used instead, a Device Authorization flow is used
+ * for the connection if support has been compiled in. (See
+ * fe-auth-oauth-curl.c for implementation details.)
+ *
+ * If neither a custom handler nor the builtin flow is available, the connection
+ * fails here.
+ */
+static bool
+setup_token_request(PGconn *conn, fe_oauth_state *state)
+{
+	int			res;
+	PGoauthBearerRequest request = {
+		.openid_configuration = conn->oauth_discovery_uri,
+		.scope = conn->oauth_scope,
+	};
+
+	Assert(request.openid_configuration);
+
+	/* The client may have overridden the OAuth flow. */
+	res = PQauthDataHook(PQAUTHDATA_OAUTH_BEARER_TOKEN, conn, &request);
+	if (res > 0)
+	{
+		PGoauthBearerRequest *request_copy;
+
+		if (request.token)
+		{
+			/*
+			 * We already have a token, so copy it into the conn. (We can't
+			 * hold onto the original string, since it may not be safe for us
+			 * to free() it.)
+			 */
+			conn->oauth_token = strdup(request.token);
+			if (!conn->oauth_token)
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				goto fail;
+			}
+
+			/* short-circuit */
+			if (request.cleanup)
+				request.cleanup(conn, &request);
+			return true;
+		}
+
+		request_copy = malloc(sizeof(*request_copy));
+		if (!request_copy)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			goto fail;
+		}
+
+		memcpy(request_copy, &request, sizeof(request));
+
+		conn->async_auth = run_user_oauth_flow;
+		conn->cleanup_async_auth = cleanup_user_oauth_flow;
+		state->async_ctx = request_copy;
+	}
+	else if (res < 0)
+	{
+		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
+		goto fail;
+	}
+	else
+	{
+#ifdef USE_LIBCURL
+		/* Hand off to our built-in OAuth flow. */
+		conn->async_auth = pg_fe_run_oauth_flow;
+		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+#else
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		goto fail;
+
+#endif
+	}
+
+	return true;
+
+fail:
+	if (request.cleanup)
+		request.cleanup(conn, &request);
+	return false;
+}
+
+/*
+ * Fill in our issuer identifier (and discovery URI, if possible) using the
+ * connection parameters. If conn->oauth_discovery_uri can't be populated in
+ * this function, it will be requested from the server.
+ */
+static bool
+setup_oauth_parameters(PGconn *conn)
+{
+	/*
+	 * This is the only function that sets conn->oauth_issuer_id. If a
+	 * previous connection attempt has already computed it, don't overwrite it
+	 * or the discovery URI. (There's no reason for them to change once
+	 * they're set, and handle_oauth_sasl_error() will fail the connection if
+	 * the server attempts to switch them on us later.)
+	 */
+	if (conn->oauth_issuer_id)
+		return true;
+
+	/*---
+	 * To talk to a server, we require the user to provide issuer and client
+	 * identifiers.
+	 *
+	 * While it's possible for an OAuth client to support multiple issuers, it
+	 * requires additional effort to make sure the flows in use are safe -- to
+	 * quote RFC 9207,
+	 *
+	 *     OAuth clients that interact with only one authorization server are
+	 *     not vulnerable to mix-up attacks. However, when such clients decide
+	 *     to add support for a second authorization server in the future, they
+	 *     become vulnerable and need to apply countermeasures to mix-up
+	 *     attacks.
+	 *
+	 * For now, we allow only one.
+	 */
+	if (!conn->oauth_issuer || !conn->oauth_client_id)
+	{
+		libpq_append_conn_error(conn,
+								"server requires OAuth authentication, but oauth_issuer and oauth_client_id are not both set");
+		return false;
+	}
+
+	/*
+	 * oauth_issuer is interpreted differently if it's a well-known discovery
+	 * URI rather than just an issuer identifier.
+	 */
+	if (strstr(conn->oauth_issuer, WK_PREFIX) != NULL)
+	{
+		/*
+		 * Convert the URI back to an issuer identifier. (This also performs
+		 * validation of the URI format.)
+		 */
+		conn->oauth_issuer_id = issuer_from_well_known_uri(conn,
+														   conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+			return false;		/* error message already set */
+
+		conn->oauth_discovery_uri = strdup(conn->oauth_issuer);
+		if (!conn->oauth_discovery_uri)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+	else
+	{
+		/*
+		 * Treat oauth_issuer as an issuer identifier. We'll ask the server
+		 * for the discovery URI.
+		 */
+		conn->oauth_issuer_id = strdup(conn->oauth_issuer);
+		if (!conn->oauth_issuer_id)
+		{
+			libpq_append_conn_error(conn, "out of memory");
+			return false;
+		}
+	}
+
+	return true;
+}
+
+/*
+ * Implements the OAUTHBEARER SASL exchange (RFC 7628, Sec. 3.2).
+ *
+ * If the necessary OAuth parameters are set up on the connection, this will run
+ * the client flow asynchronously and present the resulting token to the server.
+ * Otherwise, an empty discovery response will be sent and any parameters sent
+ * back by the server will be stored for a second attempt.
+ *
+ * For a full description of the API, see libpq/sasl.h.
+ */
+static SASLStatus
+oauth_exchange(void *opaq, bool final,
+			   char *input, int inputlen,
+			   char **output, int *outputlen)
+{
+	fe_oauth_state *state = opaq;
+	PGconn	   *conn = state->conn;
+	bool		discover = false;
+
+	*output = NULL;
+	*outputlen = 0;
+
+	switch (state->step)
+	{
+		case FE_OAUTH_INIT:
+			/* We begin in the initial response phase. */
+			Assert(inputlen == -1);
+
+			if (!setup_oauth_parameters(conn))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * A previous connection already fetched the token; we'll use
+				 * it below.
+				 */
+			}
+			else if (conn->oauth_discovery_uri)
+			{
+				/*
+				 * We don't have a token, but we have a discovery URI already
+				 * stored. Decide whether we're using a user-provided OAuth
+				 * flow or the one we have built in.
+				 */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A really smart user implementation may have already
+					 * given us the token (e.g. if there was an unexpired copy
+					 * already cached), and we can use it immediately.
+					 */
+				}
+				else
+				{
+					/*
+					 * Otherwise, we'll have to hand the connection over to
+					 * our OAuth implementation.
+					 *
+					 * This could take a while, since it generally involves a
+					 * user in the loop. To avoid consuming the server's
+					 * authentication timeout, we'll continue this handshake
+					 * to the end, so that the server can close its side of
+					 * the connection. We'll open a second connection later
+					 * once we've retrieved a token.
+					 */
+					discover = true;
+				}
+			}
+			else
+			{
+				/*
+				 * If we don't have a token, and we don't have a discovery URI
+				 * to be able to request a token, we ask the server for one
+				 * explicitly.
+				 */
+				discover = true;
+			}
+
+			/*
+			 * Generate an initial response. This either contains a token, if
+			 * we have one, or an empty discovery response which is doomed to
+			 * fail.
+			 */
+			*output = client_initial_response(conn, discover);
+			if (!*output)
+				return SASL_FAILED;
+
+			*outputlen = strlen(*output);
+			state->step = FE_OAUTH_BEARER_SENT;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * For the purposes of require_auth, our side of
+				 * authentication is done at this point; the server will
+				 * either accept the connection or send an error. Unlike
+				 * SCRAM, there is no additional server data to check upon
+				 * success.
+				 */
+				conn->client_finished_auth = true;
+			}
+
+			return SASL_CONTINUE;
+
+		case FE_OAUTH_BEARER_SENT:
+			if (final)
+			{
+				/*
+				 * OAUTHBEARER does not make use of additional data with a
+				 * successful SASL exchange, so we shouldn't get an
+				 * AuthenticationSASLFinal message.
+				 */
+				libpq_append_conn_error(conn,
+										"server sent unexpected additional OAuth data");
+				return SASL_FAILED;
+			}
+
+			/*
+			 * An error message was sent by the server. Respond with the
+			 * required dummy message (RFC 7628, Sec. 3.2.3).
+			 */
+			*output = strdup(kvsep);
+			if (unlikely(!*output))
+			{
+				libpq_append_conn_error(conn, "out of memory");
+				return SASL_FAILED;
+			}
+			*outputlen = strlen(*output);	/* == 1 */
+
+			/* Grab the settings from discovery. */
+			if (!handle_oauth_sasl_error(conn, input, inputlen))
+				return SASL_FAILED;
+
+			if (conn->oauth_token)
+			{
+				/*
+				 * The server rejected our token. Continue onwards towards the
+				 * expected FATAL message, but mark our state to catch any
+				 * unexpected "success" from the server.
+				 */
+				state->step = FE_OAUTH_SERVER_ERROR;
+				return SASL_CONTINUE;
+			}
+
+			if (!conn->async_auth)
+			{
+				/*
+				 * No OAuth flow is set up yet. Did we get enough information
+				 * from the server to create one?
+				 */
+				if (!conn->oauth_discovery_uri)
+				{
+					libpq_append_conn_error(conn,
+											"server requires OAuth authentication, but no discovery metadata was provided");
+					return SASL_FAILED;
+				}
+
+				/* Yes. Set up the flow now. */
+				if (!setup_token_request(conn, state))
+					return SASL_FAILED;
+
+				if (conn->oauth_token)
+				{
+					/*
+					 * A token was available in a custom flow's cache. Skip
+					 * the asynchronous processing.
+					 */
+					goto reconnect;
+				}
+			}
+
+			/*
+			 * Time to retrieve a token. This involves a number of HTTP
+			 * connections and timed waits, so we escape the synchronous auth
+			 * processing and tell PQconnectPoll to transfer control to our
+			 * async implementation.
+			 */
+			Assert(conn->async_auth);	/* should have been set already */
+			state->step = FE_OAUTH_REQUESTING_TOKEN;
+			return SASL_ASYNC;
+
+		case FE_OAUTH_REQUESTING_TOKEN:
+
+			/*
+			 * We've returned successfully from token retrieval. Double-check
+			 * that we have what we need for the next connection.
+			 */
+			if (!conn->oauth_token)
+			{
+				Assert(false);	/* should have failed before this point! */
+				libpq_append_conn_error(conn,
+										"internal error: OAuth flow did not set a token");
+				return SASL_FAILED;
+			}
+
+			goto reconnect;
+
+		case FE_OAUTH_SERVER_ERROR:
+
+			/*
+			 * After an error, the server should send an error response to
+			 * fail the SASL handshake, which is handled in higher layers.
+			 *
+			 * If we get here, the server either sent *another* challenge
+			 * which isn't defined in the RFC, or completed the handshake
+			 * successfully after telling us it was going to fail. Neither is
+			 * acceptable.
+			 */
+			libpq_append_conn_error(conn,
+									"server sent additional OAuth data after error");
+			return SASL_FAILED;
+
+		default:
+			libpq_append_conn_error(conn, "invalid OAuth exchange state");
+			break;
+	}
+
+	Assert(false);				/* should never get here */
+	return SASL_FAILED;
+
+reconnect:
+
+	/*
+	 * Despite being a failure from the point of view of SASL, we have enough
+	 * information to restart with a new connection.
+	 */
+	libpq_append_conn_error(conn, "retrying connection with new bearer token");
+	conn->oauth_want_retry = true;
+	return SASL_FAILED;
+}
+
+static bool
+oauth_channel_bound(void *opaq)
+{
+	/* This mechanism does not support channel binding. */
+	return false;
+}
+
+/*
+ * Fully clears out any stored OAuth token. This is done proactively upon
+ * successful connection as well as during pqClosePGconn().
+ */
+void
+pqClearOAuthToken(PGconn *conn)
+{
+	if (!conn->oauth_token)
+		return;
+
+	explicit_bzero(conn->oauth_token, strlen(conn->oauth_token));
+	free(conn->oauth_token);
+	conn->oauth_token = NULL;
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
new file mode 100644
index 00000000000..3f1a7503a01
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth.h
+ *
+ *	  Definitions for OAuth authentication implementations
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq/fe-auth-oauth.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_H
+#define FE_AUTH_OAUTH_H
+
+#include "libpq-fe.h"
+#include "libpq-int.h"
+
+
+enum fe_oauth_step
+{
+	FE_OAUTH_INIT,
+	FE_OAUTH_BEARER_SENT,
+	FE_OAUTH_REQUESTING_TOKEN,
+	FE_OAUTH_SERVER_ERROR,
+};
+
+typedef struct
+{
+	enum fe_oauth_step step;
+
+	PGconn	   *conn;
+	void	   *async_ctx;
+} fe_oauth_state;
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+extern void pqClearOAuthToken(PGconn *conn);
+extern bool oauth_unsafe_debugging_enabled(void);
+
+/* Mechanisms in fe-auth-oauth.c */
+extern const pg_fe_sasl_mech pg_oauth_mech;
+
+#endif							/* FE_AUTH_OAUTH_H */
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 761ee8f88f7..ec7a9236044 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
@@ -40,9 +40,11 @@
 #endif
 
 #include "common/md5.h"
+#include "common/oauth-common.h"
 #include "common/scram-common.h"
 #include "fe-auth.h"
 #include "fe-auth-sasl.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 
 #ifdef ENABLE_GSS
@@ -535,6 +537,13 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 			conn->sasl = &pg_scram_mech;
 			conn->password_needed = true;
 		}
+		else if (strcmp(mechanism_buf.data, OAUTHBEARER_NAME) == 0 &&
+				 !selected_mechanism)
+		{
+			selected_mechanism = OAUTHBEARER_NAME;
+			conn->sasl = &pg_oauth_mech;
+			conn->password_needed = false;
+		}
 	}
 
 	if (!selected_mechanism)
@@ -559,13 +568,6 @@ pg_SASL_init(PGconn *conn, int payloadlen, bool *async)
 
 		if (!allowed)
 		{
-			/*
-			 * TODO: this is dead code until a second SASL mechanism is added;
-			 * the connection can't have proceeded past check_expected_areq()
-			 * if no SASL methods are allowed.
-			 */
-			Assert(false);
-
 			libpq_append_conn_error(conn, "authentication method requirement \"%s\" failed: server requested %s authentication",
 									conn->require_auth, selected_mechanism);
 			goto error;
@@ -1580,3 +1582,23 @@ PQchangePassword(PGconn *conn, const char *user, const char *passwd)
 		}
 	}
 }
+
+PQauthDataHook_type PQauthDataHook = PQdefaultAuthDataHook;
+
+PQauthDataHook_type
+PQgetAuthDataHook(void)
+{
+	return PQauthDataHook;
+}
+
+void
+PQsetAuthDataHook(PQauthDataHook_type hook)
+{
+	PQauthDataHook = hook ? hook : PQdefaultAuthDataHook;
+}
+
+int
+PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data)
+{
+	return 0;					/* handle nothing */
+}
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index 1d4991f8996..de98e0d20c4 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,6 +18,9 @@
 #include "libpq-int.h"
 
 
+extern PQauthDataHook_type PQauthDataHook;
+
+
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 85d1ca2864f..d5051f5e820 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -28,6 +28,7 @@
 #include "common/scram-common.h"
 #include "common/string.h"
 #include "fe-auth.h"
+#include "fe-auth-oauth.h"
 #include "libpq-fe.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
@@ -373,6 +374,23 @@ static const internalPQconninfoOption PQconninfoOptions[] = {
 	{"scram_server_key", NULL, NULL, NULL, "SCRAM-Server-Key", "D", SCRAM_MAX_KEY_LEN * 2,
 	offsetof(struct pg_conn, scram_server_key)},
 
+	/* OAuth v2 */
+	{"oauth_issuer", NULL, NULL, NULL,
+		"OAuth-Issuer", "", 40,
+	offsetof(struct pg_conn, oauth_issuer)},
+
+	{"oauth_client_id", NULL, NULL, NULL,
+		"OAuth-Client-ID", "", 40,
+	offsetof(struct pg_conn, oauth_client_id)},
+
+	{"oauth_client_secret", NULL, NULL, NULL,
+		"OAuth-Client-Secret", "", 40,
+	offsetof(struct pg_conn, oauth_client_secret)},
+
+	{"oauth_scope", NULL, NULL, NULL,
+		"OAuth-Scope", "", 15,
+	offsetof(struct pg_conn, oauth_scope)},
+
 	/* Terminating entry --- MUST BE LAST */
 	{NULL, NULL, NULL, NULL,
 	NULL, NULL, 0}
@@ -399,6 +417,7 @@ static const PQEnvironmentOption EnvironmentOptions[] =
 static const pg_fe_sasl_mech *supported_sasl_mechs[] =
 {
 	&pg_scram_mech,
+	&pg_oauth_mech,
 };
 #define SASL_MECHANISM_COUNT lengthof(supported_sasl_mechs)
 
@@ -655,6 +674,7 @@ pqDropServerData(PGconn *conn)
 	conn->write_failed = false;
 	free(conn->write_err_msg);
 	conn->write_err_msg = NULL;
+	conn->oauth_want_retry = false;
 
 	/*
 	 * Cancel connections need to retain their be_pid and be_key across
@@ -1144,7 +1164,7 @@ static inline void
 fill_allowed_sasl_mechs(PGconn *conn)
 {
 	/*---
-	 * We only support one mechanism at the moment, so rather than deal with a
+	 * We only support two mechanisms at the moment, so rather than deal with a
 	 * linked list, conn->allowed_sasl_mechs is an array of static length. We
 	 * rely on the compile-time assertion here to keep us honest.
 	 *
@@ -1519,6 +1539,10 @@ pqConnectOptions2(PGconn *conn)
 			{
 				mech = &pg_scram_mech;
 			}
+			else if (strcmp(method, "oauth") == 0)
+			{
+				mech = &pg_oauth_mech;
+			}
 
 			/*
 			 * Final group: meta-options.
@@ -4111,7 +4135,19 @@ keep_going:						/* We will come back to here until there is
 				conn->inStart = conn->inCursor;
 
 				if (res != STATUS_OK)
+				{
+					/*
+					 * OAuth connections may perform two-step discovery, where
+					 * the first connection is a dummy.
+					 */
+					if (conn->sasl == &pg_oauth_mech && conn->oauth_want_retry)
+					{
+						need_new_connection = true;
+						goto keep_going;
+					}
+
 					goto error_return;
+				}
 
 				/*
 				 * Just make sure that any data sent by pg_fe_sendauth is
@@ -4390,6 +4426,9 @@ keep_going:						/* We will come back to here until there is
 					}
 				}
 
+				/* Don't hold onto any OAuth tokens longer than necessary. */
+				pqClearOAuthToken(conn);
+
 				/*
 				 * For non cancel requests we can release the address list
 				 * now. For cancel requests we never actually resolve
@@ -5002,6 +5041,12 @@ freePGconn(PGconn *conn)
 	free(conn->load_balance_hosts);
 	free(conn->scram_client_key);
 	free(conn->scram_server_key);
+	free(conn->oauth_issuer);
+	free(conn->oauth_issuer_id);
+	free(conn->oauth_discovery_uri);
+	free(conn->oauth_client_id);
+	free(conn->oauth_client_secret);
+	free(conn->oauth_scope);
 	termPQExpBuffer(&conn->errorMessage);
 	termPQExpBuffer(&conn->workBuffer);
 
@@ -5155,6 +5200,7 @@ pqClosePGconn(PGconn *conn)
 	conn->asyncStatus = PGASYNC_IDLE;
 	conn->xactStatus = PQTRANS_IDLE;
 	conn->pipelineStatus = PQ_PIPELINE_OFF;
+	pqClearOAuthToken(conn);
 	pqClearAsyncResult(conn);	/* deallocate result */
 	pqClearConnErrorState(conn);
 
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index a3491faf0c3..b7399dee58e 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -59,6 +59,8 @@ extern "C"
 /* Features added in PostgreSQL v18: */
 /* Indicates presence of PQfullProtocolVersion */
 #define LIBPQ_HAS_FULL_PROTOCOL_VERSION 1
+/* Indicates presence of the PQAUTHDATA_PROMPT_OAUTH_DEVICE authdata hook */
+#define LIBPQ_HAS_PROMPT_OAUTH_DEVICE 1
 
 /*
  * Option flags for PQcopyResult
@@ -186,6 +188,13 @@ typedef enum
 	PQ_PIPELINE_ABORTED
 } PGpipelineStatus;
 
+typedef enum
+{
+	PQAUTHDATA_PROMPT_OAUTH_DEVICE, /* user must visit a device-authorization
+									 * URL */
+	PQAUTHDATA_OAUTH_BEARER_TOKEN,	/* server requests an OAuth Bearer token */
+} PGauthData;
+
 /* PGconn encapsulates a connection to the backend.
  * The contents of this struct are not supposed to be known to applications.
  */
@@ -720,10 +729,86 @@ extern int	PQenv2encoding(void);
 
 /* === in fe-auth.c === */
 
+typedef struct _PGpromptOAuthDevice
+{
+	const char *verification_uri;	/* verification URI to visit */
+	const char *user_code;		/* user code to enter */
+	const char *verification_uri_complete;	/* optional combination of URI and
+											 * code, or NULL */
+	int			expires_in;		/* seconds until user code expires */
+} PGpromptOAuthDevice;
+
+/* for PGoauthBearerRequest.async() */
+#ifdef _WIN32
+#define SOCKTYPE uintptr_t		/* avoids depending on winsock2.h for SOCKET */
+#else
+#define SOCKTYPE int
+#endif
+
+typedef struct _PGoauthBearerRequest
+{
+	/* Hook inputs (constant across all calls) */
+	const char *const openid_configuration; /* OIDC discovery URI */
+	const char *const scope;	/* required scope(s), or NULL */
+
+	/* Hook outputs */
+
+	/*---------
+	 * Callback implementing a custom asynchronous OAuth flow.
+	 *
+	 * The callback may return
+	 * - PGRES_POLLING_READING/WRITING, to indicate that a socket descriptor
+	 *   has been stored in *altsock and libpq should wait until it is
+	 *   readable or writable before calling back;
+	 * - PGRES_POLLING_OK, to indicate that the flow is complete and
+	 *   request->token has been set; or
+	 * - PGRES_POLLING_FAILED, to indicate that token retrieval has failed.
+	 *
+	 * This callback is optional. If the token can be obtained without
+	 * blocking during the original call to the PQAUTHDATA_OAUTH_BEARER_TOKEN
+	 * hook, it may be returned directly, but one of request->async or
+	 * request->token must be set by the hook.
+	 */
+	PostgresPollingStatusType (*async) (PGconn *conn,
+										struct _PGoauthBearerRequest *request,
+										SOCKTYPE * altsock);
+
+	/*
+	 * Callback to clean up custom allocations. A hook implementation may use
+	 * this to free request->token and any resources in request->user.
+	 *
+	 * This is technically optional, but highly recommended, because there is
+	 * no other indication as to when it is safe to free the token.
+	 */
+	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+
+	/*
+	 * The hook should set this to the Bearer token contents for the
+	 * connection, once the flow is completed.  The token contents must remain
+	 * available to libpq until the hook's cleanup callback is called.
+	 */
+	char	   *token;
+
+	/*
+	 * Hook-defined data. libpq will not modify this pointer across calls to
+	 * the async callback, so it can be used to keep track of
+	 * application-specific state. Resources allocated here should be freed by
+	 * the cleanup callback.
+	 */
+	void	   *user;
+} PGoauthBearerRequest;
+
+#undef SOCKTYPE
+
 extern char *PQencryptPassword(const char *passwd, const char *user);
 extern char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
 extern PGresult *PQchangePassword(PGconn *conn, const char *user, const char *passwd);
 
+typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
+extern void PQsetAuthDataHook(PQauthDataHook_type hook);
+extern PQauthDataHook_type PQgetAuthDataHook(void);
+extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+
 /* === in encnames.c === */
 
 extern int	pg_char_to_encoding(const char *name);
diff --git a/src/interfaces/libpq/libpq-int.h b/src/interfaces/libpq/libpq-int.h
index 2546f9f8a50..f36f7f19d58 100644
--- a/src/interfaces/libpq/libpq-int.h
+++ b/src/interfaces/libpq/libpq-int.h
@@ -437,6 +437,17 @@ struct pg_conn
 								 * cancel request, instead of being a normal
 								 * connection that's used for queries */
 
+	/* OAuth v2 */
+	char	   *oauth_issuer;	/* token issuer/URL */
+	char	   *oauth_issuer_id;	/* token issuer identifier */
+	char	   *oauth_discovery_uri;	/* URI of the issuer's discovery
+										 * document */
+	char	   *oauth_client_id;	/* client identifier */
+	char	   *oauth_client_secret;	/* client secret */
+	char	   *oauth_scope;	/* access token scope */
+	char	   *oauth_token;	/* access token */
+	bool		oauth_want_retry;	/* should we retry on failure? */
+
 	/* Optional file to write trace info to */
 	FILE	   *Pfdebug;
 	int			traceFlags;
@@ -505,7 +516,7 @@ struct pg_conn
 								 * the server? */
 	uint32		allowed_auth_methods;	/* bitmask of acceptable AuthRequest
 										 * codes */
-	const pg_fe_sasl_mech *allowed_sasl_mechs[1];	/* and acceptable SASL
+	const pg_fe_sasl_mech *allowed_sasl_mechs[2];	/* and acceptable SASL
 													 * mechanisms */
 	bool		client_finished_auth;	/* have we finished our half of the
 										 * authentication exchange? */
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index dd64d291b3e..19f4a52a97a 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -1,6 +1,7 @@
 # Copyright (c) 2022-2025, PostgreSQL Global Development Group
 
 libpq_sources = files(
+  'fe-auth-oauth.c',
   'fe-auth-scram.c',
   'fe-auth.c',
   'fe-cancel.c',
@@ -37,6 +38,10 @@ if gssapi.found()
   )
 endif
 
+if libcurl.found()
+  libpq_sources += files('fe-auth-oauth-curl.c')
+endif
+
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index d49b2079a44..60e13d50235 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -229,6 +229,7 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
+  'libcurl': libcurl,
   'libxml': libxml,
   'libxslt': libxslt,
   'llvm': llvm,
diff --git a/src/test/authentication/t/001_password.pl b/src/test/authentication/t/001_password.pl
index 1357f806b6f..4ce22ccbdf2 100644
--- a/src/test/authentication/t/001_password.pl
+++ b/src/test/authentication/t/001_password.pl
@@ -404,11 +404,11 @@ $node->connect_fails(
 $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"SCRAM authentication forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
-	expected_stderr => qr/server requested SASL authentication/);
+	expected_stderr => qr/server requested SCRAM-SHA-256 authentication/);
 
 # Test that bad passwords are rejected.
 $ENV{"PGPASSWORD"} = 'badpass';
@@ -465,13 +465,13 @@ $node->connect_fails(
 	"user=scram_role require_auth=!scram-sha-256",
 	"password authentication forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 $node->connect_fails(
 	"user=scram_role require_auth=!password,!md5,!scram-sha-256",
 	"multiple authentication types forbidden, fails with SCRAM auth",
 	expected_stderr =>
-	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SASL authentication/
+	  qr/authentication method requirement "!password,!md5,!scram-sha-256" failed: server requested SCRAM-SHA-256 authentication/
 );
 
 # Test SYSTEM_USER <> NULL with parallel workers.
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 89e78b7d114..4e4be3fa511 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -11,6 +11,7 @@ SUBDIRS = \
 		  dummy_index_am \
 		  dummy_seclabel \
 		  libpq_pipeline \
+		  oauth_validator \
 		  plsample \
 		  spgist_name_ops \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index a57077b682e..2b057451473 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -9,6 +9,7 @@ subdir('gin')
 subdir('injection_points')
 subdir('ldap_password_func')
 subdir('libpq_pipeline')
+subdir('oauth_validator')
 subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
new file mode 100644
index 00000000000..5dcb3ff9723
--- /dev/null
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -0,0 +1,4 @@
+# Generated subdirectories
+/log/
+/results/
+/tmp_check/
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
new file mode 100644
index 00000000000..05b9f06ed73
--- /dev/null
+++ b/src/test/modules/oauth_validator/Makefile
@@ -0,0 +1,40 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for src/test/modules/oauth_validator
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/test/modules/oauth_validator/Makefile
+#
+#-------------------------------------------------------------------------
+
+MODULES = validator fail_validator magic_validator
+PGFILEDESC = "validator - test OAuth validator module"
+
+PROGRAM = oauth_hook_client
+PGAPPICON = win32
+OBJS = $(WIN32RES) oauth_hook_client.o
+
+PG_CPPFLAGS = -I$(libpq_srcdir)
+PG_LIBS_INTERNAL += $(libpq_pgport)
+
+NO_INSTALLCHECK = 1
+
+TAP_TESTS = 1
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/oauth_validator
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+
+export PYTHON
+export with_libcurl
+export with_python
+
+endif
diff --git a/src/test/modules/oauth_validator/README b/src/test/modules/oauth_validator/README
new file mode 100644
index 00000000000..54eac5b117e
--- /dev/null
+++ b/src/test/modules/oauth_validator/README
@@ -0,0 +1,13 @@
+Test programs and libraries for OAuth
+-------------------------------------
+
+This folder contains tests for the client- and server-side OAuth
+implementations. Most tests run end-to-end, exercising both sides at once. The
+tests in t/001_server use a mock OAuth authorization server, implemented jointly
+by t/OAuth/Server.pm and t/oauth_server.py, to run the libpq Device
+Authorization flow. The tests in t/002_client exercise custom OAuth flows and
+don't need an authorization server.
+
+Tests in this folder require 'oauth' to be present in PG_TEST_EXTRA, since
+HTTP servers listening on localhost with TCP/IP sockets will be started. A
+Python installation is required to run the mock authorization server.
diff --git a/src/test/modules/oauth_validator/fail_validator.c b/src/test/modules/oauth_validator/fail_validator.c
new file mode 100644
index 00000000000..a4c7a4451d3
--- /dev/null
+++ b/src/test/modules/oauth_validator/fail_validator.c
@@ -0,0 +1,47 @@
+/*-------------------------------------------------------------------------
+ *
+ * fail_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, whose
+ *	  validation callback is guaranteed to fail
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/fail_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool fail_token(const ValidatorModuleState *state,
+					   const char *token,
+					   const char *role,
+					   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.validate_cb = fail_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+fail_token(const ValidatorModuleState *state,
+		   const char *token, const char *role,
+		   ValidatorModuleResult *res)
+{
+	elog(FATAL, "fail_validator: sentinel error");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/magic_validator.c b/src/test/modules/oauth_validator/magic_validator.c
new file mode 100644
index 00000000000..9dc55b602e3
--- /dev/null
+++ b/src/test/modules/oauth_validator/magic_validator.c
@@ -0,0 +1,48 @@
+/*-------------------------------------------------------------------------
+ *
+ * magic_validator.c
+ *	  Test module for server-side OAuth token validation callbacks, which
+ *	  should fail to load because it uses the wrong PG_OAUTH_VALIDATOR_MAGIC
+ *	  marker and thus declares the wrong ABI version
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/magic_validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+
+PG_MODULE_MAGIC;
+
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (we only need the main one) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	0xdeadbeef,
+
+	.validate_cb = validate_token,
+};
+
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	elog(FATAL, "magic_validator: this should be unreachable");
+	pg_unreachable();
+}
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
new file mode 100644
index 00000000000..36d1b26369f
--- /dev/null
+++ b/src/test/modules/oauth_validator/meson.build
@@ -0,0 +1,85 @@
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+validator_sources = files(
+  'validator.c',
+)
+
+if host_system == 'windows'
+  validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'validator',
+    '--FILEDESC', 'validator - test OAuth validator module',])
+endif
+
+validator = shared_module('validator',
+  validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += validator
+
+fail_validator_sources = files(
+  'fail_validator.c',
+)
+
+if host_system == 'windows'
+  fail_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'fail_validator',
+    '--FILEDESC', 'fail_validator - failing OAuth validator module',])
+endif
+
+fail_validator = shared_module('fail_validator',
+  fail_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += fail_validator
+
+magic_validator_sources = files(
+  'magic_validator.c',
+)
+
+if host_system == 'windows'
+  magic_validator_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'magic_validator',
+    '--FILEDESC', 'magic_validator - ABI incompatible OAuth validator module',])
+endif
+
+magic_validator = shared_module('magic_validator',
+  magic_validator_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += magic_validator
+
+oauth_hook_client_sources = files(
+  'oauth_hook_client.c',
+)
+
+if host_system == 'windows'
+  oauth_hook_client_sources += rc_bin_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'oauth_hook_client',
+    '--FILEDESC', 'oauth_hook_client - test program for libpq OAuth hooks',])
+endif
+
+oauth_hook_client = executable('oauth_hook_client',
+  oauth_hook_client_sources,
+  dependencies: [frontend_code, libpq],
+  kwargs: default_bin_args + {
+    'install': false,
+  },
+)
+testprep_targets += oauth_hook_client
+
+tests += {
+  'name': 'oauth_validator',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'tests': [
+      't/001_server.pl',
+      't/002_client.pl',
+    ],
+    'env': {
+      'PYTHON': python.path(),
+      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_python': 'yes',
+    },
+  },
+}
diff --git a/src/test/modules/oauth_validator/oauth_hook_client.c b/src/test/modules/oauth_validator/oauth_hook_client.c
new file mode 100644
index 00000000000..9f553792c05
--- /dev/null
+++ b/src/test/modules/oauth_validator/oauth_hook_client.c
@@ -0,0 +1,293 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth_hook_client.c
+ *		Test driver for t/002_client.pl, which verifies OAuth hook
+ *		functionality in libpq.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *		src/test/modules/oauth_validator/oauth_hook_client.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <sys/socket.h>
+
+#include "getopt_long.h"
+#include "libpq-fe.h"
+
+static int	handle_auth_data(PGauthData type, PGconn *conn, void *data);
+static PostgresPollingStatusType async_cb(PGconn *conn,
+										  PGoauthBearerRequest *req,
+										  pgsocket *altsock);
+static PostgresPollingStatusType misbehave_cb(PGconn *conn,
+											  PGoauthBearerRequest *req,
+											  pgsocket *altsock);
+
+static void
+usage(char *argv[])
+{
+	printf("usage: %s [flags] CONNINFO\n\n", argv[0]);
+
+	printf("recognized flags:\n");
+	printf(" -h, --help				show this message\n");
+	printf(" --expected-scope SCOPE	fail if received scopes do not match SCOPE\n");
+	printf(" --expected-uri URI		fail if received configuration link does not match URI\n");
+	printf(" --misbehave=MODE		have the hook fail required postconditions\n"
+		   "						(MODEs: no-hook, fail-async, no-token, no-socket)\n");
+	printf(" --no-hook				don't install OAuth hooks\n");
+	printf(" --hang-forever			don't ever return a token (combine with connect_timeout)\n");
+	printf(" --token TOKEN			use the provided TOKEN value\n");
+	printf(" --stress-async			busy-loop on PQconnectPoll rather than polling\n");
+}
+
+/* --options */
+static bool no_hook = false;
+static bool hang_forever = false;
+static bool stress_async = false;
+static const char *expected_uri = NULL;
+static const char *expected_scope = NULL;
+static const char *misbehave_mode = NULL;
+static char *token = NULL;
+
+int
+main(int argc, char *argv[])
+{
+	static const struct option long_options[] = {
+		{"help", no_argument, NULL, 'h'},
+
+		{"expected-scope", required_argument, NULL, 1000},
+		{"expected-uri", required_argument, NULL, 1001},
+		{"no-hook", no_argument, NULL, 1002},
+		{"token", required_argument, NULL, 1003},
+		{"hang-forever", no_argument, NULL, 1004},
+		{"misbehave", required_argument, NULL, 1005},
+		{"stress-async", no_argument, NULL, 1006},
+		{0}
+	};
+
+	const char *conninfo;
+	PGconn	   *conn;
+	int			c;
+
+	while ((c = getopt_long(argc, argv, "h", long_options, NULL)) != -1)
+	{
+		switch (c)
+		{
+			case 'h':
+				usage(argv);
+				return 0;
+
+			case 1000:			/* --expected-scope */
+				expected_scope = optarg;
+				break;
+
+			case 1001:			/* --expected-uri */
+				expected_uri = optarg;
+				break;
+
+			case 1002:			/* --no-hook */
+				no_hook = true;
+				break;
+
+			case 1003:			/* --token */
+				token = optarg;
+				break;
+
+			case 1004:			/* --hang-forever */
+				hang_forever = true;
+				break;
+
+			case 1005:			/* --misbehave */
+				misbehave_mode = optarg;
+				break;
+
+			case 1006:			/* --stress-async */
+				stress_async = true;
+				break;
+
+			default:
+				usage(argv);
+				return 1;
+		}
+	}
+
+	if (argc != optind + 1)
+	{
+		usage(argv);
+		return 1;
+	}
+
+	conninfo = argv[optind];
+
+	/* Set up our OAuth hooks. */
+	PQsetAuthDataHook(handle_auth_data);
+
+	/* Connect. (All the actual work is in the hook.) */
+	if (stress_async)
+	{
+		/*
+		 * Perform an asynchronous connection, busy-looping on PQconnectPoll()
+		 * without actually waiting on socket events. This stresses code paths
+		 * that rely on asynchronous work to be done before continuing with
+		 * the next step in the flow.
+		 */
+		PostgresPollingStatusType res;
+
+		conn = PQconnectStart(conninfo);
+
+		do
+		{
+			res = PQconnectPoll(conn);
+		} while (res != PGRES_POLLING_FAILED && res != PGRES_POLLING_OK);
+	}
+	else
+	{
+		/* Perform a standard synchronous connection. */
+		conn = PQconnectdb(conninfo);
+	}
+
+	if (PQstatus(conn) != CONNECTION_OK)
+	{
+		fprintf(stderr, "connection to database failed: %s\n",
+				PQerrorMessage(conn));
+		PQfinish(conn);
+		return 1;
+	}
+
+	printf("connection succeeded\n");
+	PQfinish(conn);
+	return 0;
+}
+
+/*
+ * PQauthDataHook implementation. Replaces the default client flow by handling
+ * PQAUTHDATA_OAUTH_BEARER_TOKEN.
+ */
+static int
+handle_auth_data(PGauthData type, PGconn *conn, void *data)
+{
+	PGoauthBearerRequest *req = data;
+
+	if (no_hook || (type != PQAUTHDATA_OAUTH_BEARER_TOKEN))
+		return 0;
+
+	if (hang_forever)
+	{
+		/* Start asynchronous processing. */
+		req->async = async_cb;
+		return 1;
+	}
+
+	if (misbehave_mode)
+	{
+		if (strcmp(misbehave_mode, "no-hook") != 0)
+			req->async = misbehave_cb;
+		return 1;
+	}
+
+	if (expected_uri)
+	{
+		if (!req->openid_configuration)
+		{
+			fprintf(stderr, "expected URI \"%s\", got NULL\n", expected_uri);
+			return -1;
+		}
+
+		if (strcmp(expected_uri, req->openid_configuration) != 0)
+		{
+			fprintf(stderr, "expected URI \"%s\", got \"%s\"\n", expected_uri, req->openid_configuration);
+			return -1;
+		}
+	}
+
+	if (expected_scope)
+	{
+		if (!req->scope)
+		{
+			fprintf(stderr, "expected scope \"%s\", got NULL\n", expected_scope);
+			return -1;
+		}
+
+		if (strcmp(expected_scope, req->scope) != 0)
+		{
+			fprintf(stderr, "expected scope \"%s\", got \"%s\"\n", expected_scope, req->scope);
+			return -1;
+		}
+	}
+
+	req->token = token;
+	return 1;
+}
+
+static PostgresPollingStatusType
+async_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (hang_forever)
+	{
+		/*
+		 * This code tests that nothing is interfering with libpq's handling
+		 * of connect_timeout.
+		 */
+		static pgsocket sock = PGINVALID_SOCKET;
+
+		if (sock == PGINVALID_SOCKET)
+		{
+			/* First call. Create an unbound socket to wait on. */
+#ifdef WIN32
+			WSADATA		wsaData;
+			int			err;
+
+			err = WSAStartup(MAKEWORD(2, 2), &wsaData);
+			if (err)
+			{
+				perror("WSAStartup failed");
+				return PGRES_POLLING_FAILED;
+			}
+#endif
+			sock = socket(AF_INET, SOCK_DGRAM, 0);
+			if (sock == PGINVALID_SOCKET)
+			{
+				perror("failed to create datagram socket");
+				return PGRES_POLLING_FAILED;
+			}
+		}
+
+		/* Make libpq wait on the (unreadable) socket. */
+		*altsock = sock;
+		return PGRES_POLLING_READING;
+	}
+
+	req->token = token;
+	return PGRES_POLLING_OK;
+}
+
+static PostgresPollingStatusType
+misbehave_cb(PGconn *conn, PGoauthBearerRequest *req, pgsocket *altsock)
+{
+	if (strcmp(misbehave_mode, "fail-async") == 0)
+	{
+		/* Just fail "normally". */
+		return PGRES_POLLING_FAILED;
+	}
+	else if (strcmp(misbehave_mode, "no-token") == 0)
+	{
+		/* Callbacks must assign req->token before returning OK. */
+		return PGRES_POLLING_OK;
+	}
+	else if (strcmp(misbehave_mode, "no-socket") == 0)
+	{
+		/* Callbacks must assign *altsock before asking for polling. */
+		return PGRES_POLLING_READING;
+	}
+	else
+	{
+		fprintf(stderr, "unrecognized --misbehave mode: %s\n", misbehave_mode);
+		exit(1);
+	}
+}
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
new file mode 100644
index 00000000000..6fa59fbeb25
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -0,0 +1,594 @@
+
+#
+# Tests the libpq builtin OAuth flow, as well as server-side HBA and validator
+# setup.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+use FindBin;
+use lib $FindBin::RealBin;
+
+use OAuth::Server;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+if ($windows_os)
+{
+	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+}
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	plan skip_all => 'client-side OAuth not supported by this build';
+}
+
+if ($ENV{with_python} ne 'yes')
+{
+	plan skip_all => 'OAuth tests require --with-python to run';
+}
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+$node->safe_psql('postgres', 'CREATE USER testalt;');
+$node->safe_psql('postgres', 'CREATE USER testparam;');
+
+# Save a background connection for later configuration changes.
+my $bgconn = $node->background_psql('postgres');
+
+my $webserver = OAuth::Server->new();
+$webserver->run();
+
+END
+{
+	my $exit_code = $?;
+
+	$webserver->stop() if defined $webserver;    # might have been SKIP'd
+
+	$? = $exit_code;
+}
+
+my $port = $webserver->port();
+my $issuer = "http://localhost:$port";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer"       scope="openid postgres"
+local all testalt   oauth issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+local all testparam oauth issuer="$issuer/param" scope="openid postgres"
+});
+$node->reload;
+
+my $log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+# Check pg_hba_file_rules() support.
+my $contents = $bgconn->query_safe(
+	qq(SELECT rule_number, auth_method, options
+		 FROM pg_hba_file_rules
+		 ORDER BY rule_number;));
+is( $contents,
+	qq{1|oauth|\{issuer=$issuer,"scope=openid postgres",validator=validator\}
+2|oauth|\{issuer=$issuer/.well-known/oauth-authorization-server/alternate,"scope=openid postgres alt",validator=validator\}
+3|oauth|\{issuer=$issuer/param,"scope=openid postgres",validator=validator\}},
+	"pg_hba_file_rules recreates OAuth HBA settings");
+
+# To test against HTTP rather than HTTPS, we need to enable PGOAUTHDEBUG. But
+# first, check to make sure the client refuses such connections by default.
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"HTTPS is required without debug mode",
+	expected_stderr =>
+	  qr@OAuth discovery URI "\Q$issuer\E/.well-known/openid-configuration" must use HTTPS@
+);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+my $user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"connect as test",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234", role="$user"/,
+		qr/oauth_validator: issuer="\Q$issuer\E", scope="openid postgres"/,
+		qr/connection authenticated: identity="test" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The /alternate issuer uses slightly different parameters, along with an
+# OAuth-style discovery document.
+$user = "testalt";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer/alternate oauth_client_id=f02c6361-0636",
+	"connect as testalt",
+	expected_stderr =>
+	  qr@Visit https://example\.org/ and enter the code: postgresuser@,
+	log_like => [
+		qr/oauth_validator: token="9243959234-alt", role="$user"/,
+		qr|oauth_validator: issuer="\Q$issuer/.well-known/oauth-authorization-server/alternate\E", scope="openid postgres alt"|,
+		qr/connection authenticated: identity="testalt" method=oauth/,
+		qr/connection authorized/,
+	]);
+
+# The issuer linked by the server must match the client's oauth_issuer setting.
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0636",
+	"oauth_issuer must match discovery",
+	expected_stderr =>
+	  qr@server's discovery document at \Q$issuer/.well-known/oauth-authorization-server/alternate\E \(issuer "\Q$issuer/alternate\E"\) is incompatible with oauth_issuer \(\Q$issuer\E\)@
+);
+
+# Test require_auth settings against OAUTHBEARER.
+my @cases = (
+	{ require_auth => "oauth" },
+	{ require_auth => "oauth,scram-sha-256" },
+	{ require_auth => "password,oauth" },
+	{ require_auth => "none,oauth" },
+	{ require_auth => "!scram-sha-256" },
+	{ require_auth => "!none" },
+
+	{
+		require_auth => "!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "scram-sha-256",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "!password,!oauth",
+		failure => qr/server requested OAUTHBEARER authentication/
+	},
+	{
+		require_auth => "none",
+		failure => qr/server requested SASL authentication/
+	},
+	{
+		require_auth => "!oauth,!scram-sha-256",
+		failure => qr/server requested SASL authentication/
+	});
+
+$user = "test";
+foreach my $c (@cases)
+{
+	my $connstr =
+	  "user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635 require_auth=$c->{'require_auth'}";
+
+	if (defined $c->{'failure'})
+	{
+		$node->connect_fails(
+			$connstr,
+			"require_auth=$c->{'require_auth'} fails",
+			expected_stderr => $c->{'failure'});
+	}
+	else
+	{
+		$node->connect_ok(
+			$connstr,
+			"require_auth=$c->{'require_auth'} succeeds",
+			expected_stderr =>
+			  qr@Visit https://example\.com/ and enter the code: postgresuser@
+		);
+	}
+}
+
+# Make sure the client_id and secret are correctly encoded. $vschars contains
+# every allowed character for a client_id/_secret (the "VSCHAR" class).
+# $vschars_esc is additionally backslash-escaped for inclusion in a
+# single-quoted connection string.
+my $vschars =
+  " !\"#\$%&'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+my $vschars_esc =
+  " !\"#\$%&\\'()*+,-./0123456789:;<=>?\@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\\\]^_`abcdefghijklmnopqrstuvwxyz{|}~";
+
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc'",
+	"escapable characters: client_id",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id='$vschars_esc' oauth_client_secret='$vschars_esc'",
+	"escapable characters: client_id and secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+#
+# Further tests rely on support for specific behaviors in oauth_server.py. To
+# trigger these behaviors, we ask for the special issuer .../param (which is set
+# up in HBA for the testparam user) and encode magic instructions into the
+# oauth_client_id.
+#
+
+my $common_connstr =
+  "user=testparam dbname=postgres oauth_issuer=$issuer/param ";
+my $base_connstr = $common_connstr;
+
+sub connstr
+{
+	my (%params) = @_;
+
+	my $json = encode_json(\%params);
+	my $encoded = encode_base64($json, "");
+
+	return "$base_connstr oauth_client_id=$encoded";
+}
+
+# Make sure the param system works end-to-end first.
+$node->connect_ok(
+	connstr(),
+	"connect to /param",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'token', retries => 1),
+	"token retry",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'token', retries => 2),
+	"token retry (twice)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => 2),
+	"token retry (two second interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'all', retries => 1, interval => JSON::PP::null),
+	"token retry (default interval)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_ok(
+	connstr(stage => 'all', content_type => 'application/json;charset=utf-8'),
+	"content type with charset",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(
+		stage => 'all',
+		content_type => "application/json \t;\t charset=utf-8"),
+	"content type with charset (whitespace)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_ok(
+	connstr(stage => 'device', uri_spelling => "verification_url"),
+	"alternative spelling of verification_uri",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(stage => 'device', huge_response => JSON::PP::true),
+	"bad device authz response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain device authorization: response is too large/);
+$node->connect_fails(
+	connstr(stage => 'token', huge_response => JSON::PP::true),
+	"bad token response: overlarge JSON",
+	expected_stderr =>
+	  qr/failed to obtain access token: response is too large/);
+
+$node->connect_fails(
+	connstr(stage => 'device', content_type => 'text/plain'),
+	"bad device authz response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse device authorization: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'text/plain'),
+	"bad token response: wrong content type",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+$node->connect_fails(
+	connstr(stage => 'token', content_type => 'application/jsonx'),
+	"bad token response: wrong content type (correct prefix)",
+	expected_stderr =>
+	  qr/failed to parse access token response: unexpected content type/);
+
+$node->connect_fails(
+	connstr(
+		stage => 'all',
+		interval => ~0,
+		retries => 1,
+		retry_code => "slow_down"),
+	"bad token response: server overflows the device authz interval",
+	expected_stderr =>
+	  qr/failed to obtain access token: slow_down interval overflow/);
+
+$node->connect_fails(
+	connstr(stage => 'token', error_code => "invalid_grant"),
+	"bad token response: invalid_grant, no description",
+	expected_stderr => qr/failed to obtain access token: \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_grant",
+		error_desc => "grant expired"),
+	"bad token response: expired grant",
+	expected_stderr =>
+	  qr/failed to obtain access token: grant expired \(invalid_grant\)/);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider requires client authentication, and no oauth_client_secret is set \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "authn failure"),
+	"bad token response: client authentication failure, provided description",
+	expected_stderr =>
+	  qr/failed to obtain access token: authn failure \(invalid_client\)/);
+
+$node->connect_fails(
+	connstr(stage => 'token', token => ""),
+	"server rejects access: empty token",
+	expected_stderr => qr/bearer authentication failed/);
+$node->connect_fails(
+	connstr(stage => 'token', token => "****"),
+	"server rejects access: invalid token contents",
+	expected_stderr => qr/bearer authentication failed/);
+
+# Test the behavior of the oauth_client_secret connection parameter.
+$base_connstr = "$common_connstr oauth_client_secret=''";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => ''),
+	"empty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$base_connstr = "$common_connstr oauth_client_secret='$vschars_esc'";
+
+$node->connect_ok(
+	connstr(stage => 'all', expected_secret => $vschars),
+	"nonempty oauth_client_secret",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401),
+	"bad token response: client authentication failure, default description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: provider rejected the oauth_client_secret \(invalid_client\)/
+);
+$node->connect_fails(
+	connstr(
+		stage => 'token',
+		error_code => "invalid_client",
+		error_status => 401,
+		error_desc => "mutual TLS required for client"),
+	"bad token response: client authentication failure, provided description with oauth_client_secret",
+	expected_stderr =>
+	  qr/failed to obtain access token: mutual TLS required for client \(invalid_client\)/
+);
+
+# Stress test: make sure our builtin flow operates correctly even if the client
+# application isn't respecting PGRES_POLLING_READING/WRITING signals returned
+# from PQconnectPoll().
+$base_connstr =
+  "$common_connstr port=" . $node->port . " host=" . $node->host;
+my @cmd = (
+	"oauth_hook_client", "--no-hook", "--stress-async",
+	connstr(stage => 'all', retries => 1, interval => 1));
+
+note "running '" . join("' '", @cmd) . "'";
+my ($stdout, $stderr) = run_command(\@cmd);
+
+like($stdout, qr/connection succeeded/, "stress-async: stdout matches");
+unlike(
+	$stderr,
+	qr/connection to database failed/,
+	"stress-async: stderr matches");
+
+#
+# This section of tests reconfigures the validator module itself, rather than
+# the OAuth server.
+#
+
+# Searching the logs is easier if OAuth parameter discovery isn't cluttering
+# things up; hardcode the discovery URI. (Scope is hardcoded to empty to cover
+# that case as well.)
+$common_connstr =
+  "dbname=postgres oauth_issuer=$issuer/.well-known/openid-configuration oauth_scope='' oauth_client_id=f02c6361-0635";
+
+# Misbehaving validators must fail shut.
+$bgconn->query_safe("ALTER SYSTEM SET oauth_validator.authn_id TO ''");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must set authn_id",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity=""/,
+		qr/DETAIL:\s+Validator provided no identity/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+# Even if a validator authenticates the user, if the token isn't considered
+# valid, the connection fails.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'test\@example.org'");
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authorize_tokens TO false");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+$node->connect_fails(
+	"$common_connstr user=test",
+	"validator must authorize token explicitly",
+	expected_stderr => qr/OAuth bearer authentication failed/,
+	log_like => [
+		qr/connection authenticated: identity="test\@example\.org"/,
+		qr/DETAIL:\s+Validator failed to authorize the provided token/,
+		qr/FATAL:\s+OAuth bearer authentication failed/,
+	]);
+
+#
+# Test user mapping.
+#
+
+# Allow "user@example.com" to log in under the test role.
+unlink($node->data_dir . '/pg_ident.conf');
+$node->append_conf(
+	'pg_ident.conf', qq{
+oauthmap	user\@example.com	test
+});
+
+# test and testalt use the map; testparam uses ident delegation.
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test      oauth issuer="$issuer" scope="" map=oauthmap
+local all testalt   oauth issuer="$issuer" scope="" map=oauthmap
+local all testparam oauth issuer="$issuer" scope="" delegate_ident_mapping=1
+});
+
+# To start, have the validator use the role names as authn IDs.
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authorize_tokens");
+
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# The test and testalt roles should no longer map correctly.
+$node->connect_fails(
+	"$common_connstr user=test",
+	"mismatched username map (test)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# Have the validator identify the end user as user@example.com.
+$bgconn->query_safe(
+	"ALTER SYSTEM SET oauth_validator.authn_id TO 'user\@example.com'");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+# Now the test role can be logged into. (testalt still can't be mapped.)
+$node->connect_ok(
+	"$common_connstr user=test",
+	"matched username map (test)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+$node->connect_fails(
+	"$common_connstr user=testalt",
+	"mismatched username map (testalt)",
+	expected_stderr => qr/OAuth bearer authentication failed/);
+
+# testparam ignores the map entirely.
+$node->connect_ok(
+	"$common_connstr user=testparam",
+	"delegated ident (testparam)",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@);
+
+$bgconn->query_safe("ALTER SYSTEM RESET oauth_validator.authn_id");
+$node->reload;
+$log_start =
+  $node->wait_for_log(qr/reloading configuration files/, $log_start);
+
+#
+# Test multiple validators.
+#
+
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator, fail_validator'\n");
+
+# With multiple validators, every HBA line must explicitly declare one.
+my $result = $node->restart(fail_ok => 1);
+is($result, 0,
+	'restart fails without explicit validators in oauth HBA entries');
+
+$log_start = $node->wait_for_log(
+	qr/authentication method "oauth" requires argument "validator" to be set/,
+	$log_start);
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=validator      issuer="$issuer"           scope="openid postgres"
+local all testalt oauth validator=fail_validator issuer="$issuer/.well-known/oauth-authorization-server/alternate" scope="openid postgres alt"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+# The test user should work as before.
+$user = "test";
+$node->connect_ok(
+	"user=$user dbname=postgres oauth_issuer=$issuer oauth_client_id=f02c6361-0635",
+	"validator is used for $user",
+	expected_stderr =>
+	  qr@Visit https://example\.com/ and enter the code: postgresuser@,
+	log_like => [qr/connection authorized/]);
+
+# testalt should be routed through the fail_validator.
+$user = "testalt";
+$node->connect_fails(
+	"user=$user dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"fail_validator is used for $user",
+	expected_stderr => qr/FATAL:\s+fail_validator: sentinel error/);
+
+#
+# Test ABI compatibility magic marker
+#
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'magic_validator'\n");
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test    oauth validator=magic_validator      issuer="$issuer"           scope="openid postgres"
+});
+$node->restart;
+
+$log_start = $node->wait_for_log(qr/ready to accept connections/, $log_start);
+
+$node->connect_fails(
+	"user=test dbname=postgres oauth_issuer=$issuer/.well-known/oauth-authorization-server/alternate oauth_client_id=f02c6361-0636",
+	"magic_validator is used for test",
+	expected_stderr =>
+	  qr/FATAL:\s+OAuth validator module "magic_validator": magic number mismatch/
+);
+$node->stop;
+
+done_testing();
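(A side note for reviewers: the `connstr()` helper used throughout the tests above is defined in an earlier hunk of this file, not shown here. Its named parameters reach the mock server by being packed into the `oauth_client_id` field as base64-encoded JSON, which `do_POST()` in oauth_server.py later decodes. A rough standalone sketch of that encoding convention; the helper name here is illustrative, not part of the patch:)

```python
import base64
import json


def encode_test_params(**params):
    # The mock server expects parameterized tests to smuggle their settings
    # through the client_id field as base64-encoded JSON. This function name
    # is hypothetical; it only mirrors the convention used by the tests.
    js = json.dumps(params).encode("ascii")
    return base64.b64encode(js).decode("ascii")


client_id = encode_test_params(stage="token", error_code="invalid_grant")

# The server side recovers the original dict:
decoded = json.loads(base64.b64decode(client_id))
print(decoded["stage"], decoded["error_code"])
```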
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
new file mode 100644
index 00000000000..ab83258d736
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -0,0 +1,154 @@
+#
+# Exercises the API for custom OAuth client flows, using the oauth_hook_client
+# test driver.
+#
+# Copyright (c) 2021-2025, PostgreSQL Global Development Group
+#
+
+use strict;
+use warnings FATAL => 'all';
+
+use JSON::PP     qw(encode_json);
+use MIME::Base64 qw(encode_base64);
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
+{
+	plan skip_all =>
+	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
+}
+
+#
+# Cluster Setup
+#
+
+my $node = PostgreSQL::Test::Cluster->new('primary');
+$node->init;
+$node->append_conf('postgresql.conf', "log_connections = on\n");
+$node->append_conf('postgresql.conf',
+	"oauth_validator_libraries = 'validator'\n");
+$node->start;
+
+$node->safe_psql('postgres', 'CREATE USER test;');
+
+# These tests don't use the builtin flow, and we don't have an authorization
+# server running, so the address used here shouldn't matter. Use an invalid IP
+# address, so if there's some cascade of errors that causes the client to
+# attempt a connection, we'll fail noisily.
+my $issuer = "https://256.256.256.256";
+my $scope = "openid postgres";
+
+unlink($node->data_dir . '/pg_hba.conf');
+$node->append_conf(
+	'pg_hba.conf', qq{
+local all test oauth issuer="$issuer" scope="$scope"
+});
+$node->reload;
+
+my ($log_start, $log_end);
+$log_start = $node->wait_for_log(qr/reloading configuration files/);
+
+$ENV{PGOAUTHDEBUG} = "UNSAFE";
+
+#
+# Tests
+#
+
+my $user = "test";
+my $base_connstr = $node->connstr() . " user=$user";
+my $common_connstr =
+  "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
+sub test
+{
+	my ($test_name, %params) = @_;
+
+	my $flags = [];
+	if (defined($params{flags}))
+	{
+		$flags = $params{flags};
+	}
+
+	my @cmd = ("oauth_hook_client", @{$flags}, $common_connstr);
+	note "running '" . join("' '", @cmd) . "'";
+
+	my ($stdout, $stderr) = run_command(\@cmd);
+
+	if (defined($params{expected_stdout}))
+	{
+		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
+	}
+
+	if (defined($params{expected_stderr}))
+	{
+		like($stderr, $params{expected_stderr}, "$test_name: stderr matches");
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
+}
+
+test(
+	"basic synchronous hook can provide a token",
+	flags => [
+		"--token", "my-token",
+		"--expected-uri", "$issuer/.well-known/openid-configuration",
+		"--expected-scope", $scope,
+	],
+	expected_stdout => qr/connection succeeded/);
+
+$node->log_check("validator receives correct token",
+	$log_start,
+	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
+
+if ($ENV{with_libcurl} ne 'yes')
+{
+	# libpq should help users out if no OAuth support is built in.
+	test(
+		"fails without custom hook installed",
+		flags => ["--no-hook"],
+		expected_stderr =>
+		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+	);
+}
+
+# connect_timeout should work if the flow doesn't respond.
+$common_connstr = "$common_connstr connect_timeout=1";
+test(
+	"connect_timeout interrupts hung client flow",
+	flags => ["--hang-forever"],
+	expected_stderr => qr/failed: timeout expired/);
+
+# Test various misbehaviors of the client hook.
+my @cases = (
+	{
+		flag => "--misbehave=no-hook",
+		expected_error =>
+		  qr/user-defined OAuth flow provided neither a token nor an async callback/,
+	},
+	{
+		flag => "--misbehave=fail-async",
+		expected_error => qr/user-defined OAuth flow failed/,
+	},
+	{
+		flag => "--misbehave=no-token",
+		expected_error => qr/user-defined OAuth flow did not provide a token/,
+	},
+	{
+		flag => "--misbehave=no-socket",
+		expected_error =>
+		  qr/user-defined OAuth flow did not provide a socket for polling/,
+	});
+
+foreach my $c (@cases)
+{
+	test(
+		"hook misbehavior: $c->{'flag'}",
+		flags => [ $c->{'flag'} ],
+		expected_stderr => $c->{'expected_error'});
+}
+
+done_testing();
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
new file mode 100644
index 00000000000..655b2870b0b
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -0,0 +1,140 @@
+
+# Copyright (c) 2025, PostgreSQL Global Development Group
+
+=pod
+
+=head1 NAME
+
+OAuth::Server - runs a mock OAuth authorization server for testing
+
+=head1 SYNOPSIS
+
+  use OAuth::Server;
+
+  my $server = OAuth::Server->new();
+  $server->run;
+
+  my $port = $server->port;
+  my $issuer = "http://localhost:$port";
+
+  # test against $issuer...
+
+  $server->stop;
+
+=head1 DESCRIPTION
+
+This is the glue API between the Perl tests and the Python authorization server
+daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
+in its standard library, so the implementation was ported from Perl.)
+
+This authorization server does not use TLS (it implements a nonstandard, unsafe
+issuer at "http://localhost:<port>"), so libpq in particular will need to set
+PGOAUTHDEBUG=UNSAFE to be able to talk to it.
+
+=cut
+
+package OAuth::Server;
+
+use warnings;
+use strict;
+use Scalar::Util;
+use Test::More;
+
+=pod
+
+=head1 METHODS
+
+=over
+
+=item OAuth::Server->new()
+
+Create a new OAuth Server object.
+
+=cut
+
+sub new
+{
+	my $class = shift;
+
+	my $self = {};
+	bless($self, $class);
+
+	return $self;
+}
+
+=pod
+
+=item $server->port()
+
+Returns the port in use by the server.
+
+=cut
+
+sub port
+{
+	my $self = shift;
+
+	return $self->{'port'};
+}
+
+=pod
+
+=item $server->run()
+
+Runs the authorization server daemon in t/oauth_server.py.
+
+=cut
+
+sub run
+{
+	my $self = shift;
+	my $port;
+
+	my $pid = open(my $read_fh, "-|", $ENV{PYTHON}, "t/oauth_server.py")
+	  or die "failed to start OAuth server: $!";
+
+	# Get the port number from the daemon. It closes stdout afterwards; that way
+	# we can slurp in the entire contents here rather than worrying about the
+	# number of bytes to read.
+	$port = do { local $/ = undef; <$read_fh> }
+	  // die "failed to read port number: $!";
+	chomp $port;
+	die "server did not advertise a valid port"
+	  unless Scalar::Util::looks_like_number($port);
+
+	$self->{'pid'} = $pid;
+	$self->{'port'} = $port;
+	$self->{'child'} = $read_fh;
+
+	note("OAuth provider (PID $pid) is listening on port $port\n");
+}
+
+=pod
+
+=item $server->stop()
+
+Sends SIGTERM to the authorization server and waits for it to exit.
+
+=cut
+
+sub stop
+{
+	my $self = shift;
+
+	note("Sending SIGTERM to OAuth provider PID: $self->{'pid'}\n");
+
+	kill(15, $self->{'pid'});
+	$self->{'pid'} = undef;
+
+	# Closing the popen() handle waits for the process to exit.
+	close($self->{'child'});
+	$self->{'child'} = undef;
+}
+
+=pod
+
+=back
+
+=cut
+
+1;
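(The Python daemon below tracks per-client retry state with a `defaultdict` whose factory is a plain class, so each previously unseen `client_id` gets a fresh, independent state object on first access. A minimal standalone illustration of the pattern, with hypothetical keys:)

```python
from collections import defaultdict


# Class attributes act as the defaults for each fresh state object; calling
# the class (defaultdict's factory) creates an independent instance per key.
class TokenState:
    retries = 0
    last_try = None


states = defaultdict(TokenState)

states["client-a"].retries += 1  # entry auto-created on first access
states["client-a"].retries += 1

# "client-a" has accumulated state; "client-b" starts from the defaults.
print(states["client-a"].retries, states["client-b"].retries)
```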
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
new file mode 100755
index 00000000000..4faf3323d38
--- /dev/null
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -0,0 +1,391 @@
+#! /usr/bin/env python3
+#
+# A mock OAuth authorization server, designed to be invoked from
+# OAuth/Server.pm. This listens on an ephemeral port number (printed to stdout
+# so that the Perl tests can contact it) and runs as a daemon until it is
+# signaled.
+#
+
+import base64
+import http.server
+import json
+import os
+import sys
+import time
+import urllib.parse
+from collections import defaultdict
+
+
+class OAuthHandler(http.server.BaseHTTPRequestHandler):
+    """
+    Core implementation of the authorization server. The API is
+    inheritance-based, with entry points at do_GET() and do_POST(). See the
+    documentation for BaseHTTPRequestHandler.
+    """
+
+    JsonObject = dict[str, object]  # TypeAlias is not available until 3.10
+
+    def _check_issuer(self):
+        """
+        Switches the behavior of the provider depending on the issuer URI.
+        """
+        self._alt_issuer = (
+            self.path.startswith("/alternate/")
+            or self.path == "/.well-known/oauth-authorization-server/alternate"
+        )
+        self._parameterized = self.path.startswith("/param/")
+
+        if self._alt_issuer:
+            # The /alternate issuer uses IETF-style .well-known URIs.
+            if self.path.startswith("/.well-known/"):
+                self.path = self.path.removesuffix("/alternate")
+            else:
+                self.path = self.path.removeprefix("/alternate")
+        elif self._parameterized:
+            self.path = self.path.removeprefix("/param")
+
+    def _check_authn(self):
+        """
+        Checks the expected value of the Authorization header, if any.
+        """
+        secret = self._get_param("expected_secret", None)
+        if secret is None:
+            return
+
+        assert "Authorization" in self.headers
+        method, creds = self.headers["Authorization"].split()
+
+        if method != "Basic":
+            raise RuntimeError(f"client used {method} auth; expected Basic")
+
+        username = urllib.parse.quote_plus(self.client_id)
+        password = urllib.parse.quote_plus(secret)
+        expected_creds = f"{username}:{password}"
+
+        if creds.encode() != base64.b64encode(expected_creds.encode()):
+            raise RuntimeError(
+                f"client sent '{creds}'; expected b64encode('{expected_creds}')"
+            )
+
+    def do_GET(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        config_path = "/.well-known/openid-configuration"
+        if self._alt_issuer:
+            config_path = "/.well-known/oauth-authorization-server"
+
+        if self.path == config_path:
+            resp = self.config()
+        else:
+            self.send_error(404, "Not Found")
+            return
+
+        self._send_json(resp)
+
+    def _parse_params(self) -> dict[str, list[str]]:
+        """
+        Parses apart the form-urlencoded request body and returns the resulting
+        dict. For use by do_POST().
+        """
+        size = int(self.headers["Content-Length"])
+        form = self.rfile.read(size)
+
+        assert self.headers["Content-Type"] == "application/x-www-form-urlencoded"
+        return urllib.parse.parse_qs(
+            form.decode("utf-8"),
+            strict_parsing=True,
+            keep_blank_values=True,
+            encoding="utf-8",
+            errors="strict",
+        )
+
+    @property
+    def client_id(self) -> str:
+        """
+        Returns the client_id sent in the POST body or the Authorization header.
+        self._parse_params() must have been called first.
+        """
+        if "client_id" in self._params:
+            return self._params["client_id"][0]
+
+        if "Authorization" not in self.headers:
+            raise RuntimeError("client did not send any client_id")
+
+        _, creds = self.headers["Authorization"].split()
+
+        decoded = base64.b64decode(creds).decode("utf-8")
+        username, _ = decoded.split(":", 1)
+
+        return urllib.parse.unquote_plus(username)
+
+    def do_POST(self):
+        self._response_code = 200
+        self._check_issuer()
+
+        self._params = self._parse_params()
+        if self._parameterized:
+            # Pull encoded test parameters out of the peer's client_id field.
+            # This is expected to be Base64-encoded JSON.
+            js = base64.b64decode(self.client_id)
+            self._test_params = json.loads(js)
+
+        self._check_authn()
+
+        if self.path == "/authorize":
+            resp = self.authorization()
+        elif self.path == "/token":
+            resp = self.token()
+        else:
+            self.send_error(404)
+            return
+
+        self._send_json(resp)
+
+    def _should_modify(self) -> bool:
+        """
+        Returns True if the client has requested a modification to this stage of
+        the exchange.
+        """
+        if not hasattr(self, "_test_params"):
+            return False
+
+        stage = self._test_params.get("stage")
+
+        return (
+            stage == "all"
+            or (
+                stage == "discovery"
+                and self.path == "/.well-known/openid-configuration"
+            )
+            or (stage == "device" and self.path == "/authorize")
+            or (stage == "token" and self.path == "/token")
+        )
+
+    def _get_param(self, name, default):
+        """
+        If the client has requested a modification to this stage (see
+        _should_modify()), this method searches the provided test parameters for
+        a key of the given name, and returns it if found. Otherwise the provided
+        default is returned.
+        """
+        if self._should_modify() and name in self._test_params:
+            return self._test_params[name]
+
+        return default
+
+    @property
+    def _content_type(self) -> str:
+        """
+        Returns "application/json" unless the test has requested something
+        different.
+        """
+        return self._get_param("content_type", "application/json")
+
+    @property
+    def _interval(self) -> int:
+        """
+        Returns 0 unless the test has requested something different.
+        """
+        return self._get_param("interval", 0)
+
+    @property
+    def _retry_code(self) -> str:
+        """
+        Returns "authorization_pending" unless the test has requested something
+        different.
+        """
+        return self._get_param("retry_code", "authorization_pending")
+
+    @property
+    def _uri_spelling(self) -> str:
+        """
+        Returns "verification_uri" unless the test has requested something
+        different.
+        """
+        return self._get_param("uri_spelling", "verification_uri")
+
+    @property
+    def _response_padding(self):
+        """
+        If the huge_response test parameter is set to True, returns a dict
+        containing a gigantic string value, which can then be folded into a JSON
+        response.
+        """
+        if not self._get_param("huge_response", False):
+            return dict()
+
+        return {"_pad_": "x" * 1024 * 1024}
+
+    @property
+    def _access_token(self):
+        """
+        The actual Bearer token sent back to the client on success. Tests may
+        override this with the "token" test parameter.
+        """
+        token = self._get_param("token", None)
+        if token is not None:
+            return token
+
+        token = "9243959234"
+        if self._alt_issuer:
+            token += "-alt"
+
+        return token
+
+    def _send_json(self, js: JsonObject) -> None:
+        """
+        Sends the provided JSON dict as an application/json response.
+        self._response_code can be modified to send JSON error responses.
+        """
+        resp = json.dumps(js).encode("ascii")
+        self.log_message("sending JSON response: %s", resp)
+
+        self.send_response(self._response_code)
+        self.send_header("Content-Type", self._content_type)
+        self.send_header("Content-Length", str(len(resp)))
+        self.end_headers()
+
+        self.wfile.write(resp)
+
+    def config(self) -> JsonObject:
+        port = self.server.socket.getsockname()[1]
+
+        issuer = f"http://localhost:{port}"
+        if self._alt_issuer:
+            issuer += "/alternate"
+        elif self._parameterized:
+            issuer += "/param"
+
+        return {
+            "issuer": issuer,
+            "token_endpoint": issuer + "/token",
+            "device_authorization_endpoint": issuer + "/authorize",
+            "response_types_supported": ["token"],
+            "subject_types_supported": ["public"],
+            "id_token_signing_alg_values_supported": ["RS256"],
+            "grant_types_supported": [
+                "authorization_code",
+                "urn:ietf:params:oauth:grant-type:device_code",
+            ],
+        }
+
+    @property
+    def _token_state(self):
+        """
+        A cached _TokenState object for the connected client (as determined by
+        the request's client_id), or a new one if it doesn't already exist.
+
+        This relies on the existence of a defaultdict attached to the server;
+        see main() below.
+        """
+        return self.server.token_state[self.client_id]
+
+    def _remove_token_state(self):
+        """
+        Removes any cached _TokenState for the current client_id. Call this
+        after the token exchange ends to get rid of unnecessary state.
+        """
+        if self.client_id in self.server.token_state:
+            del self.server.token_state[self.client_id]
+
+    def authorization(self) -> JsonObject:
+        uri = "https://example.com/"
+        if self._alt_issuer:
+            uri = "https://example.org/"
+
+        resp = {
+            "device_code": "postgres",
+            "user_code": "postgresuser",
+            self._uri_spelling: uri,
+            "expires_in": 5,
+            **self._response_padding,
+        }
+
+        interval = self._interval
+        if interval is not None:
+            resp["interval"] = interval
+            self._token_state.min_delay = interval
+        else:
+            self._token_state.min_delay = 5  # default
+
+        # Check the scope.
+        if "scope" in self._params:
+            assert self._params["scope"][0], "empty scopes should be omitted"
+
+        return resp
+
+    def token(self) -> JsonObject:
+        if err := self._get_param("error_code", None):
+            self._response_code = self._get_param("error_status", 400)
+
+            resp = {"error": err}
+            if desc := self._get_param("error_desc", ""):
+                resp["error_description"] = desc
+
+            return resp
+
+        if self._should_modify() and "retries" in self._test_params:
+            retries = self._test_params["retries"]
+
+            # Check to make sure the token interval is being respected.
+            now = time.monotonic()
+            if self._token_state.last_try is not None:
+                delay = now - self._token_state.last_try
+                assert (
+                    delay > self._token_state.min_delay
+                ), f"client waited only {delay} seconds between token requests (expected {self._token_state.min_delay})"
+
+            self._token_state.last_try = now
+
+            # If we haven't reached the required number of retries yet, return a
+            # "pending" response.
+            if self._token_state.retries < retries:
+                self._token_state.retries += 1
+
+                self._response_code = 400
+                return {"error": self._retry_code}
+
+        # Clean up any retry tracking state now that the exchange is ending.
+        self._remove_token_state()
+
+        return {
+            "access_token": self._access_token,
+            "token_type": "bearer",
+            **self._response_padding,
+        }
+
+
+def main():
+    """
+    Starts the authorization server on localhost. The ephemeral port in use will
+    be printed to stdout.
+    """
+
+    s = http.server.HTTPServer(("127.0.0.1", 0), OAuthHandler)
+
+    # Attach a "cache" dictionary to the server to allow the OAuthHandlers to
+    # track state across token requests. The use of defaultdict ensures that new
+    # entries will be created automatically.
+    class _TokenState:
+        retries = 0
+        min_delay = None
+        last_try = None
+
+    s.token_state = defaultdict(_TokenState)
+
+    # Give the parent the port number to contact (this is also the signal that
+    # we're ready to receive requests).
+    port = s.socket.getsockname()[1]
+    print(port)
+
+    # stdout is closed to allow the parent to just "read to the end".
+    stdout = sys.stdout.fileno()
+    sys.stdout.close()
+    os.close(stdout)
+
+    s.serve_forever()  # we expect our parent to send a termination signal
+
+
+if __name__ == "__main__":
+    main()
diff --git a/src/test/modules/oauth_validator/validator.c b/src/test/modules/oauth_validator/validator.c
new file mode 100644
index 00000000000..b2e5d182e1b
--- /dev/null
+++ b/src/test/modules/oauth_validator/validator.c
@@ -0,0 +1,143 @@
+/*-------------------------------------------------------------------------
+ *
+ * validator.c
+ *	  Test module for server-side OAuth token validation callbacks
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/test/modules/oauth_validator/validator.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "libpq/oauth.h"
+#include "miscadmin.h"
+#include "utils/guc.h"
+#include "utils/memutils.h"
+
+PG_MODULE_MAGIC;
+
+static void validator_startup(ValidatorModuleState *state);
+static void validator_shutdown(ValidatorModuleState *state);
+static bool validate_token(const ValidatorModuleState *state,
+						   const char *token,
+						   const char *role,
+						   ValidatorModuleResult *result);
+
+/* Callback implementations (exercise all three) */
+static const OAuthValidatorCallbacks validator_callbacks = {
+	PG_OAUTH_VALIDATOR_MAGIC,
+
+	.startup_cb = validator_startup,
+	.shutdown_cb = validator_shutdown,
+	.validate_cb = validate_token
+};
+
+/* GUCs */
+static char *authn_id = NULL;
+static bool authorize_tokens = true;
+
+/*---
+ * Extension entry point. Sets up GUCs for use by tests:
+ *
+ * - oauth_validator.authn_id	Sets the user identifier to return during token
+ *								validation. Defaults to the username in the
+ *								startup packet.
+ *
+ * - oauth_validator.authorize_tokens
+ *								Sets whether to successfully validate incoming
+ *								tokens. Defaults to true.
+ */
+void
+_PG_init(void)
+{
+	DefineCustomStringVariable("oauth_validator.authn_id",
+							   "Authenticated identity to use for future connections",
+							   NULL,
+							   &authn_id,
+							   NULL,
+							   PGC_SIGHUP,
+							   0,
+							   NULL, NULL, NULL);
+	DefineCustomBoolVariable("oauth_validator.authorize_tokens",
+							 "Should tokens be marked valid?",
+							 NULL,
+							 &authorize_tokens,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL, NULL, NULL);
+
+	MarkGUCPrefixReserved("oauth_validator");
+}
+
+/*
+ * Validator module entry point.
+ */
+const OAuthValidatorCallbacks *
+_PG_oauth_validator_module_init(void)
+{
+	return &validator_callbacks;
+}
+
+#define PRIVATE_COOKIE ((void *) 13579)
+
+/*
+ * Startup callback, to set up private data for the validator.
+ */
+static void
+validator_startup(ValidatorModuleState *state)
+{
+	/*
+	 * Make sure the server is correctly setting sversion. (Real modules
+	 * should not do this; it would defeat upgrade compatibility.)
+	 */
+	if (state->sversion != PG_VERSION_NUM)
+		elog(ERROR, "oauth_validator: sversion set to %d", state->sversion);
+
+	state->private_data = PRIVATE_COOKIE;
+}
+
+/*
+ * Shutdown callback, to tear down the validator.
+ */
+static void
+validator_shutdown(ValidatorModuleState *state)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(PANIC, "oauth_validator: private state cookie changed to %p in shutdown",
+			 state->private_data);
+}
+
+/*
+ * Validator implementation. Logs the incoming data and authorizes the token by
+ * default; the behavior can be modified via the module's GUC settings.
+ */
+static bool
+validate_token(const ValidatorModuleState *state,
+			   const char *token, const char *role,
+			   ValidatorModuleResult *res)
+{
+	/* Check to make sure our private state still exists. */
+	if (state->private_data != PRIVATE_COOKIE)
+		elog(ERROR, "oauth_validator: private state cookie changed to %p in validate",
+			 state->private_data);
+
+	elog(LOG, "oauth_validator: token=\"%s\", role=\"%s\"", token, role);
+	elog(LOG, "oauth_validator: issuer=\"%s\", scope=\"%s\"",
+		 MyProcPort->hba->oauth_issuer,
+		 MyProcPort->hba->oauth_scope);
+
+	res->authorized = authorize_tokens;
+	if (authn_id)
+		res->authn_id = pstrdup(authn_id);
+	else
+		res->authn_id = pstrdup(role);
+
+	return true;
+}
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index f521ad0b12f..f31af70edb6 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -2515,6 +2515,11 @@ instead of the default.
 
 If this regular expression is set, matches it with the output generated.
 
+=item expected_stderr => B<value>
+
+If this regular expression is set, matches it against the standard error
+stream; otherwise stderr must be empty.
+
 =item log_like => [ qr/required message/ ]
 
 =item log_unlike => [ qr/prohibited message/ ]
@@ -2558,7 +2563,22 @@ sub connect_ok
 		like($stdout, $params{expected_stdout}, "$test_name: stdout matches");
 	}
 
-	is($stderr, "", "$test_name: no stderr");
+	if (defined($params{expected_stderr}))
+	{
+		if (like(
+				$stderr, $params{expected_stderr},
+				"$test_name: stderr matches")
+			&& ($ret != 0))
+		{
+			# In this case (failing test but matching stderr) we'll have
+			# swallowed the output needed to debug. Put it back into the logs.
+			diag("$test_name: full stderr:\n" . $stderr);
+		}
+	}
+	else
+	{
+		is($stderr, "", "$test_name: no stderr");
+	}
 
 	$self->log_check($test_name, $log_location, %params);
 }
diff --git a/src/tools/pgindent/pgindent b/src/tools/pgindent/pgindent
index d8acce7e929..7dccf4614aa 100755
--- a/src/tools/pgindent/pgindent
+++ b/src/tools/pgindent/pgindent
@@ -242,6 +242,14 @@ sub pre_indent
 	# Protect wrapping in CATALOG()
 	$source =~ s!^(CATALOG\(.*)$!/*$1*/!gm;
 
+	# Treat a CURL_IGNORE_DEPRECATION() as braces for the purposes of
+	# indentation. (The recursive regex comes from the perlre documentation; it
+	# matches balanced parentheses as group $1 and the contents as group $2.)
+	my $curlopen = '{ /* CURL_IGNORE_DEPRECATION */';
+	my $curlclose = '} /* CURL_IGNORE_DEPRECATION */';
+	$source =~
+	  s!^[ \t]+CURL_IGNORE_DEPRECATION(\(((?:(?>[^()]+)|(?1))*)\))!$curlopen$2$curlclose!gms;
+
 	return $source;
 }
 
@@ -256,6 +264,12 @@ sub post_indent
 	$source =~ s!^/\* Open extern "C" \*/$!{!gm;
 	$source =~ s!^/\* Close extern "C" \*/$!}!gm;
 
+	# Restore the CURL_IGNORE_DEPRECATION() macro, keeping in mind that our
+	# markers may have been re-indented.
+	$source =~
+	  s!{[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!CURL_IGNORE_DEPRECATION(!gm;
+	$source =~ s!}[ \t]+/\* CURL_IGNORE_DEPRECATION \*/!)!gm;
+
 	## Comments
 
 	# Undo change of dash-protected block comments
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 64c6bf7a891..befdcaf2425 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -371,6 +371,9 @@ CState
 CTECycleClause
 CTEMaterialize
 CTESearchClause
+CURL
+CURLM
+CURLoption
 CV
 CachedExpression
 CachedPlan
@@ -1724,6 +1727,7 @@ NumericDigit
 NumericSortSupport
 NumericSumAccum
 NumericVar
+OAuthValidatorCallbacks
 OM_uint32
 OP
 OSAPerGroupState
@@ -1833,6 +1837,7 @@ PGVerbosity
 PG_Locale_Strategy
 PG_Lock_Status
 PG_init_t
+PGauthData
 PGcancel
 PGcancelConn
 PGcmdQueueEntry
@@ -1840,7 +1845,9 @@ PGconn
 PGdataValue
 PGlobjfuncs
 PGnotify
+PGoauthBearerRequest
 PGpipelineStatus
+PGpromptOAuthDevice
 PGresAttDesc
 PGresAttValue
 PGresParamDesc
@@ -1953,6 +1960,7 @@ PQArgBlock
 PQEnvironmentOption
 PQExpBuffer
 PQExpBufferData
+PQauthDataHook_type
 PQcommMethods
 PQconninfoOption
 PQnoticeProcessor
@@ -3094,6 +3102,8 @@ VacuumRelation
 VacuumStmt
 ValidIOData
 ValidateIndexState
+ValidatorModuleState
+ValidatorModuleResult
 ValuesScan
 ValuesScanState
 Var
@@ -3491,6 +3501,7 @@ explain_get_index_name_hook_type
 f_smgr
 fasthash_state
 fd_set
+fe_oauth_state
 fe_scram_state
 fe_scram_state_enum
 fetch_range_request
-- 
2.39.3 (Apple Git-146)

#212Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#211)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Feb 19, 2025 at 6:13 AM Daniel Gustafsson <daniel@yesql.se> wrote:

The attached rebased has your 0002 fix as well as some minor tweaks like a few
small whitespace changes from a pgperltidy run and a copyright date fix which
still said 2024.

LGTM.

Thanks!
--Jacob

#213Daniel Gustafsson
daniel@yesql.se
In reply to: Daniel Gustafsson (#211)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 19 Feb 2025, at 15:13, Daniel Gustafsson <daniel@yesql.se> wrote:

Unless something shows up I plan to commit this sometime tomorrow to allow it
ample time in the tree before the freeze.

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

--
Daniel Gustafsson

#214Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#213)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 20, 2025 at 8:30 AM Daniel Gustafsson <daniel@yesql.se> wrote:

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

Thank you!! And _huge thanks_ to everyone who's reviewed and provided
feedback. I'm going to start working with Andrew on getting the new
tests going in the buildfarm.

If you've been reading along and would like to get started with OAuth
validators and flows, but don't know where to start, please reach out.
The proof of the new APIs will be in the using, and the best time to
tell me if you hate those APIs is now :D

--Jacob

#215Tom Lane
tgl@sss.pgh.pa.us
In reply to: Daniel Gustafsson (#213)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Daniel Gustafsson <daniel@yesql.se> writes:

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

After doing an in-tree "make check", I see

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Untracked files:
(use "git add <file>..." to include in what will be committed)
src/test/modules/oauth_validator/oauth_hook_client

Looks like we're short a .gitignore entry. (It does appear that
"make clean" cleans it up, at least.)

regards, tom lane

#216Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#215)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Feb 20, 2025 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Looks like we're short a .gitignore entry. (It does appear that
"make clean" cleans it up, at least.)

So we are! Sorry about that. The attached patch gets in-tree builds
clean for me again.

Thanks,
--Jacob

Attachments:

fix-gitignore.patch (application/octet-stream)
diff --git a/src/test/modules/oauth_validator/.gitignore b/src/test/modules/oauth_validator/.gitignore
index 5dcb3ff9723..8f18bcd2e1c 100644
--- a/src/test/modules/oauth_validator/.gitignore
+++ b/src/test/modules/oauth_validator/.gitignore
@@ -2,3 +2,4 @@
 /log/
 /results/
 /tmp_check/
+/oauth_hook_client
#217Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#216)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 20 Feb 2025, at 21:21, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Thu, Feb 20, 2025 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Looks like we're short a .gitignore entry. (It does appear that
"make clean" cleans it up, at least.)

So we are! Sorry about that. The attached patch gets in-tree builds
clean for me again.

Fixed, thanks for the report!

--
Daniel Gustafsson

#218Andres Freund
andres@anarazel.de
In reply to: Jacob Champion (#214)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-02-20 09:28:36 -0800, Jacob Champion wrote:

On Thu, Feb 20, 2025 at 8:30 AM Daniel Gustafsson <daniel@yesql.se> wrote:

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

Thank you!! And _huge thanks_ to everyone who's reviewed and provided
feedback. I'm going to start working with Andrew on getting the new
tests going in the buildfarm.

+1

One question about interruptibility. The docs say:

+    <varlistentry>
+     <term>Interruptibility</term>
+     <listitem>
+      <para>
+       Modules must remain interruptible by signals so that the server can
+       correctly handle authentication timeouts and shutdown signals from
+       <application>pg_ctl</application>. For example, a module receiving
+       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
+       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
+       The same should be done during any long-running loops. Failure to follow
+       this guidance may result in unresponsive backend sessions.
+      </para>
+     </listitem>
+    </varlistentry>

Is EINTR checking really sufficient?

I don't think we can generally rely on all blocking system calls to be
interruptible by signals on all platforms?

And, probably worse, isn't relying on getting EINTR rather racy, due to the
chance of the signal arriving between CHECK_FOR_INTERRUPTS() and the blocking
system call?

Afaict the only real way to do safely across platforms is to never call
blocking functions, e.g. by using non-blocking sockets and waiting for
readiness using latches.

And a second one about supporting !CURL_VERSION_ASYNCHDNS:

Is it a good idea to support that? We e.g. rely on libpq connections made by
the backend to be interruptible. Admittedly that's, I think, already not
bulletproof, due to libpq's DNS lookups going through libc if connection
string contains a name that needs to be looked up, but this seems to make that
a bit worse? With the connection string the DNS lookups can at least be
avoided, not that I think we properly document that...

Greetings,

Andres

#219Daniel Gustafsson
daniel@yesql.se
In reply to: Andres Freund (#218)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 21 Feb 2025, at 18:18, Andres Freund <andres@anarazel.de> wrote:

One question about interruptibility. The docs say:
....

Afaict the only real way to do safely across platforms is to never call
blocking functions, e.g. by using non-blocking sockets and waiting for
readiness using latches.

Fair point, we'll work on a proposed new wording for this.

And a second one about supporting !CURL_VERSION_ASYNCHDNS:

Is it a good idea to support that?

We could block building instead of issuing the current warning, but that's about the best we can do. I spent some time skimming the package definitions for the major distributions and OSes and couldn't find any that use synchronous DNS.

--
Daniel Gustafsson

#220Tom Lane
tgl@sss.pgh.pa.us
In reply to: Daniel Gustafsson (#213)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Daniel Gustafsson <daniel@yesql.se> writes:

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

Coverity has a nit-pick about this:

/srv/coverity/git/pgsql-git/postgresql/src/interfaces/libpq/fe-auth-oauth.c: 784 in setup_token_request()
778 if (!request_copy)
779 {
780 libpq_append_conn_error(conn, "out of memory");
781 goto fail;
782 }
783

CID 1643156: High impact quality (WRITE_CONST_FIELD)
A write to an aggregate overwrites a const-qualified field within the aggregate.

784 memcpy(request_copy, &request, sizeof(request));
785
786 conn->async_auth = run_user_oauth_flow;
787 conn->cleanup_async_auth = cleanup_user_oauth_flow;
788 state->async_ctx = request_copy;
789 }

This is evidently because of the fields declared const:

/* Hook inputs (constant across all calls) */
const char *const openid_configuration; /* OIDC discovery URI */
const char *const scope; /* required scope(s), or NULL */

IMO, the set of cases where it's legitimate to mark individual struct
fields as const is negligibly small, and this doesn't seem to be one
of them. It's not obvious to me where/how PGoauthBearerRequest
structs are supposed to be constructed, but I find it hard to believe
that they will all spring full-grown from the forehead of Zeus.
Nonetheless, this declaration requires exactly that.

(I'm kind of surprised that we're not getting similar bleats from
any buildfarm animals, but so far I don't see any.)

BTW, as another nitpicky style matter: why do PGoauthBearerRequest
etc. spell their struct tag names differently from their typedef names
(that is, with/without an underscore)? That is not our project style
anywhere else, and I'm failing to detect a good reason to do it here.

regards, tom lane

#221Daniel Gustafsson
daniel@yesql.se
In reply to: Tom Lane (#220)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 23 Feb 2025, at 17:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Daniel Gustafsson <daniel@yesql.se> writes:

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

Coverity has a nit-pick about this:

/srv/coverity/git/pgsql-git/postgresql/src/interfaces/libpq/fe-auth-oauth.c: 784 in setup_token_request()
778 if (!request_copy)
779 {
780 libpq_append_conn_error(conn, "out of memory");
781 goto fail;
782 }
783

CID 1643156: High impact quality (WRITE_CONST_FIELD)
A write to an aggregate overwrites a const-qualified field within the aggregate.

784 memcpy(request_copy, &request, sizeof(request));
785
786 conn->async_auth = run_user_oauth_flow;
787 conn->cleanup_async_auth = cleanup_user_oauth_flow;
788 state->async_ctx = request_copy;
789 }

This is evidently because of the fields declared const:

/* Hook inputs (constant across all calls) */
const char *const openid_configuration; /* OIDC discovery URI */
const char *const scope; /* required scope(s), or NULL */

IMO, the set of cases where it's legitimate to mark individual struct
fields as const is negligibly small, and this doesn't seem to be one
of them.

Thanks for the report, will fix.

BTW, as another nitpicky style matter: why do PGoauthBearerRequest
etc. spell their struct tag names differently from their typedef names
(that is, with/without an underscore)? That is not our project style
anywhere else, and I'm failing to detect a good reason to do it here.

Indeed it isn't, the only explanation is that I missed it. Will fix.

--
Daniel Gustafsson

#222Daniel Gustafsson
daniel@yesql.se
In reply to: Daniel Gustafsson (#221)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

IMO, the set of cases where it's legitimate to mark individual struct
fields as const is negligibly small, and this doesn't seem to be one
of them.

Thanks for the report, will fix.

BTW, as another nitpicky style matter: why do PGoauthBearerRequest
etc. spell their struct tag names differently from their typedef names
(that is, with/without an underscore)? That is not our project style
anywhere else, and I'm failing to detect a good reason to do it here.

Indeed it isn't, the only explanation is that I missed it. Will fix.

The attached diff passes CI and works for me, will revisit in the morning.

--
Daniel Gustafsson

Attachments:

structfixups.diff (application/octet-stream)
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ddb3596df83..8fa0515c6a0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10318,21 +10318,21 @@ typedef struct _PGpromptOAuthDevice
          of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
          by the implementation:
 <synopsis>
-typedef struct _PGoauthBearerRequest
+typedef struct PGoauthBearerRequest
 {
     /* Hook inputs (constant across all calls) */
-    const char *const openid_configuration; /* OIDC discovery URL */
-    const char *const scope;                /* required scope(s), or NULL */
+    const char *openid_configuration; /* OIDC discovery URL */
+    const char *scope;                /* required scope(s), or NULL */
 
     /* Hook outputs */
 
     /* Callback implementing a custom asynchronous OAuth flow. */
     PostgresPollingStatusType (*async) (PGconn *conn,
-                                        struct _PGoauthBearerRequest *request,
+                                        struct PGoauthBearerRequest *request,
                                         SOCKTYPE *altsock);
 
     /* Callback to clean up custom allocations. */
-    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+    void        (*cleanup) (PGconn *conn, struct PGoauthBearerRequest *request);
 
     char       *token;   /* acquired Bearer token */
     void       *user;    /* hook-defined allocated data */
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index b7399dee58e..34ddfdb1831 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -745,11 +745,11 @@ typedef struct _PGpromptOAuthDevice
 #define SOCKTYPE int
 #endif
 
-typedef struct _PGoauthBearerRequest
+typedef struct PGoauthBearerRequest
 {
 	/* Hook inputs (constant across all calls) */
-	const char *const openid_configuration; /* OIDC discovery URI */
-	const char *const scope;	/* required scope(s), or NULL */
+	const char *openid_configuration;	/* OIDC discovery URI */
+	const char *scope;			/* required scope(s), or NULL */
 
 	/* Hook outputs */
 
@@ -770,7 +770,7 @@ typedef struct _PGoauthBearerRequest
 	 * request->token must be set by the hook.
 	 */
 	PostgresPollingStatusType (*async) (PGconn *conn,
-										struct _PGoauthBearerRequest *request,
+										struct PGoauthBearerRequest *request,
 										SOCKTYPE * altsock);
 
 	/*
@@ -780,7 +780,7 @@ typedef struct _PGoauthBearerRequest
 	 * This is technically optional, but highly recommended, because there is
 	 * no other indication as to when it is safe to free the token.
 	 */
-	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+	void		(*cleanup) (PGconn *conn, struct PGoauthBearerRequest *request);
 
 	/*
 	 * The hook should set this to the Bearer token contents for the
#223Daniel Gustafsson
daniel@yesql.se
In reply to: Daniel Gustafsson (#222)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 24 Feb 2025, at 00:45, Daniel Gustafsson <daniel@yesql.se> wrote:

The attached diff passes CI and works for me, will revisit in the morning.

Dave reported (on Discord) that the OPTIONAL macro collided with windef.h, so
attached is a small fix for that as well. Even though we don't support Windows
here right now, there is little point in clashing, since we don't need that
particular macro name anyway, so rename it.

--
Daniel Gustafsson

Attachments:

0002-oauth-Rename-macro-to-avoid-collisions-on-Windows.patch (application/octet-stream)
From 9f13bd3e416263d7cc444245dc5f858478cc0562 Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Mon, 24 Feb 2025 09:49:18 +0100
Subject: [PATCH 2/2] oauth: Rename macro to avoid collisions on Windows

Our json parsing defined the macros OPTIONAL and REQUIRED to decorate the
structs with for increased readability. This however collides with macros
in the <windef.h> header on Windows.

../src/interfaces/libpq/fe-auth-oauth-curl.c:398:9: warning: "OPTIONAL" redefined
  398 | #define OPTIONAL false
      |         ^~~~~~~~
In file included from D:/a/_temp/msys64/ucrt64/include/windef.h:9,
                 from D:/a/_temp/msys64/ucrt64/include/windows.h:69,
                 from D:/a/_temp/msys64/ucrt64/include/winsock2.h:23,
                 from ../src/include/port/win32_port.h:60,
                 from ../src/include/port.h:24,
                 from ../src/include/c.h:1331,
                 from ../src/include/postgres_fe.h:28,
                 from ../src/interfaces/libpq/fe-auth-oauth-curl.c:16:
include/minwindef.h:65:9: note: this is the location of the previous definition
   65 | #define OPTIONAL
      |         ^~~~~~~~

Rename to avoid compilation errors in anticipation of implementing
support for Windows.

Reported-by: Dave Cramer (on PostgreSQL Hacking Discord)
---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 34 +++++++++++------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index a80e2047bb7..ae339579f88 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -394,8 +394,8 @@ struct json_field
 };
 
 /* Documentation macros for json_field.required. */
-#define REQUIRED true
-#define OPTIONAL false
+#define PG_OAUTH_REQUIRED true
+#define PG_OAUTH_OPTIONAL false
 
 /* Parse state for parse_oauth_json(). */
 struct oauth_parse
@@ -844,8 +844,8 @@ static bool
 parse_provider(struct async_ctx *actx, struct provider *provider)
 {
 	struct json_field fields[] = {
-		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, REQUIRED},
-		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, REQUIRED},
+		{"issuer", JSON_TOKEN_STRING, {&provider->issuer}, PG_OAUTH_REQUIRED},
+		{"token_endpoint", JSON_TOKEN_STRING, {&provider->token_endpoint}, PG_OAUTH_REQUIRED},
 
 		/*----
 		 * The following fields are technically REQUIRED, but we don't use
@@ -857,8 +857,8 @@ parse_provider(struct async_ctx *actx, struct provider *provider)
 		 * - id_token_signing_alg_values_supported
 		 */
 
-		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, OPTIONAL},
-		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, OPTIONAL},
+		{"device_authorization_endpoint", JSON_TOKEN_STRING, {&provider->device_authorization_endpoint}, PG_OAUTH_OPTIONAL},
+		{"grant_types_supported", JSON_TOKEN_ARRAY_START, {.array = &provider->grant_types_supported}, PG_OAUTH_OPTIONAL},
 
 		{0},
 	};
@@ -955,24 +955,24 @@ static bool
 parse_device_authz(struct async_ctx *actx, struct device_authz *authz)
 {
 	struct json_field fields[] = {
-		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, REQUIRED},
-		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, REQUIRED},
-		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
-		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, REQUIRED},
+		{"device_code", JSON_TOKEN_STRING, {&authz->device_code}, PG_OAUTH_REQUIRED},
+		{"user_code", JSON_TOKEN_STRING, {&authz->user_code}, PG_OAUTH_REQUIRED},
+		{"verification_uri", JSON_TOKEN_STRING, {&authz->verification_uri}, PG_OAUTH_REQUIRED},
+		{"expires_in", JSON_TOKEN_NUMBER, {&authz->expires_in_str}, PG_OAUTH_REQUIRED},
 
 		/*
 		 * Some services (Google, Azure) spell verification_uri differently.
 		 * We accept either.
 		 */
-		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, REQUIRED},
+		{"verification_url", JSON_TOKEN_STRING, {&authz->verification_uri}, PG_OAUTH_REQUIRED},
 
 		/*
 		 * There is no evidence of verification_uri_complete being spelled
 		 * with "url" instead with any service provider, so only support
 		 * "uri".
 		 */
-		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, OPTIONAL},
-		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, OPTIONAL},
+		{"verification_uri_complete", JSON_TOKEN_STRING, {&authz->verification_uri_complete}, PG_OAUTH_OPTIONAL},
+		{"interval", JSON_TOKEN_NUMBER, {&authz->interval_str}, PG_OAUTH_OPTIONAL},
 
 		{0},
 	};
@@ -1010,9 +1010,9 @@ parse_token_error(struct async_ctx *actx, struct token_error *err)
 {
 	bool		result;
 	struct json_field fields[] = {
-		{"error", JSON_TOKEN_STRING, {&err->error}, REQUIRED},
+		{"error", JSON_TOKEN_STRING, {&err->error}, PG_OAUTH_REQUIRED},
 
-		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, OPTIONAL},
+		{"error_description", JSON_TOKEN_STRING, {&err->error_description}, PG_OAUTH_OPTIONAL},
 
 		{0},
 	};
@@ -1069,8 +1069,8 @@ static bool
 parse_access_token(struct async_ctx *actx, struct token *tok)
 {
 	struct json_field fields[] = {
-		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, REQUIRED},
-		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, REQUIRED},
+		{"access_token", JSON_TOKEN_STRING, {&tok->access_token}, PG_OAUTH_REQUIRED},
+		{"token_type", JSON_TOKEN_STRING, {&tok->token_type}, PG_OAUTH_REQUIRED},
 
 		/*---
 		 * We currently have no use for the following OPTIONAL fields:
-- 
2.39.3 (Apple Git-146)

0001-oauth-Fix-incorrect-const-markers-in-struct.patch (application/octet-stream)
From 577e8918c67dc66c0ec1686f92c85eb63c045bcc Mon Sep 17 00:00:00 2001
From: Daniel Gustafsson <dgustafsson@postgresql.org>
Date: Mon, 24 Feb 2025 09:46:53 +0100
Subject: [PATCH 1/2] oauth: Fix incorrect const markers in struct

Two members in PGoauthBearerRequest were incorrectly marked as const.
While in there, align the name of the struct with the typedef as per
project style.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/912516.1740329361@sss.pgh.pa.us
---
 doc/src/sgml/libpq.sgml         | 10 +++++-----
 src/interfaces/libpq/libpq-fe.h | 10 +++++-----
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index ddb3596df83..8fa0515c6a0 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10318,21 +10318,21 @@ typedef struct _PGpromptOAuthDevice
          of <symbol>PGoauthBearerRequest</symbol>, which should be filled in
          by the implementation:
 <synopsis>
-typedef struct _PGoauthBearerRequest
+typedef struct PGoauthBearerRequest
 {
     /* Hook inputs (constant across all calls) */
-    const char *const openid_configuration; /* OIDC discovery URL */
-    const char *const scope;                /* required scope(s), or NULL */
+    const char *openid_configuration; /* OIDC discovery URL */
+    const char *scope;                /* required scope(s), or NULL */
 
     /* Hook outputs */
 
     /* Callback implementing a custom asynchronous OAuth flow. */
     PostgresPollingStatusType (*async) (PGconn *conn,
-                                        struct _PGoauthBearerRequest *request,
+                                        struct PGoauthBearerRequest *request,
                                         SOCKTYPE *altsock);
 
     /* Callback to clean up custom allocations. */
-    void        (*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+    void        (*cleanup) (PGconn *conn, struct PGoauthBearerRequest *request);
 
     char       *token;   /* acquired Bearer token */
     void       *user;    /* hook-defined allocated data */
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index b7399dee58e..34ddfdb1831 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -745,11 +745,11 @@ typedef struct _PGpromptOAuthDevice
 #define SOCKTYPE int
 #endif
 
-typedef struct _PGoauthBearerRequest
+typedef struct PGoauthBearerRequest
 {
 	/* Hook inputs (constant across all calls) */
-	const char *const openid_configuration; /* OIDC discovery URI */
-	const char *const scope;	/* required scope(s), or NULL */
+	const char *openid_configuration;	/* OIDC discovery URI */
+	const char *scope;			/* required scope(s), or NULL */
 
 	/* Hook outputs */
 
@@ -770,7 +770,7 @@ typedef struct _PGoauthBearerRequest
 	 * request->token must be set by the hook.
 	 */
 	PostgresPollingStatusType (*async) (PGconn *conn,
-										struct _PGoauthBearerRequest *request,
+										struct PGoauthBearerRequest *request,
 										SOCKTYPE * altsock);
 
 	/*
@@ -780,7 +780,7 @@ typedef struct _PGoauthBearerRequest
 	 * This is technically optional, but highly recommended, because there is
 	 * no other indication as to when it is safe to free the token.
 	 */
-	void		(*cleanup) (PGconn *conn, struct _PGoauthBearerRequest *request);
+	void		(*cleanup) (PGconn *conn, struct PGoauthBearerRequest *request);
 
 	/*
 	 * The hook should set this to the Bearer token contents for the
-- 
2.39.3 (Apple Git-146)

#224Andrew Dunstan
andrew@dunslane.net
In reply to: Daniel Gustafsson (#213)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 2025-02-20 Th 11:29 AM, Daniel Gustafsson wrote:

On 19 Feb 2025, at 15:13, Daniel Gustafsson<daniel@yesql.se> wrote:
Unless something shows up I plan to commit this sometime tomorrow to allow it
ample time in the tree before the freeze.

I spent a few more hours staring at this, and ran it through a number of CI and
local builds, without anything showing up. Pushed to master with the first set
of buildfarm animals showing green builds.

I notice that 001_server.pl contains this:

if ($ENV{with_python} ne 'yes')
{
    plan skip_all => 'OAuth tests require --with-python to run';
}

and various other things that insist on this. But I think all we should
need is for Python to be present, whether or not we are building plpython.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com

#225Daniel Gustafsson
daniel@yesql.se
In reply to: Andrew Dunstan (#224)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 24 Feb 2025, at 15:41, Andrew Dunstan <andrew@dunslane.net> wrote:

I notice that 001_server.pl contains this:

if ($ENV{with_python} ne 'yes')
{
plan skip_all => 'OAuth tests require --with-python to run';
}

and various other things that insist on this. But I think all we should need is for Python to be present, whether or not we are building plpython.

Agreed. Right now the --with-python check is what runs PGAC_PATH_PYTHON but
maybe there is value in separating this going forward into a) have python; b)
have python and build plpython?

--
Daniel Gustafsson

#226Andres Freund
andres@anarazel.de
In reply to: Daniel Gustafsson (#225)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-02-24 15:48:13 +0100, Daniel Gustafsson wrote:

On 24 Feb 2025, at 15:41, Andrew Dunstan <andrew@dunslane.net> wrote:

I notice that 001_server.pl contains this:

if ($ENV{with_python} ne 'yes')
{
plan skip_all => 'OAuth tests require --with-python to run';
}

and various other things that insist on this. But I think all we should need is for Python to be present, whether or not we are building plpython.

Agreed. Right now the --with-python check is what runs PGAC_PATH_PYTHON but
maybe there is value in separating this going forward into a) have python; b)
have python and build plpython?

FWIW, for other things we need in tests, e.g. gzip, we don't actually use a
configure test. I think we could do the same for the python commandline
binary.

Greetings,

Andres Freund

#227Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Andres Freund (#218)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Feb 21, 2025 at 9:19 AM Andres Freund <andres@anarazel.de> wrote:

I don't think we can generally rely on all blocking system calls to be
interruptible by signals on all platforms?

Probably not; I wasn't sure how much detail to put in here after "must
remain interruptible."

And, probably worse, isn't relying on getting EINTR rather racy, due to the
chance of the signal arriving between CHECK_FOR_INTERRUPTS() and the blocking
system call?

That is worse, and to be honest I hadn't ever thought about that race
condition before your email. So wait... how do people actually rely on
EINTR in production-grade client software? pselect/ppoll exist,
clearly, but they're used so rarely IME. (Have we all just been
subconsciously trained to mash Ctrl-C until the program finally stops?
I'm honestly kind of horrified by this revelation.)

Anyway, yes, this documentation clearly needs to be strengthened.

Is it a good idea to support that? We e.g. rely on libpq connections made by
the backend to be interruptible. Admittedly that's, I think, already not
bulletproof, due to libpq's DNS lookups going through libc if the
connection string contains a name that needs to be looked up, but this seems to make that
a bit worse?

A bit. The same for Kerberos, IIRC. Is the current configure warning
not strong enough to imply that the packager is on shaky ground? (I
patterned that off of the LDAP crash warning, which seemed much worse
to me. :D)

We can always declare non-support, I suppose, but anyone who doesn't
care about that problem (say, because they just want to use
single-connection psql) is then stuck hacking around it.

--Jacob

#228Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#220)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, Feb 23, 2025 at 8:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

IMO, the set of cases where it's legitimate to mark individual struct
fields as const is negligibly small, and this doesn't seem to be one
of them. It's not obvious to me where/how PGoauthBearerRequest
structs are supposed to be constructed, but I find it hard to believe
that they will all spring full-grown from the forehead of Zeus.
Nonetheless, this declaration requires exactly that.

As read-only inputs to the client API, they're not meant to be changed
for the lifetime of the struct (and the lifetime of the client flow).
The only place to initialize such a struct is directly above this
code.

(I'm kind of surprised that we're not getting similar bleats from
any buildfarm animals, but so far I don't see any.)

Is there a reason for compilers to complain? memcpy's the way I know
of to put a const-member struct on the heap, but maybe there are other
ways that don't annoy Coverity?

If the cost of this warning is too high, removing the const
declarations isn't the end of the world. But we use unconstify and
other type-punning copies in so many other places that this didn't
seem all that bad for the goal of helping out the client writer.

BTW, as another nitpicky style matter: why do PGoauthBearerRequest
etc. spell their struct tag names differently from their typedef names
(that is, with/without an underscore)? That is not our project style
anywhere else, and I'm failing to detect a good reason to do it here.

This underscore pattern was copied directly from PQconninfoOption and
PQprintOpt.

Thanks,
--Jacob

#229Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#223)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Feb 24, 2025 at 1:00 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Dave reported (on Discord) that the OPTIONAL macro collided with windef.h,

Ugh.

so
attached is a small fix for that as well (even though we don't support Windows
here right now, there is little point in clashing since we don't need that
particular macro name, so rename it anyway).

No objections to those patches, from inspection. (See note to Tom,
above, but I won't die on the const hill.)

Thanks!
--Jacob

#230Andres Freund
andres@anarazel.de
In reply to: Jacob Champion (#227)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-02-24 09:39:52 -0800, Jacob Champion wrote:

On Fri, Feb 21, 2025 at 9:19 AM Andres Freund <andres@anarazel.de> wrote:

And, probably worse, isn't relying on getting EINTR rather racy, due to the
chance of the signal arriving between CHECK_FOR_INTERRUPTS() and the blocking
system call?

That is worse, and to be honest I hadn't ever thought about that race
condition before your email. So wait... how do people actually rely on
EINTR in production-grade client software? pselect/ppoll exist,
clearly, but they're used so rarely IME. (Have we all just been
subconsciously trained to mash Ctrl-C until the program finally stops?
I'm honestly kind of horrified by this revelation.)

If you need to handle the race you need to combine it with something
additional, e.g. the so-called "self-pipe trick", which e.g. the latch / wait
event set code does.

Is it a good idea to support that? We e.g. rely on libpq connections made by
the backend to be interruptible. Admittedly that's, I think, already not
bulletproof, due to libpq's DNS lookups going through libc if the
connection string contains a name that needs to be looked up, but this seems to make that
a bit worse?

A bit. The same for Kerberos, IIRC. Is the current configure warning
not strong enough to imply that the packager is on shaky ground?

I don't think it's strong enough.

(I patterned that off of the LDAP crash warning, which seemed much worse to
me. :D)

I don't think that's a comparable case, because there were in-production uses
of PG+ldap that (kind of) worked. Whereas we start on a green field here.

Greetings,

Andres Freund

#231Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Andres Freund (#230)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Feb 24, 2025 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:

If you need to handle the race you need to combine it with something
additional, e.g. the so-called "self-pipe trick", which e.g. the latch / wait
event set code does.

Right; I'm just used to that trick being deployed in massively
parallel async event engines rather than linear synchronous code
waiting on a single descriptor. I'm still a bit in disbelief, to be
honest. I'll get over it. Thank you for the note!

A bit. The same for Kerberos, IIRC. Is the current configure warning
not strong enough to imply that the packager is on shaky ground?

I don't think it's strong enough.

(I patterned that off of the LDAP crash warning, which seemed much worse to
me. :D)

I don't think that's a comparable case, because there were in-production uses
of PG+ldap that (kind of) worked. Whereas we start on a green field here.

Fair enough. I'll work on a patch to disallow it; best case, no one
ever complains, and we've pruned an entire configuration from the list
of things to worry about.

Thanks!
--Jacob

#232Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#231)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Feb 24, 2025 at 2:02 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Fair enough. I'll work on a patch to disallow it; best case, no one
ever complains, and we've pruned an entire configuration from the list
of things to worry about.

Here goes:

- 0001 fails configuration if the AsynchDNS feature is not built into libcurl.
- 0002 removes EINTR references from the validator documentation and
instead points authors towards our internal Wait APIs.
- 0003 is an optional followup to the const changes from upthread:
there's no need to memcpy() now, and anyone reading the code without
the history might wonder why I chose such a convoluted way to copy a
struct. :D

WDYT?

--Jacob

Attachments:

0002-oauth-Improve-validator-docs-on-interruptibility.patch (application/octet-stream)
From aca624e05b9b0c46f1f6a17c66af02e71c201dab Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 25 Feb 2025 07:42:43 -0800
Subject: [PATCH 2/3] oauth: Improve validator docs on interruptibility

Andres pointed out that EINTR handling is inadequate for real-world use
cases. Direct module writers to our wait APIs instead.

Discussion: https://postgr.es/m/p4bd7mn6dxr2zdak74abocyltpfdxif4pxqzixqpxpetjwt34h%40qc6jgfmoddvq
---
 doc/src/sgml/oauth-validators.sgml | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index 356f11d3bd8..704089dd7b3 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -209,11 +209,13 @@
       <para>
        Modules must remain interruptible by signals so that the server can
        correctly handle authentication timeouts and shutdown signals from
-       <application>pg_ctl</application>. For example, a module receiving
-       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
-       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
-       The same should be done during any long-running loops. Failure to follow
-       this guidance may result in unresponsive backend sessions.
+       <application>pg_ctl</application>. For example, blocking calls on sockets
+       should generally be replaced with code that handles both socket events
+       and interrupts without races (see <function>WaitLatchOrSocket()</function>,
+       <function>WaitEventSetWait()</function>, et al), and long-running loops
+       should periodically call <function>CHECK_FOR_INTERRUPTS()</function>.
+       Failure to follow this guidance may result in unresponsive backend
+       sessions.
       </para>
      </listitem>
     </varlistentry>
-- 
2.34.1

0001-oauth-Disallow-synchronous-DNS-in-libcurl.patch (application/octet-stream)
From f6fb7ca5cc6a9b8e749a22516d282181a4ec5d8f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 24 Feb 2025 15:02:01 -0800
Subject: [PATCH 1/3] oauth: Disallow synchronous DNS in libcurl

There is concern that a blocking DNS lookup in libpq could stall a
backend process (say, via FDW). Since there's currently no strong
evidence that synchronous DNS is a popular option, disallow it entirely
rather than warning at configure time. We can revisit if anyone
complains.

Per query from Andres Freund.

Discussion: https://postgr.es/m/p4bd7mn6dxr2zdak74abocyltpfdxif4pxqzixqpxpetjwt34h%40qc6jgfmoddvq
---
 config/programs.m4 | 10 +++++-----
 configure          | 14 +++++---------
 meson.build        | 18 ++++++------------
 3 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 061b13376ac..0a07feb37cc 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -316,7 +316,7 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
               [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
   fi
 
-  # Warn if a thread-friendly DNS resolver isn't built.
+  # Fail if a thread-friendly DNS resolver isn't built.
   AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
   [AC_RUN_IFELSE([AC_LANG_PROGRAM([
 #include <curl/curl.h>
@@ -332,10 +332,10 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
   [pgac_cv__libcurl_async_dns=yes],
   [pgac_cv__libcurl_async_dns=no],
   [pgac_cv__libcurl_async_dns=unknown])])
-  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
-    AC_MSG_WARN([
+  if test x"$pgac_cv__libcurl_async_dns" = xno ; then
+    AC_MSG_ERROR([
 *** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs.])
+*** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
+*** to use it with libpq.])
   fi
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 93fddd69981..559f535f5cd 100755
--- a/configure
+++ b/configure
@@ -12493,7 +12493,7 @@ $as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
 
   fi
 
-  # Warn if a thread-friendly DNS resolver isn't built.
+  # Fail if a thread-friendly DNS resolver isn't built.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
 $as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
 if ${pgac_cv__libcurl_async_dns+:} false; then :
@@ -12535,15 +12535,11 @@ fi
 fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
 $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
-  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
-    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
-*** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs." >&5
-$as_echo "$as_me: WARNING:
+  if test x"$pgac_cv__libcurl_async_dns" = xno ; then
+    as_fn_error $? "
 *** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs." >&2;}
+*** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
+*** to use it with libpq." "$LINENO" 5
   fi
 
 fi
diff --git a/meson.build b/meson.build
index 13c13748e5d..b6daa5b7040 100644
--- a/meson.build
+++ b/meson.build
@@ -909,9 +909,7 @@ if not libcurlopt.disabled()
       cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
     endif
 
-    # Warn if a thread-friendly DNS resolver isn't built.
-    libcurl_async_dns = false
-
+    # Fail if a thread-friendly DNS resolver isn't built.
     if not meson.is_cross_build()
       r = cc.run('''
         #include <curl/curl.h>
@@ -931,16 +929,12 @@ if not libcurlopt.disabled()
       )
 
       assert(r.compiled())
-      if r.returncode() == 0
-        libcurl_async_dns = true
-      endif
-    endif
-
-    if not libcurl_async_dns
-      warning('''
+      if r.returncode() != 0
+        error('''
 *** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs.''')
+*** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
+*** to use it with libpq.''')
+      endif
     endif
   endif
 
-- 
2.34.1

0003-oauth-Simplify-copy-of-PGoauthBearerRequest.patch (application/octet-stream)
From dea3b63b7bd2196b504a940e515f6940b1794a60 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 24 Feb 2025 15:43:09 -0800
Subject: [PATCH 3/3] oauth: Simplify copy of PGoauthBearerRequest

Follow-up to 03366b61d. Since there are no more const members in the
PGoauthBearerRequest struct, the previous memcpy() can be replaced with
simple assignment.
---
 src/interfaces/libpq/fe-auth-oauth.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index fb1e9a1a8aa..cf1a25e2ccc 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -781,7 +781,7 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 			goto fail;
 		}
 
-		memcpy(request_copy, &request, sizeof(request));
+		*request_copy = request;
 
 		conn->async_auth = run_user_oauth_flow;
 		conn->cleanup_async_auth = cleanup_user_oauth_flow;
-- 
2.34.1

#233Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#232)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

If you trigger the new optional NetBSD CI task, the oauthvalidator
tests implode[1]. Apparently that OS's kevent() doesn't like zero
relative timeouts for EVFILT_TIMER[2]. I see that you worked around
the same problem for Linux timerfd already by rounding 0 up to 1, so
we could just do the same here, and it passes with the attached. A
cute alternative, not tested, might be to put NOTE_ABSTIME into fflag
if timeout == 0 (then it's an absolute time in the past, which should
fire immediately).

But I'm curious, how hard would it be to do this ↓ instead and not
have that problem on any OS?

* There might be an optimization opportunity here: if timeout == 0, we
* could signal drive_request to immediately call
* curl_multi_socket_action, rather than returning all the way up the
* stack only to come right back. But it's not clear that the additional
* code complexity is worth it.

[1]: https://cirrus-ci.com/task/6354435774873600
[2]: https://github.com/NetBSD/src/blob/67c7c4658e77aa4534b6aac8c041d77097c5e722/sys/kern/kern_event.c#L1375

Attachments:

0001-Fix-OAUTH-on-NetBSD.patch (text/x-patch; charset=US-ASCII)
From 7deb153caf552c9bcb380f88eddbca94be33a0c2 Mon Sep 17 00:00:00 2001
From: Thomas Munro <thomas.munro@gmail.com>
Date: Sat, 1 Mar 2025 00:27:01 +1300
Subject: [PATCH] Fix OAUTH on NetBSD.

NetBSD's EVFILT_TIMER doesn't like zero relative timeouts and fails with
EINVAL.  Steal the workaround from the same problem on Linux from a few
lines up: round 0 up to 1.

This could be seen on the optional NetBSD CI task, but not on the NetBSD
build farm animals because they aren't finding curl and testing this
code.
---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index ae339579f88..6e60a81574d 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1363,6 +1363,16 @@ set_timer(struct async_ctx *actx, long timeout)
 #ifdef HAVE_SYS_EVENT_H
 	struct kevent ev;
 
+#ifdef __NetBSD__
+
+	/*
+	 * Work around NetBSD's rejection of zero timeouts (EINVAL), a bit like
+	 * timerfd above.
+	 */
+	if (timeout == 0)
+		timeout = 1;
+#endif
+
 	/* Enable/disable the timer itself. */
 	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
 		   0, timeout, 0);
-- 
2.48.1

#234Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#233)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Feb 28, 2025 at 5:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:

If you trigger the new optional NetBSD CI task, the oauthvalidator
tests implode[1].

Oh, thank you for reporting that. I need to pay more attention to the
BSD CI thread.

Apparently that OS's kevent() doesn't like zero
relative timeouts for EVFILT_TIMER[2]. I see that you worked around
the same problem for Linux timerfd already by rounding 0 up to 1, so
we could just do the same here, and it passes with the attached.

Just from inspection, that looks good to me. I'll look into running
the new BSD tasks on the other patches I posted above, too.

Should we maybe consider just doing that across the board, and put up
with the inefficiency? Admittedly 1ms is a lot more dead time than
1ns...

A
cute alternative, not tested, might be to put NOTE_ABSTIME into fflag
if timeout == 0 (then it's an absolute time in the past, which should
fire immediately).

That could work. I think I need to stare at the man pages more if we
go that direction, since (IIUC) NOTE_ABSTIME changes up some other
default behavior for the timer.

But I'm curious, how hard would it be to do this ↓ instead and not
have that problem on any OS?

* There might be an optimization opportunity here: if timeout == 0, we
* could signal drive_request to immediately call
* curl_multi_socket_action, rather than returning all the way up the
* stack only to come right back. But it's not clear that the additional
* code complexity is worth it.

I'm not sure if it's hard, so much as it is confusing. My first
attempt at it a while back gave me the feeling that I wouldn't
remember how it worked in a few months.

Here are the things that I think we would have to consider, at minimum:
1. Every call to curl_multi_socket_action/all now has to look for the
new "time out immediately" flag after returning.
2. If it is set, and if actx->running has been set to zero, we have to
decide what that means conceptually. (Is that an impossible case? Or
do we ignore the timeout and assume the request is done/failed?)
3. Otherwise, we need to clear the flag and immediately call
curl_multi_socket_action(CURL_SOCKET_TIMEOUT), and repeat until that
flag is no longer set. That feels brittle to me, because if there's
some misunderstanding in our code or some strange corner case in an
old version of Curl on some platform, and we keep getting a timeout of
zero, we'll hit an infinite loop. (The current behavior instead
returns control to the top level every time, and gives
curl_multi_socket_all() a chance to right the ship by checking the
status of all the outstanding sockets.)
4. Even if 2 and 3 are just FUD, there's another potential call to
set_timer(0) in OAUTH_STEP_TOKEN_REQUEST. The interval can only be
zero in debug mode, so if we add new code for that case alone, the
tests will be mostly exercising a non-production code path.

I prefer your patch, personally.

Thanks!
--Jacob

#235Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#234)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Feb 28, 2025 at 9:37 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Just from inspection, that looks good to me. I'll look into running
the new BSD tasks on the other patches I posted above, too.

After your patch gets us past the zero timeout bug, NetBSD next runs into

failed to fetch OpenID discovery document: Unsupported protocol
(libcurl: Received HTTP/0.9 when not allowed)'

...but only for a single test (nonempty oauth_client_secret), which is
very strange. And it doesn't always fail during the same HTTP request.
I'll look into it.

--Jacob

#236Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#234)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sat, Mar 1, 2025 at 6:37 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Should we maybe consider just doing that across the board, and put up
with the inefficiency? Admittedly 1ms is a lot more dead time than
1ns...

Last time I checked, NetBSD is still using scheduler ticks (100hz
periodic interrupt) for all this kind of stuff so it's even worse than
that :-)

I prefer your patch, personally.

Cool, I'll commit it shortly unless someone else comes up with a better idea.

#237Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#235)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sat, Mar 1, 2025 at 10:57 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

After your patch gets us past the zero timeout bug, NetBSD next runs into

failed to fetch OpenID discovery document: Unsupported protocol
(libcurl: Received HTTP/0.9 when not allowed)'

...but only for a single test (nonempty oauth_client_secret), which is
very strange. And it doesn't always fail during the same HTTP request.
I'll look into it.

In case it's relevant, it was green for me, but I also ran it in
combination with my 3x-go-faster patch on that other thread. . o O {
Timing/race stuff? Normally the build farm shakes that stuff out a
bit more reliably than CI, but I doubt libcurl is set up on many
animals... }

#238Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#237)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Feb 28, 2025 at 4:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:

In case it's relevant, it was green for me, but I also ran it in
combination with my 3x-go-faster patch on that other thread. . o O {
Timing/race stuff? Normally the build farm shakes that stuff out a
bit more reliably than CI, but I doubt libcurl is set up on many
animals... }

That does help, thanks. Luckily, I can still sometimes reproduce with
that patch, which should speed things up nicely.

Commenting out the failing test causes the next test to fail with
basically the same error, so there's something stateful going on.
There are some suspicious messages that occasionally show up right
before the failure:

# [libcurl] * IPv6: ::1
# [libcurl] * IPv4: 127.0.0.1
# [libcurl] * Trying [::1]:65269...
# [libcurl] * getsockname() failed with errno 22: Invalid argument
# [libcurl] * connect to ::1 port 65269 from ::1 port 65270
failed: Connection refused
# [libcurl] * Trying 127.0.0.1:65269...
# [libcurl] * Connected to localhost (127.0.0.1) port 65269

Later, Curl reconnects via IPv6 -- this time succeeding -- but then
the response gets mangled in some way. I assume headers are being
truncated, based on Curl's complaint about "HTTP/0.9".

The NetBSD man pages say that EINVAL is returned when the socket is
already shut down, suggesting some sort of bad interaction between
Curl and the test authorization server (and/or the OS?). I wonder if
my test server doesn't handle dual-stack setups correctly. I'll see if
I can get ktruss working on either side.

Thanks,
--Jacob

#239Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#238)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Mar 4, 2025 at 1:07 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

# [libcurl] * getsockname() failed with errno 22: Invalid argument

Weird.

Later, Curl reconnects via IPv6 -- this time succeeding -- but then
the response gets mangled in some way. I assume headers are being
truncated, based on Curl's complaint about "HTTP/0.9".

And weirder. With no evidence, I kinda wonder if that part could be a
bug in curl, if it gets a failure in an unexpected place like that and
gets confused, but let's start with the error...

The NetBSD man pages say that EINVAL is returned when the socket is
already shut down, suggesting some sort of bad interaction between
Curl and the test authorization server (and/or the OS?). I wonder if
my test server doesn't handle dual-stack setups correctly. I'll see if
I can get ktruss working on either side.

POSIX says that about getsockname() too, as does the macOS man page,
but not those of FreeBSD, OpenBSD or Linux, but I don't think that's
the issue. (From a quick peek: FreeBSD (in_getsockaddr) asserts that
sotoinpcb(so) is not NULL so it's always able to cough up the address
from the protocol control block object, while NetBSD (tcp_sockaddr)
and macOS/XNU (in6_getsockaddr) check for NULL and return EINVAL, so
at a wild guess, there may be some path somewhere that knows the socket
is shutdown in both directions and all data has been drained so it can
be destroyed, and POSIX doesn't require sockets in this state to be
able to tell you their address so that's all OK?)

It's also unspecified if you haven't called connect() or bind() (well,
POSIX actually says that the address it gives you is unspecified, not
that the error is unspecified...).

I tried on a NetBSD 9 Vagrant box I had lying around, and ... ahh:

28729 1 psql write(0x2, 0x7f7fffddf3d0, 0x24) = 36
"[libcurl] * Trying [::1]:64244...\n"
28729 1 psql setsockopt(0x9, 0x6, 0x1, 0x7f7fffde015c, 0x4) = 0
28729 1 psql setsockopt(0x9, 0xffff, 0x800, 0x7f7fffde015c, 0x4) = 0
26362 1 perl __select50 = 1
28729 1 psql connect Err#36 EINPROGRESS
26362 1 perl read = 36
28729 1 psql getsockname(0x9, 0x7f7fffde0240,
0x7f7fffde023c) Err#22 EINVAL

Other times it succeeds:

28729 1 psql connect Err#36 EINPROGRESS
28729 1 psql getsockname = 0

I think that is telling us that a non-blocking socket can be in a
state that is not yet connected enough even to tell you its local
address? That is, connect() returns without having allocated a local
address, and does that part asynchronously too? I don't know what to
think about that yet...

https://stackoverflow.com/questions/25333547/is-it-safe-to-call-getsockname-while-a-nonblocking-stream-socket-is-connecting

#240Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#239)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Mar 3, 2025 at 4:07 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I wonder if
my test server doesn't handle dual-stack setups correctly.

Spoilers: it's this.

I'll see if
I can get ktruss working on either side.

ktruss shows absolutely no syscall activity on the authorization
server during the failing test, because Curl's talking to something
else. sockstat confirms that I completely forgot to listen on IPv6 in
the test server. Dual stack sockets only work from the IPv6
direction...

There must be some law of conservation of weirdness, where the
strangest failure modes have the most boring explanations. I'll work
on a fix.

On Mon, Mar 3, 2025 at 8:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I think that is telling us that a non-blocking socket can be in a
state that is not yet connected enough even to tell you its local
address? That is, connect() returns without having allocated a local
address, and does that part asynchronously too? I don't know what to
think about that yet...

That is also really good to know, though. So that EINVAL message
might, in the end, be completely unrelated to the bug? (Curl doesn't
worry about the error, looks like, just prints it to the debug
stream.)

Thanks!
--Jacob

#241Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#240)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Mar 5, 2025 at 6:08 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

ktruss shows absolutely no syscall activity on the authorization
server during the failing test, because Curl's talking to something
else. sockstat confirms that I completely forgot to listen on IPv6 in
the test server. Dual stack sockets only work from the IPv6
direction...

There must be some law of conservation of weirdness, where the
strangest failure modes have the most boring explanations. I'll work
on a fix.

Heh, wow, that was confusing :-) Actually I'm still confused (why
passing sometimes then?) but I'm sure all will become clear with your
patch...

On Mon, Mar 3, 2025 at 8:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I think that is telling us that a non-blocking socket can be in a
state that is not yet connected enough even to tell you its local
address? That is, connect() returns without having allocated a local
address, and does that part asynchronously too? I don't know what to
think about that yet...

That is also really good to know, though. So that EINVAL message
might, in the end, be completely unrelated to the bug? (Curl doesn't
worry about the error, looks like, just prints it to the debug
stream.)

Yeah. I wonder if it only happens if the connection is doomed to fail
already, or something. But I don't plan to dig further if it's
harmless (maybe curl shouldn't really do that, IDK).

#242Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#241)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Mar 4, 2025 at 2:38 PM Thomas Munro <thomas.munro@gmail.com> wrote:

Heh, wow, that was confusing :-) Actually I'm still confused (why
passing sometimes then?)

Curl doesn't mind if the IPv6 connection fails outright; it'll use the
IPv4 in that case. But if something else ephemeral pops up on IPv6 and
starts speaking something that's not HTTP, that's a problem.

but I'm sure all will become clear with your
patch...

Maybe. My first attempt gets all the BSDs green except macOS -- which
now fails in a completely different test, haha... -_-

I'll keep you posted.

--Jacob

#243Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#242)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Mar 4, 2025 at 2:44 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Maybe. My first attempt gets all the BSDs green except macOS -- which
now fails in a completely different test, haha... -_-

Small update: there is not one bug, but three that interact. ಠ_ಠ

1) The test server advertises an issuer of `http://localhost:<port>`,
but it doesn't listen on all localhost interfaces. When Curl tries to
contact the issuer on IPv6, its Happy Eyeballs handling usually falls
back to IPv4 after discovering that IPv6 is nonfunctional, but
occasionally it contacts something that was temporarily listening
there instead.

Since I don't really want to write a bunch of IPv6 fallback code for
the test server -- this should be testing OAuth, not finding all the
ways that buildfarm OSes can expose dual stack sockets -- I changed
the issuer to be IPv4-only. When I did this, the interval timing tests
immediately failed on macOS.
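
As a side note on why the hostname matters here (a hedged sketch, not the patch itself): resolving "localhost" can yield candidates in both address families, and an HTTP client is free to try the IPv6 result first, while "127.0.0.1" pins resolution to a single IPv4 candidate.

```python
import socket

def candidate_families(host):
    # Which address families would a client consider for this host?
    infos = socket.getaddrinfo(host, 80, type=socket.SOCK_STREAM)
    return {info[0] for info in infos}

# A literal IPv4 address resolves to exactly one family.
print(candidate_families("127.0.0.1"))

# candidate_families("localhost") may additionally include AF_INET6,
# depending on the platform's hosts file and resolver configuration --
# which is what invites Happy Eyeballs behavior from libcurl.
```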

2) macOS's EVFILT_TIMER implementation seems to be different from the
other BSDs. On Mac, when you re-add a timer to a kqueue, any existing
timer-fired events for it are not cleared out and the kqueue might
remain readable. This breaks a postcondition of our set_timer()
function, which is that new timeouts are supposed to completely
replace previous timeouts.

With a dual stack issuer, the Happy Eyeballs timeouts would be
routinely cleared out by libcurl, setting up a clean slate for the
next call to set_timer(). But with an IPv4-only issuer, libcurl didn't
need to clear out the timeouts (they'd already fired), which meant
that our call to set the ping interval was ineffective.

3) There is a related performance bug on other platforms. If a Curl
timeout happens partway through a request (so libcurl won't clear it),
the timer-expired event will stay set and CPU will be burned to spin
pointlessly on drive_request(). This is much easier to notice after
taking Happy Eyeballs out of the picture. It doesn't cause logical
failures -- Curl basically discards the unnecessary calls -- but it's
definitely unintended.

--

Problem 1 is a simple patch. I am working on a fix for Problem 2, but
I got stuck trying to get a "perfect" solution working yesterday...
Since this is a partial(?) blocker for getting NetBSD going, I'm going
to pivot to an ugly-but-simple approach today.

I plan to defer working on Problem 3, which should just be a
performance bug, until the tests are green again. And I would like to
eventually add some stronger unit tests for the timer behavior, to
catch other potential OS-specific problems in the future.

Thanks,
--Jacob

#244Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#243)
5 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 6, 2025 at 12:57 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Problem 1 is a simple patch. I am working on a fix for Problem 2, but
I got stuck trying to get a "perfect" solution working yesterday...
Since this is a partial(?) blocker for getting NetBSD going, I'm going
to pivot to an ugly-but-simple approach today.

Attached:
- 0001 fixes IPv6 failures,
- 0002 fixes set_timer() on Mac, and
- 0003-0005 are the existing followup patches from upthread.

Thanks!
--Jacob

Attachments:

0001-oauth-Use-IPv4-only-issuer-in-oauth_validator-tests.patch (application/octet-stream)
From 4e1024eb47a3d19145a8db42a48d55d608ad4054 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 4 Mar 2025 09:41:19 -0800
Subject: [PATCH 1/5] oauth: Use IPv4-only issuer in oauth_validator tests

The test authorization server implemented in oauth_server.py does not
listen on IPv6. Most of the time, libcurl happily falls back to IPv4
after failing its initial connection, but on NetBSD, something is
consistently showing up on the unreserved IPv6 port and causing a test
failure.

Rather than deal with dual-stack details across all test platforms,
change the issuer to enforce the use of IPv4 only. (This elicits more
punishing timeout behavior from libcurl, so it's a useful change from
the testing perspective as well.)

Reported-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2Bn4EDOOUL27_OqYT2-F2rS6S%2B3mK-ppWb2Ec92UEoUbYA%40mail.gmail.com
---
 src/test/modules/oauth_validator/t/001_server.pl   | 2 +-
 src/test/modules/oauth_validator/t/OAuth/Server.pm | 4 ++--
 src/test/modules/oauth_validator/t/oauth_server.py | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 6fa59fbeb25..30295364ebd 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -68,7 +68,7 @@ END
 }
 
 my $port = $webserver->port();
-my $issuer = "http://localhost:$port";
+my $issuer = "http://127.0.0.1:$port";
 
 unlink($node->data_dir . '/pg_hba.conf');
 $node->append_conf(
diff --git a/src/test/modules/oauth_validator/t/OAuth/Server.pm b/src/test/modules/oauth_validator/t/OAuth/Server.pm
index 655b2870b0b..52ae7afa991 100644
--- a/src/test/modules/oauth_validator/t/OAuth/Server.pm
+++ b/src/test/modules/oauth_validator/t/OAuth/Server.pm
@@ -15,7 +15,7 @@ OAuth::Server - runs a mock OAuth authorization server for testing
   $server->run;
 
   my $port = $server->port;
-  my $issuer = "http://localhost:$port";
+  my $issuer = "http://127.0.0.1:$port";
 
   # test against $issuer...
 
@@ -28,7 +28,7 @@ daemon implemented in t/oauth_server.py. (Python has a fairly usable HTTP server
 in its standard library, so the implementation was ported from Perl.)
 
 This authorization server does not use TLS (it implements a nonstandard, unsafe
-issuer at "http://localhost:<port>"), so libpq in particular will need to set
+issuer at "http://127.0.0.1:<port>"), so libpq in particular will need to set
 PGOAUTHDEBUG=UNSAFE to be able to talk to it.
 
 =cut
diff --git a/src/test/modules/oauth_validator/t/oauth_server.py b/src/test/modules/oauth_validator/t/oauth_server.py
index 4faf3323d38..5bc30be87fd 100755
--- a/src/test/modules/oauth_validator/t/oauth_server.py
+++ b/src/test/modules/oauth_validator/t/oauth_server.py
@@ -251,7 +251,7 @@ class OAuthHandler(http.server.BaseHTTPRequestHandler):
     def config(self) -> JsonObject:
         port = self.server.socket.getsockname()[1]
 
-        issuer = f"http://localhost:{port}"
+        issuer = f"http://127.0.0.1:{port}"
         if self._alt_issuer:
             issuer += "/alternate"
         elif self._parameterized:
-- 
2.34.1

0002-oauth-Fix-postcondition-for-set_timer-on-BSD.patch (application/octet-stream)
From 410dce1670a04de81d533dd1b5456e363b144ca8 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 6 Mar 2025 15:02:37 -0800
Subject: [PATCH 2/5] oauth: Fix postcondition for set_timer on BSD

On macOS, readding an EVFILT_TIMER to a kqueue does not appear to clear
out previously queued timer events, so checks for timer expiration do
not work correctly during token retrieval. Switching to IPv4-only
communication exposes the problem, because libcurl is no longer clearing
out other timeouts related to Happy Eyeballs dual-stack handling.

Fully remove and re-register the kqueue timer events during each call to
set_timer(), to clear out any stale expirations.

Discussion: https://postgr.es/m/CAOYmi%2Bn4EDOOUL27_OqYT2-F2rS6S%2B3mK-ppWb2Ec92UEoUbYA%40mail.gmail.com
---
 src/interfaces/libpq/fe-auth-oauth-curl.c | 48 +++++++++++++++++------
 1 file changed, 35 insertions(+), 13 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 6e60a81574d..2d6d4b1a123 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1326,6 +1326,10 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
  * in the set at all times and just disarm it when it's not needed. For kqueue,
  * the timer is removed completely when disabled to prevent stale timeouts from
  * remaining in the queue.
+ *
+ * To meet Curl requirements for the CURLMOPT_TIMERFUNCTION, implementations of
+ * set_timer must handle repeated calls by fully discarding any previous running
+ * or expired timer.
  */
 static bool
 set_timer(struct async_ctx *actx, long timeout)
@@ -1373,26 +1377,44 @@ set_timer(struct async_ctx *actx, long timeout)
 		timeout = 1;
 #endif
 
-	/* Enable/disable the timer itself. */
-	EV_SET(&ev, 1, EVFILT_TIMER, timeout < 0 ? EV_DELETE : (EV_ADD | EV_ONESHOT),
-		   0, timeout, 0);
+	/*
+	 * Always disable the timer, and remove it from the multiplexer, to clear
+	 * out any already-queued events. (On some BSDs, adding an EVFILT_TIMER to
+	 * a kqueue that already has one will clear stale events, but not on
+	 * macOS.)
+	 *
+	 * If there was no previous timer set, the kevent calls will result in
+	 * ENOENT, which is fine.
+	 */
+	EV_SET(&ev, 1, EVFILT_TIMER, EV_DELETE, 0, 0, 0);
 	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
 	{
-		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		actx_error(actx, "deleting kqueue timer: %m", timeout);
 		return false;
 	}
 
-	/*
-	 * Add/remove the timer to/from the mux. (In contrast with epoll, if we
-	 * allowed the timer to remain registered here after being disabled, the
-	 * mux queue would retain any previous stale timeout notifications and
-	 * remain readable.)
-	 */
-	EV_SET(&ev, actx->timerfd, EVFILT_READ, timeout < 0 ? EV_DELETE : EV_ADD,
-		   0, 0, 0);
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, EV_DELETE, 0, 0, 0);
 	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
 	{
-		actx_error(actx, "could not update timer on kqueue: %m");
+		actx_error(actx, "removing kqueue timer from multiplexer: %m");
+		return false;
+	}
+
+	/* If we're not adding a timer, we're done. */
+	if (timeout < 0)
+		return true;
+
+	EV_SET(&ev, 1, EVFILT_TIMER, (EV_ADD | EV_ONESHOT), 0, timeout, 0);
+	if (kevent(actx->timerfd, &ev, 1, NULL, 0, NULL) < 0)
+	{
+		actx_error(actx, "setting kqueue timer to %ld: %m", timeout);
+		return false;
+	}
+
+	EV_SET(&ev, actx->timerfd, EVFILT_READ, EV_ADD, 0, 0, 0);
+	if (kevent(actx->mux, &ev, 1, NULL, 0, NULL) < 0)
+	{
+		actx_error(actx, "adding kqueue timer to multiplexer: %m");
 		return false;
 	}
 
-- 
2.34.1

0003-oauth-Disallow-synchronous-DNS-in-libcurl.patch (application/octet-stream)
From c2e098c592a8f2d2c3d8f12e82b2736a630ca282 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 24 Feb 2025 15:02:01 -0800
Subject: [PATCH 3/5] oauth: Disallow synchronous DNS in libcurl

There is concern that a blocking DNS lookup in libpq could stall a
backend process (say, via FDW). Since there's currently no strong
evidence that synchronous DNS is a popular option, disallow it entirely
rather than warning at configure time. We can revisit if anyone
complains.

Per query from Andres Freund.

Discussion: https://postgr.es/m/p4bd7mn6dxr2zdak74abocyltpfdxif4pxqzixqpxpetjwt34h%40qc6jgfmoddvq
---
 config/programs.m4 | 10 +++++-----
 configure          | 14 +++++---------
 meson.build        | 18 ++++++------------
 3 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 061b13376ac..0a07feb37cc 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -316,7 +316,7 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
               [Define to 1 if curl_global_init() is guaranteed to be thread-safe.])
   fi
 
-  # Warn if a thread-friendly DNS resolver isn't built.
+  # Fail if a thread-friendly DNS resolver isn't built.
   AC_CACHE_CHECK([for curl support for asynchronous DNS], [pgac_cv__libcurl_async_dns],
   [AC_RUN_IFELSE([AC_LANG_PROGRAM([
 #include <curl/curl.h>
@@ -332,10 +332,10 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
   [pgac_cv__libcurl_async_dns=yes],
   [pgac_cv__libcurl_async_dns=no],
   [pgac_cv__libcurl_async_dns=unknown])])
-  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
-    AC_MSG_WARN([
+  if test x"$pgac_cv__libcurl_async_dns" = xno ; then
+    AC_MSG_ERROR([
 *** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs.])
+*** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
+*** to use it with libpq.])
   fi
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 93fddd69981..559f535f5cd 100755
--- a/configure
+++ b/configure
@@ -12493,7 +12493,7 @@ $as_echo "#define HAVE_THREADSAFE_CURL_GLOBAL_INIT 1" >>confdefs.h
 
   fi
 
-  # Warn if a thread-friendly DNS resolver isn't built.
+  # Fail if a thread-friendly DNS resolver isn't built.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl support for asynchronous DNS" >&5
 $as_echo_n "checking for curl support for asynchronous DNS... " >&6; }
 if ${pgac_cv__libcurl_async_dns+:} false; then :
@@ -12535,15 +12535,11 @@ fi
 fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv__libcurl_async_dns" >&5
 $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
-  if test x"$pgac_cv__libcurl_async_dns" != xyes ; then
-    { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING:
-*** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs." >&5
-$as_echo "$as_me: WARNING:
+  if test x"$pgac_cv__libcurl_async_dns" = xno ; then
+    as_fn_error $? "
 *** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs." >&2;}
+*** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
+*** to use it with libpq." "$LINENO" 5
   fi
 
 fi
diff --git a/meson.build b/meson.build
index 13c13748e5d..b6daa5b7040 100644
--- a/meson.build
+++ b/meson.build
@@ -909,9 +909,7 @@ if not libcurlopt.disabled()
       cdata.set('HAVE_THREADSAFE_CURL_GLOBAL_INIT', 1)
     endif
 
-    # Warn if a thread-friendly DNS resolver isn't built.
-    libcurl_async_dns = false
-
+    # Fail if a thread-friendly DNS resolver isn't built.
     if not meson.is_cross_build()
       r = cc.run('''
         #include <curl/curl.h>
@@ -931,16 +929,12 @@ if not libcurlopt.disabled()
       )
 
       assert(r.compiled())
-      if r.returncode() == 0
-        libcurl_async_dns = true
-      endif
-    endif
-
-    if not libcurl_async_dns
-      warning('''
+      if r.returncode() != 0
+        error('''
 *** The installed version of libcurl does not support asynchronous DNS
-*** lookups. Connection timeouts will not be honored during DNS resolution,
-*** which may lead to hangs in client programs.''')
+*** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
+*** to use it with libpq.''')
+      endif
     endif
   endif
 
-- 
2.34.1

0004-oauth-Improve-validator-docs-on-interruptibility.patch (application/octet-stream)
From d9f12352eec4002d30aa397dd3921f4340401c08 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Tue, 25 Feb 2025 07:42:43 -0800
Subject: [PATCH 4/5] oauth: Improve validator docs on interruptibility

Andres pointed out that EINTR handling is inadequate for real-world use
cases. Direct module writers to our wait APIs instead.

Discussion: https://postgr.es/m/p4bd7mn6dxr2zdak74abocyltpfdxif4pxqzixqpxpetjwt34h%40qc6jgfmoddvq
---
 doc/src/sgml/oauth-validators.sgml | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/doc/src/sgml/oauth-validators.sgml b/doc/src/sgml/oauth-validators.sgml
index 356f11d3bd8..704089dd7b3 100644
--- a/doc/src/sgml/oauth-validators.sgml
+++ b/doc/src/sgml/oauth-validators.sgml
@@ -209,11 +209,13 @@
       <para>
        Modules must remain interruptible by signals so that the server can
        correctly handle authentication timeouts and shutdown signals from
-       <application>pg_ctl</application>. For example, a module receiving
-       <symbol>EINTR</symbol>/<symbol>EAGAIN</symbol> from a blocking call
-       should call <function>CHECK_FOR_INTERRUPTS()</function> before retrying.
-       The same should be done during any long-running loops. Failure to follow
-       this guidance may result in unresponsive backend sessions.
+       <application>pg_ctl</application>. For example, blocking calls on sockets
+       should generally be replaced with code that handles both socket events
+       and interrupts without races (see <function>WaitLatchOrSocket()</function>,
+       <function>WaitEventSetWait()</function>, et al), and long-running loops
+       should periodically call <function>CHECK_FOR_INTERRUPTS()</function>.
+       Failure to follow this guidance may result in unresponsive backend
+       sessions.
       </para>
      </listitem>
     </varlistentry>
-- 
2.34.1

0005-oauth-Simplify-copy-of-PGoauthBearerRequest.patch (application/octet-stream)
From b5beb488b7727a0ec88401833f7d08a37438bbf4 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 24 Feb 2025 15:43:09 -0800
Subject: [PATCH 5/5] oauth: Simplify copy of PGoauthBearerRequest

Follow-up to 03366b61d. Since there are no more const members in the
PGoauthBearerRequest struct, the previous memcpy() can be replaced with
simple assignment.
---
 src/interfaces/libpq/fe-auth-oauth.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index fb1e9a1a8aa..cf1a25e2ccc 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -781,7 +781,7 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 			goto fail;
 		}
 
-		memcpy(request_copy, &request, sizeof(request));
+		*request_copy = request;
 
 		conn->async_auth = run_user_oauth_flow;
 		conn->cleanup_async_auth = cleanup_user_oauth_flow;
-- 
2.34.1

#245Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#243)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Mar 7, 2025 at 9:57 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

2) macOS's EVFILT_TIMER implementation seems to be different from the
other BSDs. On Mac, when you re-add a timer to a kqueue, any existing
timer-fired events for it are not cleared out and the kqueue might
remain readable. This breaks a postcondition of our set_timer()
function, which is that new timeouts are supposed to completely
replace previous timeouts.

I don't see that behaviour on my Mac with a simple program, and that
seems like it couldn't possibly be intended. Hmm... <browses source
code painfully> I wonder if this atomic generation scheme has a hole
in it, under concurrency...

https://github.com/apple-oss-distributions/xnu/blob/8d741a5de7ff4191bf97d57b9f54c2f6d4a15585/bsd/kern/kern_event.c#L1661

The code on the other OSes just dequeues it when reprogramming the
timer, which involves a lock and no doubt a few more cycles, and is
clearly not quite as exciting but ...

#246Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#245)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 6, 2025 at 9:13 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I don't see that behaviour on my Mac with a simple program, and that
seems like it couldn't possibly be intended.

What version of macOS?

Just to make sure I'm not chasing ghosts, I've attached my test
program. Here are my CI results for running it:

= FreeBSD =

[ 6 us] timer is set
[ 1039 us] kqueue is readable
[ 1050 us] timer is reset
[ 1052 us] kqueue is not readable

= NetBSD =

[ 3 us] timer is set
[ 14993 us] kqueue is readable
[ 15000 us] timer is reset
[ 15002 us] kqueue is not readable

= OpenBSD =

[ 24 us] timer is set
[ 19660 us] kqueue is readable
[ 19709 us] timer is reset
[ 19712 us] kqueue is not readable

= macOS Sonoma =

[ 4 us] timer is set
[ 1282 us] kqueue is readable
[ 1286 us] timer is reset
[ 1287 us] kqueue is still readable

--Jacob

Attachments:

kqueue_test.c (application/octet-stream)
#247Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#246)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sat, Mar 8, 2025 at 6:31 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

On Thu, Mar 6, 2025 at 9:13 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I don't see that behaviour on my Mac with a simple program, and that
seems like it couldn't possibly be intended.

What version of macOS?

Just to make sure I'm not chasing ghosts, I've attached my test
program. Here are my CI results for running it:

Ah, right, yeah I see that here too. I thought you were saying that
kevent() could report an already triggered alarm even though we'd
replaced it (it doesn't) but of course you meant poll(kq) as libpq
does.

I believe I know exactly why: kqueues are considered readable (by
poll/select/other kqueues) if there are any events queued[1]. Apple's
EVFILT_TIMER implementation is doing that trick[2] where it leaves
them queued, but filt_timerprocess() filters them out if its own
private _FIRED flag isn't set, so kevent() itself won't wake up or
return them. That trick doesn't survive nesting. I think I would
call that a bug. (I think I would keep the atomic CAS piece -- it
means you don't have to drain the timer callout synchronously when
reprogramming it which is a cool trick, but I think they overshot when
they left the knote queued.)

Maybe just do the delete-and-add in one call?

EV_SET(&ev[0], 1, EVFILT_TIMER, EV_DELETE, 0, 0, 0);
EV_SET(&ev[1], 1, EVFILT_TIMER, EV_ADD | EV_ONESHOT, 0, timeout, 0);
if (kevent(kq, &ev[0], 2, NULL, 0, NULL) < 0)

[1]: https://github.com/apple-oss-distributions/xnu/blob/8d741a5de7ff4191bf97d57b9f54c2f6d4a15585/bsd/kern/kern_event.c#L1078
[2]: https://github.com/apple-oss-distributions/xnu/blob/8d741a5de7ff4191bf97d57b9f54c2f6d4a15585/bsd/kern/kern_event.c#L1661

#248Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#247)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Mar 7, 2025 at 1:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:

I believe I know exactly why: kqueues are considered readable (by
poll/select/other kqueues) if there are any events queued[1]. Apple's
EVFILT_TIMER implementation is doing that trick[2] where it leaves
them queued, but filt_timerprocess() filters them out if its own
private _FIRED flag isn't set, so kevent() itself won't wake up or
return them. That trick doesn't survive nesting. I think I would
call that a bug.

Bleh. Thank you for the analysis!

Maybe just do the delete-and-add in one call?

EV_SET(&ev[0], 1, EVFILT_TIMER, EV_DELETE, 0, 0, 0);
EV_SET(&ev[1], 1, EVFILT_TIMER, EV_ADD | EV_ONESHOT, 0, timeout, 0);
if (kevent(kq, &ev[0], 2, NULL, 0, NULL) < 0)

I think that requires me to copy the EV_RECEIPT handling from
register_socket(), to make sure an ENOENT is correctly ignored on
delete but doesn't mask failures from the addition. Do you prefer that
to the separate calls? (Or, better yet, is it easier than I'm making
it?)

Thanks!
--Jacob

#249Nazir Bilal Yavuz
byavuz81@gmail.com
In reply to: Jacob Champion (#248)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

I just wanted to report that the 'oauth_validator/t/001_server.pl'
test failed on FreeBSD in one of my local CI runs [1]. I looked at the
thread but could not find the same error report; if this is already
known, please excuse me.

HEAD was at 3943f5cff6 and there were no other changes. Sharing the
failure here for visibility:

[11:09:56.548] stderr:
[11:09:56.548] # Failed test 'stress-async: stdout matches'
[11:09:56.548] # at
/tmp/cirrus-ci-build/src/test/modules/oauth_validator/t/001_server.pl
line 409.
[11:09:56.548] # ''
[11:09:56.548] # doesn't match '(?^:connection succeeded)'
[11:09:56.548] # Failed test 'stress-async: stderr matches'
[11:09:56.548] # at
/tmp/cirrus-ci-build/src/test/modules/oauth_validator/t/001_server.pl
line 410.
[11:09:56.548] # '[libcurl] * Host localhost:39251
was resolved.
[11:09:56.548] # [libcurl] * IPv6: ::1
[11:09:56.548] # [libcurl] * IPv4: 127.0.0.1
[11:09:56.548] # [libcurl] * Trying [::1]:39251...
[11:09:56.548] # [libcurl] * Immediate connect fail for ::1: Connection refused
[11:09:56.548] # [libcurl] * Trying 127.0.0.1:39251...
[11:09:56.548] # [libcurl] * Connected to localhost (127.0.0.1) port 39251
[11:09:56.548] # [libcurl] * using HTTP/1.x
[11:09:56.548] # [libcurl] > GET
/param/.well-known/openid-configuration HTTP/1.1
[11:09:56.548] # [libcurl] > Host: localhost:39251
[11:09:56.548] # [libcurl] >
[11:09:56.548] # [libcurl] * Request completely sent off
[11:09:56.548] # [libcurl] * HTTP 1.0, assume close after body
[11:09:56.548] # [libcurl] < HTTP/1.0 200 OK
[11:09:56.548] # [libcurl] < Server: BaseHTTP/0.6 Python/3.11.11
[11:09:56.548] # [libcurl] < Date: Mon, 17 Mar 2025 11:09:55 GMT
[11:09:56.548] # [libcurl] < Content-Type: application/json
[11:09:56.548] # [libcurl] < Content-Length: 400
[11:09:56.548] # [libcurl] <
[11:09:56.548] # [libcurl] < {"issuer":
"http://localhost:39251/param", "token_endpoint":
"http://localhost:39251/param/token", "device_authorization_endpoint":
"http://localhost:39251/param/authorize", "response_types_supported":
["token"], "subject_types_supported": ["public"],
"id_token_signing_alg_values_supported": ["RS256"],
"grant_types_supported": ["authorization_code",
"urn:ietf:params:oauth:grant-type:device_code"]}
[11:09:56.548] # [libcurl] * shutting down connection #0
[11:09:56.548] # [libcurl] * Hostname localhost was found in DNS cache
[11:09:56.548] # [libcurl] * Trying [::1]:39251...
[11:09:56.548] # [libcurl] * Immediate connect fail for ::1: Connection refused
[11:09:56.548] # [libcurl] * Trying 127.0.0.1:39251...
[11:09:56.548] # [libcurl] * Connected to localhost (127.0.0.1) port 39251
[11:09:56.548] # [libcurl] * using HTTP/1.x
[11:09:56.548] # [libcurl] > POST /param/authorize HTTP/1.1
[11:09:56.548] # [libcurl] > Host: localhost:39251
[11:09:56.548] # [libcurl] > Content-Length: 92
[11:09:56.548] # [libcurl] > Content-Type: application/x-www-form-urlencoded
[11:09:56.548] # [libcurl] >
[11:09:56.548] # [libcurl] >
scope=openid+postgres&client_id=eyJpbnRlcnZhbCI6MSwicmV0cmllcyI6MSwic3RhZ2UiOiJhbGwifQ%3D%3D
[11:09:56.548] # [libcurl] * upload completely sent off: 92 bytes
[11:09:56.548] # [libcurl] * HTTP 1.0, assume close after body
[11:09:56.548] # [libcurl] < HTTP/1.0 200 OK
[11:09:56.548] # [libcurl] < Server: BaseHTTP/0.6 Python/3.11.11
[11:09:56.548] # [libcurl] < Date: Mon, 17 Mar 2025 11:09:55 GMT
[11:09:56.548] # [libcurl] < Content-Type: application/json
[11:09:56.548] # [libcurl] < Content-Length: 132
[11:09:56.548] # [libcurl] <
[11:09:56.548] # [libcurl] < {"device_code": "postgres", "user_code":
"postgresuser", "verification_uri": "https://example.com/",
"expires_in": 5, "interval": 1}
[11:09:56.548] # [libcurl] * shutting down connection #1
[11:09:56.548] # [libcurl] * Hostname localhost was found in DNS cache
[11:09:56.548] # [libcurl] * Trying [::1]:39251...
[11:09:56.548] # [libcurl] * Connected to localhost (::1) port 39251
[11:09:56.548] # [libcurl] * using HTTP/1.x
[11:09:56.548] # [libcurl] > POST /param/token HTTP/1.1
[11:09:56.548] # [libcurl] > Host: localhost:39251
[11:09:56.548] # [libcurl] > Content-Length: 157
[11:09:56.548] # [libcurl] > Content-Type: application/x-www-form-urlencoded
[11:09:56.548] # [libcurl] >
[11:09:56.548] # [libcurl] >
device_code=postgres&grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&client_id=eyJpbnRlcnZhbCI6MSwicmV0cmllcyI6MSwic3RhZ2UiOiJhbGwifQ%3D%3D
[11:09:56.548] # [libcurl] * upload completely sent off: 157 bytes
[11:09:56.548] # [libcurl] * Received HTTP/0.9 when not allowed
[11:09:56.548] # [libcurl] * closing connection #2
[11:09:56.548] # connection to database failed: connection to server
on socket "/tmp/xZKYtq40nL/.s.PGSQL.21400" failed: failed to obtain
access token: Unsupported protocol (libcurl: Received HTTP/0.9 when
not allowed)
[11:09:56.548] # '
[11:09:56.548] # matches '(?^:connection to database failed)'
[11:09:56.548] # Looks like you failed 2 tests of 121.
[11:09:56.548]
[11:09:56.548] (test program exited with status code 2)

[1]: https://cirrus-ci.com/task/4621590844932096

--
Regards,
Nazir Bilal Yavuz
Microsoft

#250Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Nazir Bilal Yavuz (#249)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Mar 17, 2025 at 4:37 AM Nazir Bilal Yavuz <byavuz81@gmail.com> wrote:

Hi,

I just wanted to report that the 'oauth_validator/t/001_server.pl'
test failed on FreeBSD in one of my local CI runs [1]. I looked at the
thread but could not find the same error report; if this is already
known, please excuse me.

Thanks for the report! Yes, this looks like the issue that NetBSD was having:

[11:09:56.548] # [libcurl] * Trying [::1]:39251...
[11:09:56.548] # [libcurl] * Connected to localhost (::1) port 39251

Curl should not have connected to ::1 (the test server isn't listening
on IPv6). Whatever is talking on that port doesn't understand HTTP,
and we later fail with the "HTTP/0.9" error -- a slightly confusing
way to describe a protocol violation.

0001 will fix that. I think we should get that and 0002 in, ASAP. (And
the others.) Thomas has shown me a side quest to get rid of the second
kqueue instance, but so far that is not bearing fruit and we shouldn't
wait on it.

Thanks again!
--Jacob

#251Thomas Munro
thomas.munro@gmail.com
In reply to: Jacob Champion (#250)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Mar 18, 2025 at 4:08 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

0001 will fix that. I think we should get that and 0002 in, ASAP. (And
the others.)

All pushed (wasn't sure if Daniel was going to but once I got tangled
up in all that kqueue stuff he probably quite reasonably assumed that
I would :-)).

Thomas has shown me a side quest to get rid of the second
kqueue instance, but so far that is not bearing fruit and we shouldn't
wait on it.

Cool, thanks for looking into it anyway. (Feel free to post a
nonworking patch with a got-stuck-here-because problem statement...)

#252Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#251)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thomas Munro <thomas.munro@gmail.com> writes:

All pushed

You may have noticed it already, but indri reports that this
printf-like call isn't right:

fe-auth-oauth-curl.c:1392:49: error: data argument not used by format string [-Werror,-Wformat-extra-args]
1392 | actx_error(actx, "deleting kqueue timer: %m", timeout);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
fe-auth-oauth-curl.c:324:59: note: expanded from macro 'actx_error'
324 | appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
| ~~~ ^

"timeout" isn't being used anymore.

regards, tom lane

#253Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#252)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Mar 19, 2025 at 5:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

fe-auth-oauth-curl.c:1392:49: error: data argument not used by format string [-Werror,-Wformat-extra-args]
1392 | actx_error(actx, "deleting kqueue timer: %m", timeout);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
fe-auth-oauth-curl.c:324:59: note: expanded from macro 'actx_error'
324 | appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
| ~~~ ^

"timeout" isn't being used anymore.

Yeah. Thanks, fixed.

Now I'm wondering about teaching CI to fail on compiler warnings, ie
not just the special warnings task but also in the Mac etc builds.
The reason it doesn't is that it's sort of annoying to stop the
main tests because of a format string snafu, but we must be able to
put a new step at the end, after all tests, that scans the build logs
for warnings and then raises the alarm...

#254Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#253)
Re: [PoC] Federated Authn/z with OAUTHBEARER

BTW, I was pretty seriously disheartened just now to realize that
this feature was implemented by making libpq depend on libcurl.
I'd misread the relevant commit messages to say that libcurl was
just being used as test infrastructure; but nope, it's a genuine
build and runtime dependency. I wonder how much big-picture
thinking went into that. I can see at least two objections:

* This represents a pretty large expansion of dependency footprint,
not just for us but for the umpteen hundred packages that depend on
libpq. libcurl alone maybe wouldn't be so bad, but have you looked
at libcurl's dependencies? On RHEL8,

$ ldd /usr/lib64/libcurl.so.4.5.0
linux-vdso.so.1 (0x00007fffd3075000)
libnghttp2.so.14 => /lib64/libnghttp2.so.14 (0x00007f992097a000)
libidn2.so.0 => /lib64/libidn2.so.0 (0x00007f992075c000)
libssh.so.4 => /lib64/libssh.so.4 (0x00007f99204ec000)
libpsl.so.5 => /lib64/libpsl.so.5 (0x00007f99202db000)
libssl.so.1.1 => /lib64/libssl.so.1.1 (0x00007f9920046000)
libcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007f991fb5b000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f991f906000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f991f61b000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f991f404000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f991f200000)
libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x00007f991efb1000)
liblber-2.4.so.2 => /lib64/liblber-2.4.so.2 (0x00007f991eda1000)
libbrotlidec.so.1 => /lib64/libbrotlidec.so.1 (0x00007f991eb94000)
libz.so.1 => /lib64/libz.so.1 (0x00007f991e97c000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f991e75c000)
libc.so.6 => /lib64/libc.so.6 (0x00007f991e386000)
libunistring.so.2 => /lib64/libunistring.so.2 (0x00007f991e005000)
librt.so.1 => /lib64/librt.so.1 (0x00007f991ddfd000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9920e30000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f991dbf9000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f991d9e8000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f991d7e4000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f991d5cc000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007f991d3ae000)
libm.so.6 => /lib64/libm.so.6 (0x00007f991d02c000)
libbrotlicommon.so.1 => /lib64/libbrotlicommon.so.1 (0x00007f991ce0b000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f991cbe0000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f991c9b7000)
libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f991c733000)

* Given libcurl's very squishy portfolio:

libcurl is a free and easy-to-use client-side URL transfer library, supporting
FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, IMAP,
SMTP, POP3 and RTSP. libcurl supports SSL certificates, HTTP POST, HTTP PUT,
FTP uploading, HTTP form based upload, proxies, cookies, user+password
authentication (Basic, Digest, NTLM, Negotiate, Kerberos4), file transfer
resume, http proxy tunneling and more.

it's not exactly hard to imagine them growing a desire to handle
"postgresql://" URLs, which they would surely do by invoking libpq.
Then we'll have circular build dependencies and circular runtime
dependencies, not to mention inter-library recursion at runtime.

This is not quite a hill that I wish to die on, but I will
flatly predict that we will regret this.

regards, tom lane

#255Bruce Momjian
bruce@momjian.us
In reply to: Tom Lane (#254)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Mar 19, 2025 at 12:57:29AM -0400, Tom Lane wrote:

* Given libcurl's very squishy portfolio:

libcurl is a free and easy-to-use client-side URL transfer library, supporting
FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, IMAP,
SMTP, POP3 and RTSP. libcurl supports SSL certificates, HTTP POST, HTTP PUT,
FTP uploading, HTTP form based upload, proxies, cookies, user+password
authentication (Basic, Digest, NTLM, Negotiate, Kerberos4), file transfer
resume, http proxy tunneling and more.

it's not exactly hard to imagine them growing a desire to handle
"postgresql://" URLs, which they would surely do by invoking libpq.
Then we'll have circular build dependencies and circular runtime
dependencies, not to mention inter-library recursion at runtime.

This is not quite a hill that I wish to die on, but I will
flatly predict that we will regret this.

I regularly see curl security fixes in my Debian updates, so there is a
real risk that any serious curl bug could also make Postgres
vulnerable. I might be willing to die on that hill.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#256Daniel Gustafsson
daniel@yesql.se
In reply to: Tom Lane (#254)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 19 Mar 2025, at 05:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:

BTW, I was pretty seriously disheartened just now to realize that
this feature was implemented by making libpq depend on libcurl.
I'd misread the relevant commit messages to say that libcurl was
just being used as test infrastructure; but nope, it's a genuine
build and runtime dependency. I wonder how much big-picture
thinking went into that.

A considerable amount.

libcurl is not a dependency for OAuth support in libpq; the support was
designed to be extensible such that clients can hook in their own flow
implementations. This part does not require libcurl. It is, however, a
dependency for the RFC 8628 implementation, which is included when building
with --with-libcurl, in order to ship something that can be used out of the
box (for actual connections *and* testing) without clients being forced to
provide their own implementation.

This obviously means that the RFC 8628 part could be moved to contrib/, but I
fear we wouldn't make life easier for packagers by doing that.

* Given libcurl's very squishy portfolio:
...
it's not exactly hard to imagine them growing a desire to handle
"postgresql://" URLs,

While there is no guarantee that such a pull request won't be submitted,
speaking as an (admittedly not very active at the moment) libcurl maintainer I
consider it highly unlikely that it would be accepted. A Postgres connection
does not fit into what libcurl/curl is and wants to be.

--
Daniel Gustafsson

#257Andres Freund
andres@anarazel.de
In reply to: Thomas Munro (#253)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-03-19 17:34:18 +1300, Thomas Munro wrote:

On Wed, Mar 19, 2025 at 5:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

fe-auth-oauth-curl.c:1392:49: error: data argument not used by format string [-Werror,-Wformat-extra-args]
1392 | actx_error(actx, "deleting kqueue timer: %m", timeout);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
fe-auth-oauth-curl.c:324:59: note: expanded from macro 'actx_error'
324 | appendPQExpBuffer(&(ACTX)->errbuf, libpq_gettext(FMT), ##__VA_ARGS__)
| ~~~ ^

"timeout" isn't being used anymore.

Yeah. Thanks, fixed.

Now I'm wondering about teaching CI to fail on compiler warnings, ie
not just the special warnings task but also in the Mac etc builds.
The reason it doesn't is that it's sort of annoying to stop the
main tests because of a format string snafu, but we must be able to
put a new step at the end, after all tests, that scans the build logs
for warnings and then raises the alarm...

The best way would probably be to tee the output of the build to a log file
and then have a script at the end to check for errors in that.

Greetings,

Andres Freund

#258Thomas Munro
thomas.munro@gmail.com
In reply to: Daniel Gustafsson (#256)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 20, 2025 at 2:38 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 19 Mar 2025, at 05:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:

BTW, I was pretty seriously disheartened just now to realize that
this feature was implemented by making libpq depend on libcurl.
I'd misread the relevant commit messages to say that libcurl was
just being used as test infrastructure; but nope, it's a genuine
build and runtime dependency. I wonder how much big-picture
thinking went into that.

A considerable amount.

libcurl is not a dependency for OAuth support in libpq; the support was
designed to be extensible such that clients can hook in their own flow
implementations. This part does not require libcurl. It is, however, a
dependency for the RFC 8628 implementation, which is included when building
with --with-libcurl, in order to ship something that can be used out of the
box (for actual connections *and* testing) without clients being forced to
provide their own implementation.

This obviously means that the RFC 8628 part could be moved to contrib/, but I
fear we wouldn't make life easier for packagers by doing that.

How feasible/fragile/weird would it be to dlopen() it on demand?
Looks like it'd take ~20 function pointers:

U curl_easy_cleanup
U curl_easy_escape
U curl_easy_getinfo
U curl_easy_init
U curl_easy_setopt
U curl_easy_strerror
U curl_free
U curl_global_init
U curl_multi_add_handle
U curl_multi_cleanup
U curl_multi_info_read
U curl_multi_init
U curl_multi_remove_handle
U curl_multi_setopt
U curl_multi_socket_action
U curl_multi_socket_all
U curl_multi_strerror
U curl_slist_append
U curl_slist_free_all
U curl_version_info

#259Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#258)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thomas Munro <thomas.munro@gmail.com> writes:

On Thu, Mar 20, 2025 at 2:38 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 19 Mar 2025, at 05:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:

BTW, I was pretty seriously disheartened just now to realize that
this feature was implemented by making libpq depend on libcurl.

How feasible/fragile/weird would it be to dlopen() it on demand?

FWIW, that would not really move the needle one bit so far as
my worries are concerned. What I'm unhappy about is the very
sizable expansion of our build dependency footprint as well
as the sizable expansion of the 'package requires' footprint.
The fact that the new dependencies are mostly indirect doesn't
soften that blow at all.

To address that (without finding some less kitchen-sink-y OAuth
implementation to depend on), we'd need to shove the whole thing
into a separately-built, separately-installable package.

What I expect is likely to happen is that packagers will try to do
that themselves to avoid the dependency bloat. AFAICT our current
setup will make that quite painful for them, and in any case I
don't believe it's work we should make them do. If they fail to
do that, the burden of the extra dependencies will fall on end
users. Either way, it's not going to make us look good.

regards, tom lane

#260Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#259)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 20, 2025 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

How feasible/fragile/weird would it be to dlopen() it on demand?

FWIW, that would not really move the needle one bit so far as
my worries are concerned. What I'm unhappy about is the very
sizable expansion of our build dependency footprint as well
as the sizable expansion of the 'package requires' footprint.
The fact that the new dependencies are mostly indirect doesn't
soften that blow at all.

To address that (without finding some less kitchen-sink-y OAuth
implementation to depend on), we'd need to shove the whole thing
into a separately-built, separately-installable package.

What I expect is likely to happen is that packagers will try to do
that themselves to avoid the dependency bloat. AFAICT our current
setup will make that quite painful for them, and in any case I
don't believe it's work we should make them do. If they fail to
do that, the burden of the extra dependencies will fall on end
users. Either way, it's not going to make us look good.

It would increase the build dependencies, assuming a package
maintainer wants to enable as many features as possible, but it would
*not* increase the 'package requires' footprint, merely the 'package
suggests' footprint (as Debian calls it), and it's up to the user
whether they install suggested extra packages, no?

#261Thomas Munro
thomas.munro@gmail.com
In reply to: Thomas Munro (#260)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 20, 2025 at 11:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:

On Thu, Mar 20, 2025 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

How feasible/fragile/weird would it be to dlopen() it on demand?

. o O { There may also be security reasons to reject the idea, would
need to look into that... }

#262Tom Lane
tgl@sss.pgh.pa.us
In reply to: Thomas Munro (#260)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thomas Munro <thomas.munro@gmail.com> writes:

It would increase the build dependencies, assuming a package
maintainer wants to enable as many features as possible, but it would
*not* increase the 'package requires' footprint, merely the 'package
suggests' footprint (as Debian calls it), and it's up to the user
whether they install suggested extra packages, no?

Maybe I'm confused, but what I saw was a hard dependency on libcurl,
as well as several of its dependencies:

$ ./configure --with-libcurl
...
$ make
...
$ ldd src/interfaces/libpq/libpq.so.5.18
linux-vdso.so.1 (0x00007ffc145fe000)
libcurl.so.4 => /lib64/libcurl.so.4 (0x00007f2c2fa36000)
libm.so.6 => /lib64/libm.so.6 (0x00007f2c2f95b000)
libc.so.6 => /lib64/libc.so.6 (0x00007f2c2f600000)
libnghttp2.so.14 => /lib64/libnghttp2.so.14 (0x00007f2c2f931000)
libidn2.so.0 => /lib64/libidn2.so.0 (0x00007f2c2f910000)
libssh.so.4 => /lib64/libssh.so.4 (0x00007f2c2f89b000)
libpsl.so.5 => /lib64/libpsl.so.5 (0x00007f2c2f885000)
libssl.so.3 => /lib64/libssl.so.3 (0x00007f2c2f51a000)
libcrypto.so.3 => /lib64/libcrypto.so.3 (0x00007f2c2f000000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f2c2f82f000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f2c2ef26000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f2c2f816000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f2c2f80d000)
libldap.so.2 => /lib64/libldap.so.2 (0x00007f2c2eebf000)
liblber.so.2 => /lib64/liblber.so.2 (0x00007f2c2eead000)
libbrotlidec.so.1 => /lib64/libbrotlidec.so.1 (0x00007f2c2ee9f000)
libz.so.1 => /lib64/libz.so.1 (0x00007f2c2ee85000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2c2fb43000)
libunistring.so.2 => /lib64/libunistring.so.2 (0x00007f2c2ed00000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f2c2ecef000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f2c2ece8000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f2c2ecd4000)
libevent-2.1.so.7 => /lib64/libevent-2.1.so.7 (0x00007f2c2ec7b000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007f2c2ec5b000)
libbrotlicommon.so.1 => /lib64/libbrotlicommon.so.1 (0x00007f2c2ec38000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f2c2ec0b000)
libcrypt.so.2 => /lib64/libcrypt.so.2 (0x00007f2c2ebd1000)
libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f2c2eb35000)

I don't think that will be satisfied by 'package suggests'.
Even if it somehow manages to load, the result of trying to
use OAuth would be a segfault rather than any useful message.

regards, tom lane

#263Thomas Munro
thomas.munro@gmail.com
In reply to: Tom Lane (#262)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 20, 2025 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

It would increase the build dependencies, assuming a package
maintainer wants to enable as many features as possible, but it would
*not* increase the 'package requires' footprint, merely the 'package
suggests' footprint (as Debian calls it), and it's up to the user
whether they install suggested extra packages, no?

Maybe I'm confused, but what I saw was a hard dependency on libcurl,
as well as several of its dependencies:

I don't think that will be satisfied by 'package suggests'.
Even if it somehow manages to load, the result of trying to
use OAuth would be a segfault rather than any useful message.

I was imagining that it would just error out if you try to use that
stuff and it fails to open libcurl. Then it's up to end users: if
they want to use libpq + OAuth, they have to install both libpq5 and
libcurl packages, and if they don't their connections will just fail,
presumably with some error message explaining why. Or something like
that...

#264Bruce Momjian
bruce@momjian.us
In reply to: Thomas Munro (#263)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 20, 2025 at 11:28:50AM +1300, Thomas Munro wrote:

On Thu, Mar 20, 2025 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Thomas Munro <thomas.munro@gmail.com> writes:

It would increase the build dependencies, assuming a package
maintainer wants to enable as many features as possible, but it would
*not* increase the 'package requires' footprint, merely the 'package
suggests' footprint (as Debian calls it), and it's up to the user
whether they install suggested extra packages, no?

Maybe I'm confused, but what I saw was a hard dependency on libcurl,
as well as several of its dependencies:

I don't think that will be satisfied by 'package suggests'.
Even if it somehow manages to load, the result of trying to
use OAuth would be a segfault rather than any useful message.

I was imagining that it would just error out if you try to use that
stuff and it fails to open libcurl. Then it's up to end users: if
they want to use libpq + OAuth, they have to install both libpq5 and
libcurl packages, and if they don't their connections will just fail,
presumably with some error message explaining why. Or something like
that...

Am I understanding that curl is being used just to honor the RFC and it
is only for testing? That seems like a small reason to add such a
dependency.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#265Bruce Momjian
bruce@momjian.us
In reply to: Daniel Gustafsson (#256)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Mar 19, 2025 at 02:38:08PM +0100, Daniel Gustafsson wrote:

On 19 Mar 2025, at 05:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:

BTW, I was pretty seriously disheartened just now to realize that
this feature was implemented by making libpq depend on libcurl.
I'd misread the relevant commit messages to say that libcurl was
just being used as test infrastructure; but nope, it's a genuine
build and runtime dependency. I wonder how much big-picture
thinking went into that.

A considerable amount.

libcurl is not a dependency for OAuth support in libpq; the support was
designed to be extensible such that clients can hook in their own flow
implementations. This part does not require libcurl. It is, however, a
dependency for the RFC 8628 implementation, which is included when building
with --with-libcurl, in order to ship something that can be used out of the
box (for actual connections *and* testing) without clients being forced to
provide their own implementation.

This obviously means that the RFC 8628 part could be moved to contrib/, but I
fear we wouldn't make life easier for packagers by doing that.

I see it now ---- without having RFC 8628 built into the server, clients
have to implement it. Do we know what percentage would need to do that?
The spec:

https://datatracker.ietf.org/doc/html/rfc8628

Do we think packagers will use the --with-libcurl configure option?

It does kind of make sense for curl to handle OAUTH since curl has to
simulate a browser. I assume we can't call a shell to invoke curl from
the command line.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#266Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#265)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi all,

With the understanding that the patchset is no longer just "my" baby...

= Dependencies =

I like seeing risk/reward discussions. I agonized over the choice of
HTTP dependency, and I transitioned from an "easier" OAuth library
over to Curl early on because of the same tradeoffs.

That said... Tom, I think the dependency list you've presented is not
quite fair, because it doesn't show what libpq's dependency list was
before adding Curl. From your email, and my local Rocky 9 install, I
think these are the net-new dependencies that we (and packagers) need
to worry about:

libcurl.so.4
libnghttp2.so.14
libidn2.so.0
libssh.so.4
libpsl.so.5
libunistring.so.2
libbrotlidec.so.1
libbrotlicommon.so.1
libz.so.1

That's more than I'd like, to be perfectly honest. I'm least happy
about libssh, because we're not using SFTP but we have to pay for it.
And the Deb-alikes add librtmp, which I'm not thrilled about either.

The rest are, IMO, natural dependencies of a mature HTTP client: the
HTTP/1 and HTTP/2 engines, Punycode, the Public Suffix List, UTF
handling, and common response compression types. Those are kind of
part and parcel of communicating on the web. (If we find an HTTP
client that does all those things itself, awesome, but then we have to
ask how well they did it.)

So one question for the collective is -- putting Curl itself aside --
is having a basic-but-usable OAuth flow, out of the box, worth the
costs of a generic HTTP client? A non-trivial footprint *will* be
there, whether it's one library or several, whether we delay-load it
or not, whether we have the unused SFTP/RTMP dependencies or not. But
we could still find ways to reduce that cost for people who aren't
using it, if necessary.

= Asides =

I would also like to point out: End users opt into this by
preregistering a client ID with an OAuth issuer ID, then providing
that pair of IDs in the connection string. We will not just start
crawling the web because a server tells us to. I don't want to
downplay the additional risk of having it in the address space, but
the design goal is that vulnerabilities in the HTTP logic should not
affect users who have not explicitly consented to the use of OAuth.
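[Editor's note: concretely, the opt-in Jacob describes is the connection-string usage shown at the start of the thread. A sketch of what it might look like, with illustrative issuer and client ID values:]

```
$ psql 'host=example.org dbname=postgres
        oauth_issuer=https://oauth.example.org
        oauth_client_id=f02c6361-0635-...'
Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG
```

Without both preregistered OAuth parameters in the connection string, libpq never invokes the HTTP flow at all.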

There were some other questions/statements made upthread that I want
to clarify too:

On Wed, Mar 19, 2025 at 4:11 PM Bruce Momjian <bruce@momjian.us> wrote:

Am I understanding that curl is being used just to honor the RFC and it
is only for testing?

No. (I see you found it later, but to state clearly for the record:
it's not just for testing.) libcurl is used for the Device
Authorization flow implementation. You don't have to use Device
Authorization to use OAuth, but we don't provide any alternative flows
in-tree; you'd have to use the libpq API to insert your own flow.

I see it now ---- without having RFC 8628 built into the server,

(libpq, not the server. We do not ship server-side plugins at all, yet.)

clients
have to implement it. Do we know what percentage would need to do that?

For version 1 of the feature, Device Authorization is the only option
for our utilities (psql et al). I can't really speculate on
percentages; it depends on what percentage want to use OAuth and don't
like (or can't use) our builtin flow. Obviously the percentage goes up
to 100% if we don't provide one. Plus we lose significant testability,
plus no one can use it from psql.

Do we think packagers will use the --with-libcurl configure option?

Well, hopefully, yes. The tradeoffs of the builtin flow were chosen
explicitly so that existing clients could use it with minimal-to-no
code changes.

Thanks!
--Jacob

#267Bruce Momjian
bruce@momjian.us
In reply to: Jacob Champion (#266)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 20, 2025 at 01:33:26PM -0700, Jacob Champion wrote:

That's more than I'd like, to be perfectly honest. I'm least happy
about libssh, because we're not using SFTP but we have to pay for it.
And the Deb-alikes add librtmp, which I'm not thrilled about either.

The rest are, IMO, natural dependencies of a mature HTTP client: the
HTTP/1 and HTTP/2 engines, Punycode, the Public Suffix List, UTF
handling, and common response compression types. Those are kind of
part and parcel of communicating on the web. (If we find an HTTP
client that does all those things itself, awesome, but then we have to
ask how well they did it.)

So one question for the collective is -- putting Curl itself aside --
is having a basic-but-usable OAuth flow, out of the box, worth the
costs of a generic HTTP client? A non-trivial footprint *will* be
there, whether it's one library or several, whether we delay-load it
or not, whether we have the unused SFTP/RTMP dependencies or not. But
we could still find ways to reduce that cost for people who aren't
using it, if necessary.

One observation is that security scanning tools are going to see the
curl dependency and look at any CVEs related to it, and ask us about
them whether we are using OAuth or not.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#268Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#267)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Bruce Momjian <bruce@momjian.us> writes:

On Thu, Mar 20, 2025 at 01:33:26PM -0700, Jacob Champion wrote:

So one question for the collective is -- putting Curl itself aside --
is having a basic-but-usable OAuth flow, out of the box, worth the
costs of a generic HTTP client?

One observation is that security scanning tools are going to see the
curl dependency and look at any CVEs related to it, and ask us about
them whether we are using OAuth or not.

Yes. Also, none of this has addressed my complaint about the extent
of the build and install dependencies. Yes, simply not selecting
--with-libcurl removes the problem ... but most packagers are under
very heavy pressure to enable all features of a package.

From what's been said here, only a small minority of users are likely
to have any interest in this feature. So my answer to "is it worth
the cost" is no, and would be no even if I had a lower estimate of
the costs.

I don't have any problem with making a solution available to those
users who want it --- but I really do NOT want this to be part of
stock libpq nor done as part of the core Postgres build. I do not
think that the costs of that have been fully accounted for, especially
not the fact that almost all of those costs fall on people other than
us.

I'd like to see this moved out to some separate package that has to be
explicitly linked in and then hooks into libpq's custom-provider API.

regards, tom lane

#269Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#268)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-03-20 17:08:54 -0400, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

On Thu, Mar 20, 2025 at 01:33:26PM -0700, Jacob Champion wrote:

So one question for the collective is -- putting Curl itself aside --
is having a basic-but-usable OAuth flow, out of the box, worth the
costs of a generic HTTP client?

One observation is that security scanning tools are going to see the
curl dependency and look at any CVEs related to it, and ask us about
them whether we are using OAuth or not.

Yes. Also, none of this has addressed my complaint about the extent
of the build and install dependencies. Yes, simply not selecting
--with-libcurl removes the problem ... but most packagers are under
very heavy pressure to enable all features of a package.

How about we provide the current libpq.so without linking to curl and also a
libpq-oauth.so that has curl support? If we do it right libpq-oauth.so would
itself link to libpq.so, making libpq-oauth.so a fairly small library.

That way packagers can split libpq-oauth.so into a separate package, while
still just building once.

That'd be a bit of work on the buildsystem side, but it seems doable.

From what's been said here, only a small minority of users are likely
to have any interest in this feature. So my answer to "is it worth
the cost" is no, and would be no even if I had a lower estimate of
the costs.

I think this is likely going to be rather widely used, way more widely than
e.g. kerberos or ldap support in libpq. My understanding is that there's a
fair bit of pressure in lots of companies to centralize authentication towards
centralized systems, even for server applications.

I don't have any problem with making a solution available to those
users who want it --- but I really do NOT want this to be part of
stock libpq nor done as part of the core Postgres build. I do not
think that the costs of that have been fully accounted for, especially
not the fact that almost all of those costs fall on people other than
us.

I am on board with not having it as part of stock libpq, but I don't see what
we gain by not building it as part of postgres (if the dependencies are
available, of course).

Greetings,

Andres Freund

#270Andrew Dunstan
andrew@dunslane.net
In reply to: Andres Freund (#269)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 2025-03-20 Th 7:26 PM, Andres Freund wrote:

Hi,

On 2025-03-20 17:08:54 -0400, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

On Thu, Mar 20, 2025 at 01:33:26PM -0700, Jacob Champion wrote:

So one question for the collective is -- putting Curl itself aside --
is having a basic-but-usable OAuth flow, out of the box, worth the
costs of a generic HTTP client?

One observation is that security scanning tools are going to see the
curl dependency, look at any CVEs related to it, and ask us about them,
whether they are using OAuth or not.

Yes. Also, none of this has addressed my complaint about the extent
of the build and install dependencies. Yes, simply not selecting
--with-libcurl removes the problem ... but most packagers are under
very heavy pressure to enable all features of a package.

How about we provide the current libpq.so without linking to curl and also a
libpq-oauth.so that has curl support? If we do it right libpq-oauth.so would
itself link to libpq.so, making libpq-oauth.so a fairly small library.

That way packagers can split libpq-oauth.so into a separate package, while
still just building once.

That'd be a bit of work on the buildsystem side, but it seems doable.

That certainly seems worth exploring.

From what's been said here, only a small minority of users are likely
to have any interest in this feature. So my answer to "is it worth
the cost" is no, and would be no even if I had a lower estimate of
the costs.

I think this is likely going to be rather widely used, way more widely than
e.g. kerberos or ldap support in libpq. My understanding is that there's a
fair bit of pressure in lots of companies to centralize authentication towards
centralized systems, even for server applications.

Indeed. There is still work to do on OAUTH2 but the demand you mention
is just going to keep increasing.

I don't have any problem with making a solution available to those
users who want it --- but I really do NOT want this to be part of
stock libpq nor done as part of the core Postgres build. I do not
think that the costs of that have been fully accounted for, especially
not the fact that almost all of those costs fall on people other than
us.

I am on board with not having it as part of stock libpq, but I don't see what
we gain by not building it as part of postgres (if the dependencies are
available, of course).

+1.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com

#271Daniel Gustafsson
daniel@yesql.se
In reply to: Andrew Dunstan (#270)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 21 Mar 2025, at 13:40, Andrew Dunstan <andrew@dunslane.net> wrote:
On 2025-03-20 Th 7:26 PM, Andres Freund wrote:

How about we provide the current libpq.so without linking to curl and also a
libpq-oauth.so that has curl support? If we do it right libpq-oauth.so would
itself link to libpq.so, making libpq-oauth.so a fairly small library.

That way packagers can split libpq-oauth.so into a separate package, while
still just building once.

That'd be a bit of work on the buildsystem side, but it seems doable.

That certainly seems worth exploring.

This is being worked on.

--
Daniel Gustafsson

#272Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#271)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Mar 21, 2025 at 11:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:

How about we provide the current libpq.so without linking to curl and also a
libpq-oauth.so that has curl support? If we do it right libpq-oauth.so would
itself link to libpq.so, making libpq-oauth.so a fairly small library.

That way packagers can split libpq-oauth.so into a separate package, while
still just building once.

That'd be a bit of work on the buildsystem side, but it seems doable.

That certainly seems worth exploring.

This is being worked on.

Attached is a proof of concept, with code from Daniel and myself,
which passes the CI as a starting point.

Roughly speaking, some things to debate are
- the module API itself
- how much to duplicate from libpq vs how much to export
- is this even what you had in mind

libpq-oauth.so is dlopen'd when needed. If it's not found or it
doesn't have the right symbols, builtin OAuth will not happen. Right
now we have an SO version of 1; maybe we want to remove the SO version
entirely to better indicate that it shouldn't be linked?

Two symbols are exported for the async authentication callbacks. Since
the module understands PGconn internals, maybe we could simplify this
to a single callback that manipulates the connection directly.

To keep the diff small to start, the current patch probably exports
too much. I think appendPQExpBufferVA makes sense, considering we
export much of the PQExpBuffer API already, but I imagine we won't
want to expose the pg_g_threadlock symbol. (libpq could maybe push
that pointer into the libpq-oauth module at load time, instead of
having the module pull it.) And we could probably go either way with
the PQauthDataHook; I prefer having a getter and setter for future
flexibility, but it would be simpler to just export the hook directly.

The following functions are duplicated from libpq:
- libpq_block_sigpipe
- libpq_reset_sigpipe
- libpq_binddomain
- libpq_[n]gettext
- libpq_append_conn_error
- oauth_unsafe_debugging_enabled

Those don't seem too bad to me, though maybe there's a good way to
deduplicate. But i18n needs further work. It builds right now, but I
don't think it works yet.

WDYT?

Thanks,
--Jacob

Attachments:

0001-WIP-split-Device-Authorization-flow-into-dlopen-d-mo.patch (application/octet-stream)
From 47fc34de68fe61b796f532f755a17331dff111e3 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH] WIP: split Device Authorization flow into dlopen'd module

See notes on mailing list.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 meson.build                                   |  12 +-
 src/interfaces/Makefile                       |   9 +
 src/interfaces/libpq-oauth/Makefile           |  53 +++++
 src/interfaces/libpq-oauth/exports.txt        |   3 +
 .../fe-auth-oauth-curl.c                      | 187 +++++++++++++++++-
 .../libpq-oauth/fe-auth-oauth-curl.h          |  23 +++
 src/interfaces/libpq-oauth/meson.build        |  64 ++++++
 src/interfaces/libpq-oauth/po/meson.build     |   3 +
 src/interfaces/libpq/Makefile                 |   4 -
 src/interfaces/libpq/exports.txt              |   3 +
 src/interfaces/libpq/fe-auth-oauth.c          |  52 ++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   4 +-
 src/interfaces/libpq/fe-auth.h                |   3 -
 src/interfaces/libpq/libpq-fe.h               |   1 +
 src/interfaces/libpq/meson.build              |   4 -
 15 files changed, 389 insertions(+), 36 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 rename src/interfaces/{libpq => libpq-oauth}/fe-auth-oauth-curl.c (94%)
 create mode 100644 src/interfaces/libpq-oauth/fe-auth-oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 create mode 100644 src/interfaces/libpq-oauth/po/meson.build

diff --git a/meson.build b/meson.build
index 7cf518a2765..69e91529259 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -3136,17 +3137,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depends on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..322a498823d 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,16 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..d623a4157e6
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,53 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization oauth support"
+NAME = pq-oauth
+SO_MAJOR_VERSION = 1
+SO_MINOR_VERSION = $(MAJORVERSION)
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	fe-auth-oauth-curl.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = -lcurl
+SHLIB_PREREQS = submake-libpq
+
+SHLIB_EXPORTS = exports.txt
+
+PKG_CONFIG_REQUIRES_PRIVATE = libpq
+#
+# Make dependencies on pg_config_paths.h visible in all builds.
+fe-auth-oauth-curl.o: fe-auth-oauth-curl.c $(top_builddir)/src/port/pg_config_paths.h
+
+$(top_builddir)/src/port/pg_config_paths.h:
+	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
+
+all: all-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..ac9333763c4
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,3 @@
+# src/interfaces/libpq-oauth/exports.txt
+pg_fe_run_oauth_flow      1
+pg_fe_cleanup_oauth_flow  2
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/fe-auth-oauth-curl.c
similarity index 94%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/fe-auth-oauth-curl.c
index 9e0e8a9f2be..556e436ee93 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/fe-auth-oauth-curl.c
@@ -29,8 +29,10 @@
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
+#include "fe-auth-oauth-curl.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -230,6 +232,173 @@ struct async_ctx
 	bool		debugging;		/* can we give unsafe developer assistance? */
 };
 
+#ifdef ENABLE_NLS
+
+static void
+libpq_binddomain(void)
+{
+	/*
+	 * At least on Windows, there are gettext implementations that fail if
+	 * multiple threads call bindtextdomain() concurrently.  Use a mutex and
+	 * flag variable to ensure that we call it just once per process.  It is
+	 * not known that similar bugs exist on non-Windows platforms, but we
+	 * might as well do it the same way everywhere.
+	 */
+	static volatile bool already_bound = false;
+	static pthread_mutex_t binddomain_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+	if (!already_bound)
+	{
+		/* bindtextdomain() does not preserve errno */
+#ifdef WIN32
+		int			save_errno = GetLastError();
+#else
+		int			save_errno = errno;
+#endif
+
+		(void) pthread_mutex_lock(&binddomain_mutex);
+
+		if (!already_bound)
+		{
+			const char *ldir;
+
+			/*
+			 * No relocatable lookup here because the calling executable could
+			 * be anywhere
+			 */
+			ldir = getenv("PGLOCALEDIR");
+			if (!ldir)
+				ldir = LOCALEDIR;
+			bindtextdomain(PG_TEXTDOMAIN("libpq"), ldir);
+			already_bound = true;
+		}
+
+		(void) pthread_mutex_unlock(&binddomain_mutex);
+
+#ifdef WIN32
+		SetLastError(save_errno);
+#else
+		errno = save_errno;
+#endif
+	}
+}
+
+char *
+libpq_gettext(const char *msgid)
+{
+	libpq_binddomain();
+	return dgettext(PG_TEXTDOMAIN("libpq"), msgid);
+}
+
+char *
+libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
+{
+	libpq_binddomain();
+	return dngettext(PG_TEXTDOMAIN("libpq"), msgid, msgid_plural, n);
+}
+
+#endif							/* ENABLE_NLS */
+
+static void __libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  A newline is automatically appended; the
+ * format should not end with a newline.
+ */
+static void
+__libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(&conn->errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(&conn->errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(&conn->errorMessage, '\n');
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+static bool
+__oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+static int
+__pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+static void
+__pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
 /*
  * Tears down the Curl handles and frees the async_ctx.
  */
@@ -252,7 +421,7 @@ free_async_ctx(PGconn *conn, struct async_ctx *actx)
 		CURLMcode	err = curl_multi_remove_handle(actx->curlm, actx->curl);
 
 		if (err)
-			libpq_append_conn_error(conn,
+			__libpq_append_conn_error(conn,
 									"libcurl easy handle removal failed: %s",
 									curl_multi_strerror(err));
 	}
@@ -272,7 +441,7 @@ free_async_ctx(PGconn *conn, struct async_ctx *actx)
 		CURLMcode	err = curl_multi_cleanup(actx->curlm);
 
 		if (err)
-			libpq_append_conn_error(conn,
+			__libpq_append_conn_error(conn,
 									"libcurl multi handle cleanup failed: %s",
 									curl_multi_strerror(err));
 	}
@@ -2556,7 +2725,7 @@ initialize_curl(PGconn *conn)
 		goto done;
 	else if (init_successful == PG_BOOL_NO)
 	{
-		libpq_append_conn_error(conn,
+		__libpq_append_conn_error(conn,
 								"curl_global_init previously failed during OAuth setup");
 		goto done;
 	}
@@ -2575,7 +2744,7 @@ initialize_curl(PGconn *conn)
 	 */
 	if (curl_global_init(CURL_GLOBAL_ALL & ~CURL_GLOBAL_WIN32) != CURLE_OK)
 	{
-		libpq_append_conn_error(conn,
+		__libpq_append_conn_error(conn,
 								"curl_global_init failed during OAuth setup");
 		init_successful = PG_BOOL_NO;
 		goto done;
@@ -2597,7 +2766,7 @@ initialize_curl(PGconn *conn)
 		 * In a downgrade situation, the damage is already done. Curl global
 		 * state may be corrupted. Be noisy.
 		 */
-		libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
+		__libpq_append_conn_error(conn, "libcurl is no longer thread-safe\n"
 								"\tCurl initialization was reported thread-safe when libpq\n"
 								"\twas compiled, but the currently installed version of\n"
 								"\tlibcurl reports that it is not. Recompile libpq against\n"
@@ -2649,7 +2818,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		actx = calloc(1, sizeof(*actx));
 		if (!actx)
 		{
-			libpq_append_conn_error(conn, "out of memory");
+			__libpq_append_conn_error(conn, "out of memory");
 			return PGRES_POLLING_FAILED;
 		}
 
@@ -2657,7 +2826,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		actx->timerfd = -1;
 
 		/* Should we enable unsafe features? */
-		actx->debugging = oauth_unsafe_debugging_enabled();
+		actx->debugging = __oauth_unsafe_debugging_enabled();
 
 		state->async_ctx = actx;
 
@@ -2895,7 +3064,7 @@ pg_fe_run_oauth_flow(PGconn *conn)
 	 * difficult corner case to exercise in practice, and unfortunately it's
 	 * not really clear whether it's necessary in all cases.
 	 */
-	masked = (pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
+	masked = (__pq_block_sigpipe(&osigset, &sigpipe_pending) == 0);
 #endif
 
 	result = pg_fe_run_oauth_flow_impl(conn);
@@ -2907,7 +3076,7 @@ pg_fe_run_oauth_flow(PGconn *conn)
 		 * Undo the SIGPIPE mask. Assume we may have gotten EPIPE (we have no
 		 * way of knowing at this level).
 		 */
-		pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
+		__pq_reset_sigpipe(&osigset, sigpipe_pending, true /* EPIPE, maybe */ );
 	}
 #endif
 
diff --git a/src/interfaces/libpq-oauth/fe-auth-oauth-curl.h b/src/interfaces/libpq-oauth/fe-auth-oauth-curl.h
new file mode 100644
index 00000000000..907f360d9d1
--- /dev/null
+++ b/src/interfaces/libpq-oauth/fe-auth-oauth-curl.h
@@ -0,0 +1,23 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/fe-auth-oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef FE_AUTH_OAUTH_CURL_H
+#define FE_AUTH_OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* FE_AUTH_OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..bd348a0afc4
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,64 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not libcurl.found() or host_system == 'windows'
+  subdir_done()
+endif
+
+libpq_sources = files(
+  'fe-auth-oauth-curl.c',
+)
+libpq_so_sources = [] # for shared lib, in addition to the above
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+libpq_c_args = ['-DSO_MAJOR_VERSION=1']
+
+# Not using both_libraries() here as
+# 1) resource files should only be in the shared library
+# 2) we want the .pc file to include a dependency to {pgport,common}_static for
+#    libpq_st, and {pgport,common}_shlib for libpq_sh
+#
+# We could try to avoid building the source files twice, but it probably adds
+# more complexity than its worth (reusing object files requires also linking
+# to the library on windows or breaks precompiled headers).
+libpq_oauth_st = static_library('libpq-oauth',
+  libpq_sources,
+  include_directories: [libpq_oauth_inc],
+  c_args: libpq_c_args,
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_deps],
+  kwargs: default_lib_args,
+)
+
+libpq_oauth_so = shared_library('libpq-oauth',
+  libpq_sources + libpq_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_args: libpq_c_args,
+  c_pch: pch_postgres_fe_h,
+  version: '1.' + pg_version_major.to_string(),
+  soversion: host_system != 'windows' ? '1' : '',
+  darwin_versions: ['1', '1.' + pg_version_major.to_string()],
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
+
+libpq_oauth = declare_dependency(
+  link_with: [libpq_oauth_so],
+  include_directories: [include_directories('.')]
+)
+
+pkgconfig.generate(
+  name: 'libpq-oauth',
+  description: 'PostgreSQL libpq library, device authorization oauth support',
+  url: pg_url,
+  libraries: libpq_oauth,
+  libraries_private: [frontend_stlib_code, libpq_oauth_deps],
+)
+
+subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq-oauth/po/meson.build b/src/interfaces/libpq-oauth/po/meson.build
new file mode 100644
index 00000000000..1ca1faaf726
--- /dev/null
+++ b/src/interfaces/libpq-oauth/po/meson.build
@@ -0,0 +1,3 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+nls_targets += [i18n.gettext('libpq-oauth' + '1' + '-' + pg_version_major.to_string())]
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..8cf8d9e54d8 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -64,10 +64,6 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
-
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..bc0ed85482a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,6 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
+pg_g_threadlock           212
+PQauthDataHook            213
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..55f980f3d05 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifndef WIN32
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -721,6 +725,44 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+static bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+#ifdef WIN32
+	return false;
+#else
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	state->builtin_flow = dlopen(
+#if defined(__darwin__)
+								 "libpq-oauth.1.dylib",
+#else
+								 "libpq-oauth.so.1",
+#endif
+								 RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		fprintf(stderr, "failed dlopen: %s\n", dlerror()); // XXX
+		return false;
+	}
+
+	flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow");
+	cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow");
+
+	if (!(flow && cleanup))
+	{
+		fprintf(stderr, "failed dlsym: %s\n", dlerror()); // XXX
+		return false;
+	}
+
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+#endif							/* !WIN32 */
+}
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +834,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
 		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..699ba42acc2 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,10 +33,10 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
 
diff --git a/src/interfaces/libpq/fe-auth.h b/src/interfaces/libpq/fe-auth.h
index de98e0d20c4..1d4991f8996 100644
--- a/src/interfaces/libpq/fe-auth.h
+++ b/src/interfaces/libpq/fe-auth.h
@@ -18,9 +18,6 @@
 #include "libpq-int.h"
 
 
-extern PQauthDataHook_type PQauthDataHook;
-
-
 /* Prototypes for functions in fe-auth.c */
 extern int	pg_fe_sendauth(AuthRequest areq, int payloadlen, PGconn *conn,
 						   bool *async);
diff --git a/src/interfaces/libpq/libpq-fe.h b/src/interfaces/libpq/libpq-fe.h
index 7d3a9df6fd5..696a6587dd4 100644
--- a/src/interfaces/libpq/libpq-fe.h
+++ b/src/interfaces/libpq/libpq-fe.h
@@ -812,6 +812,7 @@ typedef int (*PQauthDataHook_type) (PGauthData type, PGconn *conn, void *data);
 extern void PQsetAuthDataHook(PQauthDataHook_type hook);
 extern PQauthDataHook_type PQgetAuthDataHook(void);
 extern int	PQdefaultAuthDataHook(PGauthData type, PGconn *conn, void *data);
+extern PQauthDataHook_type PQauthDataHook;
 
 /* === in encnames.c === */
 
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 19f4a52a97a..02a88408e34 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
-- 
2.34.1

#273Christoph Berg
myon@debian.org
In reply to: Andres Freund (#269)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Andres Freund

Yes. Also, none of this has addressed my complaint about the extent
of the build and install dependencies. Yes, simply not selecting
--with-libcurl removes the problem ... but most packagers are under
very heavy pressure to enable all features of a package.

And this feature is kind of only useful if it's available everywhere. If
only half of your clients are able to use SSO, you'd probably stick
with passwords anyway. So it needs to be enabled by default.

How about we provide the current libpq.so without linking to curl and also a
libpq-oauth.so that has curl support? If we do it right libpq-oauth.so would
itself link to libpq.so, making libpq-oauth.so a fairly small library.

That way packagers can split libpq-oauth.so into a separate package, while
still just building once.

That's definitely a good plan. The blast radius of the build dependencies
isn't really a problem; the install/run-time one is.

Perhaps we could do the same with libldap and libgssapi? (Though
admittedly I have never seen any complaints or nagging questions from
security people about these.)

Christoph

#274Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#273)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Mar 31, 2025 at 7:06 AM Christoph Berg <myon@debian.org> wrote:

Perhaps we could do the same with libldap and libgssapi? (Though
admittedly I have never seen any complaints or nagging questions from
security people about these.)

If we end up happy with how the Curl indirection works, that seems
like it'd be kind of nice in theory. I'm not sure how many people
would notice, though.

On Wed, Mar 26, 2025 at 12:09 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Right
now we have an SO version of 1; maybe we want to remove the SO version
entirely to better indicate that it shouldn't be linked?

Maybe a better idea would be to ship an SONAME of
`libpq-oauth.so.0.<major>`, without any symlinks, so that there's
never any ambiguity about which module belongs with which libpq.

--Jacob

#275Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Thomas Munro (#251)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Mar 18, 2025 at 9:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:

All pushed (wasn't sure if Daniel was going to but once I got tangled
up in all that kqueue stuff he probably quite reasonably assumed that
I would :-)).

Attached are two more followups, separate from the libcurl split:
- 0001 is a patch originally proposed at [1]. Christoph pointed out
that the build fails on a platform that tries to enable Curl without
having either epoll() or kqueue(), due to a silly mistake I made in
the #ifdefs.
- 0002 should fix some timeouts in 002_client.pl reported by Andres on
Discord. I allowed a short connect_timeout to propagate into tests
which should not have it.

(The goal of 0001 is to get things building for now. After I finish
splitting the implementation into its own module, it'll make more
sense to simply not build that module on platforms that can't
implement a useful flow.)

Thanks,
--Jacob

[1]: /messages/by-id/CAOYmi+=4898tXuTvb2LstorRo9JsAnBcn8LE=qrgVPiPW8ZfCw@mail.gmail.com

Attachments:

0002-oauth-Remove-unneeded-timeouts-from-t-002_client.patch (application/octet-stream)
From 65c03c649084f9a7b54d172dc14f442e68b3aab0 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 3 Apr 2025 10:12:45 -0700
Subject: [PATCH 2/2] oauth: Remove unneeded timeouts from t/002_client

The connect_timeout=1 setting for the --hang-forever test was kept in
place for later tests, causing unexpected timeouts on slower buildfarm
animals. Remove it.

Reported-by: Andres Freund <andres@anarazel.de>
---
 src/test/modules/oauth_validator/t/002_client.pl | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index ab83258d736..54769f12f57 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -122,6 +122,9 @@ test(
 	flags => ["--hang-forever"],
 	expected_stderr => qr/failed: timeout expired/);
 
+# Remove the timeout for later tests.
+$common_connstr = "$base_connstr oauth_issuer=$issuer oauth_client_id=myID";
+
 # Test various misbehaviors of the client hook.
 my @cases = (
 	{
-- 
2.34.1

0001-oauth-Fix-build-on-platforms-without-epoll-kqueue.patch (application/octet-stream)
From a1da0ea92c77fdc59c4f14e3af3b5b0f93cfe4df Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 31 Mar 2025 16:07:33 -0700
Subject: [PATCH 1/2] oauth: Fix build on platforms without epoll/kqueue

register_socket() missed a variable declaration if neither
HAVE_SYS_EPOLL_H nor HAVE_SYS_EVENT_H was defined.

While we're fixing that, adjust the tests to check pg_config.h for one
of the multiplexer implementations, rather than assuming that Windows is
the only platform without support. (Christoph reported this on
hurd-amd64, an experimental Debian.)

Reported-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/Z-sPFl27Y0ZC-VBl%40msg.df7cb.de
---
 src/interfaces/libpq/fe-auth-oauth-curl.c        | 4 ++--
 src/test/modules/oauth_validator/t/001_server.pl | 6 ++++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq/fe-auth-oauth-curl.c
index 9e0e8a9f2be..cd9c0323bb6 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq/fe-auth-oauth-curl.c
@@ -1172,8 +1172,9 @@ static int
 register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 				void *socketp)
 {
-#ifdef HAVE_SYS_EPOLL_H
 	struct async_ctx *actx = ctx;
+
+#ifdef HAVE_SYS_EPOLL_H
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1231,7 +1232,6 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	return 0;
 #endif
 #ifdef HAVE_SYS_EVENT_H
-	struct async_ctx *actx = ctx;
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index 30295364ebd..d88994abc24 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -26,9 +26,11 @@ if (!$ENV{PG_TEST_EXTRA} || $ENV{PG_TEST_EXTRA} !~ /\boauth\b/)
 	  'Potentially unsafe test oauth not enabled in PG_TEST_EXTRA';
 }
 
-if ($windows_os)
+unless (check_pg_config("#define HAVE_SYS_EVENT_H 1")
+	or check_pg_config("#define HAVE_SYS_EPOLL_H 1"))
 {
-	plan skip_all => 'OAuth server-side tests are not supported on Windows';
+	plan skip_all =>
+	  'OAuth server-side tests are not supported on this platform';
 }
 
 if ($ENV{with_libcurl} ne 'yes')
-- 
2.34.1

#276 Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#275)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 3 Apr 2025, at 20:02, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Tue, Mar 18, 2025 at 9:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:

All pushed (wasn't sure if Daniel was going to but once I got tangled
up in all that kqueue stuff he probably quite reasonably assumed that
I would :-)).

Attached are two more followups, separate from the libcurl split:
- 0001 is a patch originally proposed at [1]. Christoph pointed out
that the build fails on a platform that tries to enable Curl without
having either epoll() or kqueue(), due to a silly mistake I made in
the #ifdefs.
- 0002 should fix some timeouts in 002_client.pl reported by Andres on
Discord. I allowed a short connect_timeout to propagate into tests
which should not have it.

Thanks, both LGTM so pushed.

--
Daniel Gustafsson

#277 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#274)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Apr 3, 2025 at 12:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Thanks, both LGTM so pushed.

Thank you!

On Tue, Apr 1, 2025 at 3:40 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Maybe a better idea would be to ship an SONAME of
`libpq-oauth.so.0.<major>`, without any symlinks, so that there's
never any ambiguity about which module belongs with which libpq.

While I was looking into this I found that Debian's going to use the
existence of an SONAME to check other things, which I assume will make
Christoph's life harder. I have switched over to
'libpq-oauth-<major>.so', without any SONAME or symlinks.

v2 simplifies quite a few things and breaks out the new duplicated
code into its own file. I pared down the exports from libpq, by having
it push the pg_g_threadlock pointer directly into the module when
needed. I think a future improvement would be to combine the dlopen
with the libcurl initialization, so that everything is done exactly
once and the module doesn't need to know about threadlocks at all.

i18n is still not working correctly on my machine. I've gotten `make
init-po` to put the files into the right places now, but if I fake a
.po file and install the generated .mo, the translations still don't
seem to be found at runtime. Is anyone able to take a quick look to
see if I'm missing something obvious?

I still need to disable the module entirely on Windows (and other
platforms without support), and potentially rename the --with-libcurl
option.

Thanks,
--Jacob

Attachments:

v2-0001-WIP-split-Device-Authorization-flow-into-dlopen-d.patch (application/octet-stream)
From 20b4fbe435d31c4e784ce56c887a8d5c365d8ea5 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v2] WIP: split Device Authorization flow into dlopen'd module

See notes on mailing list.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 meson.build                                   |  12 +-
 src/interfaces/Makefile                       |   9 +
 src/interfaces/libpq-oauth/Makefile           |  56 ++++++
 src/interfaces/libpq-oauth/README             |  18 ++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  32 +++
 src/interfaces/libpq-oauth/nls.mk             |  15 ++
 .../oauth-curl.c}                             |   9 +-
 src/interfaces/libpq-oauth/oauth-curl.h       |  23 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 190 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  22 ++
 src/interfaces/libpq-oauth/po/LINGUAS         |   0
 src/interfaces/libpq-oauth/po/meson.build     |   3 +
 src/interfaces/libpq/Makefile                 |   4 -
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          |  52 ++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   4 +-
 src/interfaces/libpq/meson.build              |   4 -
 18 files changed, 431 insertions(+), 27 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 create mode 100644 src/interfaces/libpq-oauth/nls.mk
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (99%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h
 create mode 100644 src/interfaces/libpq-oauth/po/LINGUAS
 create mode 100644 src/interfaces/libpq-oauth/po/meson.build

diff --git a/meson.build b/meson.build
index 454ed81f5ea..5620d959056 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -3215,17 +3216,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..322a498823d 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,16 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..461c44b59c1
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,56 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization oauth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION. (We still put the major version into the name, to make it
+# obvious where the library belongs.)
+NAME = libpq-oauth-$(MAJORVERSION)
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	oauth-curl.o \
+	oauth-utils.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = -lcurl
+SHLIB_PREREQS = submake-libpq
+
+SHLIB_EXPORTS = exports.txt
+
+PKG_CONFIG_REQUIRES_PRIVATE = libpq
+#
+# Make dependencies on pg_config_paths.h visible in all builds.
+oauth-curl.o: oauth-curl.c $(top_builddir)/src/port/pg_config_paths.h
+
+$(top_builddir)/src/port/pg_config_paths.h:
+	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
+
+all: all-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..5006f405080
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,18 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, the server asks for it, and a
+libpq client has not installed its own custom OAuth flow, libpq will attempt to
+delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.
+
+TODO
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..3787b388e04
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+pg_fe_run_oauth_flow      1
+pg_fe_cleanup_oauth_flow  2
+pg_g_threadlock           3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..1834afbf7a5
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,32 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not libcurl.found() or host_system == 'windows'
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+  'oauth-utils.c',
+)
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION. (We still put the major version into the name, to make it
+# obvious where the library belongs.)
+libpq_oauth_so = shared_module('libpq-oauth-' + pg_version_major.to_string(),
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
+
+subdir('po', if_found: libintl)
diff --git a/src/interfaces/libpq-oauth/nls.mk b/src/interfaces/libpq-oauth/nls.mk
new file mode 100644
index 00000000000..eab3347ef60
--- /dev/null
+++ b/src/interfaces/libpq-oauth/nls.mk
@@ -0,0 +1,15 @@
+# src/interfaces/libpq-oauth/nls.mk
+CATALOG_NAME     = libpq-oauth
+GETTEXT_FILES    = oauth-curl.c \
+                   oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
+                   libpq_append_error:2 \
+                   libpq_gettext \
+                   libpq_ngettext:1,2
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
+                   libpq_append_error:2:c-format \
+                   libpq_gettext:1:pass-c-format \
+                   libpq_ngettext:1:pass-c-format \
+                   libpq_ngettext:2:pass-c-format
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 99%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index cd9c0323bb6..11d17ec1597 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -31,6 +31,8 @@
 #include "fe-auth-oauth.h"
 #include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#include "oauth-utils.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -2487,8 +2489,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..bcc1e737dcd
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,23 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..81f9c6dc247
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,190 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+#include "pg_config_paths.h"
+
+pgthreadlock_t pg_g_threadlock;
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  A newline is automatically appended; the
+ * format should not end with a newline.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(&conn->errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(&conn->errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(&conn->errorMessage, '\n');
+}
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
+
+#ifdef ENABLE_NLS
+
+static void
+libpq_binddomain(void)
+{
+	/*
+	 * At least on Windows, there are gettext implementations that fail if
+	 * multiple threads call bindtextdomain() concurrently.  Use a mutex and
+	 * flag variable to ensure that we call it just once per process.  It is
+	 * not known that similar bugs exist on non-Windows platforms, but we
+	 * might as well do it the same way everywhere.
+	 */
+	static volatile bool already_bound = false;
+	static pthread_mutex_t binddomain_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+	if (!already_bound)
+	{
+		/* bindtextdomain() does not preserve errno */
+#ifdef WIN32
+		int			save_errno = GetLastError();
+#else
+		int			save_errno = errno;
+#endif
+
+		(void) pthread_mutex_lock(&binddomain_mutex);
+
+		if (!already_bound)
+		{
+			const char *ldir;
+
+			/*
+			 * No relocatable lookup here because the calling executable could
+			 * be anywhere
+			 */
+			ldir = getenv("PGLOCALEDIR");
+			if (!ldir)
+				ldir = LOCALEDIR;
+			bindtextdomain(PG_TEXTDOMAIN("libpq-oauth"), ldir);
+			already_bound = true;
+		}
+
+		(void) pthread_mutex_unlock(&binddomain_mutex);
+
+#ifdef WIN32
+		SetLastError(save_errno);
+#else
+		errno = save_errno;
+#endif
+	}
+}
+
+char *
+libpq_gettext(const char *msgid)
+{
+	libpq_binddomain();
+	return dgettext(PG_TEXTDOMAIN("libpq-oauth"), msgid);
+}
+
+char *
+libpq_ngettext(const char *msgid, const char *msgid_plural, unsigned long n)
+{
+	libpq_binddomain();
+	return dngettext(PG_TEXTDOMAIN("libpq-oauth"), msgid, msgid_plural, n);
+}
+
+#endif							/* ENABLE_NLS */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..e5bd6b28b11
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,22 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "libpq-int.h"
+
+extern PGDLLEXPORT pgthreadlock_t pg_g_threadlock;
+
+void		libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+bool		oauth_unsafe_debugging_enabled(void);
+int			pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+void		pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
diff --git a/src/interfaces/libpq-oauth/po/LINGUAS b/src/interfaces/libpq-oauth/po/LINGUAS
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/src/interfaces/libpq-oauth/po/meson.build b/src/interfaces/libpq-oauth/po/meson.build
new file mode 100644
index 00000000000..61b3807ac68
--- /dev/null
+++ b/src/interfaces/libpq-oauth/po/meson.build
@@ -0,0 +1,3 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+nls_targets += [i18n.gettext('libpq-oauth-' + pg_version_major.to_string())]
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..8cf8d9e54d8 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -64,10 +64,6 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
-
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..ce15a5e8de1 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifndef WIN32
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -721,6 +725,44 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+static bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+#ifdef WIN32
+	return false;
+#else
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+	pgthreadlock_t *threadlock_copy;
+
+	/* libpq-oauth is versioned in lockstep; we don't export a stable ABI. */
+	state->builtin_flow = dlopen("libpq-oauth-" PG_MAJORVERSION DLSUFFIX,
+								 RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		fprintf(stderr, "failed dlopen: %s\n", dlerror()); // XXX
+		return false;
+	}
+
+	flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow");
+	cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow");
+	threadlock_copy = dlsym(state->builtin_flow, "pg_g_threadlock");
+
+	if (!(flow && cleanup && threadlock_copy))
+	{
+		fprintf(stderr, "failed dlsym: %s\n", dlerror()); // XXX
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+	*threadlock_copy = pg_g_threadlock;
+
+	return true;
+#endif							/* !WIN32 */
+}
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +834,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
 		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..699ba42acc2 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,10 +33,10 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
 
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..47d38e9378f 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
-- 
2.34.1

#278 Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#277)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Apr 3, 2025 at 12:50 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Thanks, both LGTM so pushed.

Ack, the build there worked now. (Albeit without running any tests,
but let's not care too much about this snowflake architecture.)

On Tue, Apr 1, 2025 at 3:40 PM Jacob Champion wrote:
While I was looking into this I found that Debian's going to use the
existence of an SONAME to check other things, which I assume will make
Christoph's life harder. I have switched over to
'libpq-oauth-<major>.so', without any SONAME or symlinks.

Since this is a plugin for libpq and nothing external is linking
directly to it, using a formal SONAME wouldn't gain anything, right.

Thanks,
Christoph

#279 Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#277)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 05.04.25 02:27, Jacob Champion wrote:

On Tue, Apr 1, 2025 at 3:40 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Maybe a better idea would be to ship an SONAME of
`libpq-oauth.so.0.<major>`, without any symlinks, so that there's
never any ambiguity about which module belongs with which libpq.

While I was looking into this I found that Debian's going to use the
existence of an SONAME to check other things, which I assume will make
Christoph's life harder. I have switched over to
'libpq-oauth-<major>.so', without any SONAME or symlinks.

Yes, this is correct. We want a shared module, not a shared library, in
meson parlance.

But the installation directory of a shared module should not be directly
libdir. That is reserved for libraries that you can link at build-time.
Here are some examples I found of other packages that have a library
that itself has plugins:

https://packages.debian.org/bookworm/amd64/libaspell15/filelist
https://packages.debian.org/bookworm/amd64/libkrb5-3/filelist
https://packages.debian.org/bookworm/amd64/libmagickcore-6.q16-6/filelist

v2 simplifies quite a few things and breaks out the new duplicated
code into its own file. I pared down the exports from libpq, by having
it push the pg_g_threadlock pointer directly into the module when
needed. I think a future improvement would be to combine the dlopen
with the libcurl initialization, so that everything is done exactly
once and the module doesn't need to know about threadlocks at all.

Looks mostly ok. I need the following patch to get it to build on macOS:

diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
index 461c44b59c1..d5469ca0e11 100644
--- a/src/interfaces/libpq-oauth/Makefile
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -28,8 +28,9 @@ OBJS = \
 	oauth-curl.o \
 	oauth-utils.o
 
 SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
-SHLIB_LINK = -lcurl
+SHLIB_LINK = -lcurl $(filter -lintl, $(LIBS))
 SHLIB_PREREQS = submake-libpq
+BE_DLLLIBS =
 
 SHLIB_EXPORTS = exports.txt

(The second change is not strictly required, but it disables the use of
-bundle_loader postgres, since this is not a backend-loadable module.)

I don't know whether we need an exports file for libpq-oauth. The other
shared modules don't provide an export file, and I'm not sure whether
this is even supported for shared modules. I guess it doesn't hurt?

The PKG_CONFIG_REQUIRES_PRIVATE setting in libpq-oauth/Makefile is
meaningless and can be removed.

In fe-auth-oauth.c, I note that dlerror() is not necessarily
thread-safe. Since there isn't really an alternative, I guess it's ok
to leave it like that, but I figured it should be mentioned.

i18n is still not working correctly on my machine. I've gotten `make
init-po` to put the files into the right places now, but if I fake a
.po file and install the generated .mo, the translations still don't
seem to be found at runtime. Is anyone able to take a quick look to
see if I'm missing something obvious?

Not sure, the code looks correct at first glance. However, you could
also just keep the libpq-oauth strings in the libpq catalog. There
isn't really a need to make a separate one, since the versions you end
up installing are locked to each other. So you could for example in
libpq's nls.mk just add

../libpq-oauth/oauth-curl.c

etc. to the files.
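
(Concretely, the GETTEXT_FILES additions might look something like the following sketch; the existing entries are abbreviated:

```makefile
# src/interfaces/libpq/nls.mk (sketch; existing file list abbreviated)
CATALOG_NAME     = libpq
GETTEXT_FILES    = fe-auth.c \
                   fe-connect.c \
                   ... \
                   ../libpq-oauth/oauth-curl.c \
                   ../libpq-oauth/oauth-utils.c
```

The catalog name stays "libpq", so the module's strings would be looked up in libpq's existing text domain.)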

Maybe it would also make sense to make libpq-oauth a subdirectory of the
libpq directory instead of a peer.

#280 Andres Freund
andres@anarazel.de
In reply to: Jacob Champion (#277)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-04 17:27:46 -0700, Jacob Champion wrote:

+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.

Shouldn't we then include the *minor* version in the soname? I think otherwise
we run into the danger of the wrong library version being loaded in some
cases. Imagine a program being told with libpq to use via rpath. But then we
load the oauth module via a dlopen without a specified path - it'll just
search the global library locations.

Which actually makes me wonder if we ought to instead load the library from a
specific location...

+TODO
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..3787b388e04
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+pg_fe_run_oauth_flow      1
+pg_fe_cleanup_oauth_flow  2
+pg_g_threadlock           3

The pg_g_threadlock thing seems pretty ugly. Can't we just pass that to the
relevant functions? But more fundamentally, are we actually using
pg_g_threadlock anywhere? I removed all the relevant code and the tests still
seem to pass?

diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..1834afbf7a5
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,32 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not libcurl.found() or host_system == 'windows'
+  subdir_done()
+endif

Why does this not work on windows? I don't see similar code in the removed
lines?

Greetings,

Andres Freund

#281Andres Freund
andres@anarazel.de
In reply to: Peter Eisentraut (#279)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-07 15:59:19 +0200, Peter Eisentraut wrote:

On 05.04.25 02:27, Jacob Champion wrote:

On Tue, Apr 1, 2025 at 3:40 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Maybe a better idea would be to ship an SONAME of
`libpq-oauth.so.0.<major>`, without any symlinks, so that there's
never any ambiguity about which module belongs with which libpq.

While I was looking into this I found that Debian's going to use the
existence of an SONAME to check other things, which I assume will make
Christoph's life harder. I have switched over to
'libpq-oauth-<major>.so', without any SONAME or symlinks.

Yes, this is correct. We want a shared module, not a shared library, in
meson parlance.

It's not entirely obvious to me that we do want that.

There recently was a breakage of building with PG on macos with meson, due to
the meson folks implementing a feature request to move away from using
bundles, as
1) bundles apparently aren't supported on iOS
2) there apparently aren't any restrictions left that require using bundles,
and there haven't been for a while.

They've now reverted these changes, due to the postgres build failures that
caused as well as recognizing they probably moved too fast, but the iOS
portion seems like it could be relevant for us?

Afaict this library doesn't have unresolved symbols, due to just linking to
libpq. So I don't think we really need this to be a shared module?

But the installation directory of a shared module should not be directly
libdir.

Agreed. However, it seems like relocatability would be much more important for
something like libpq than server modules... Particularly on windows it'll
often just be shipped alongside the executable - which won't work if we load
from pkglibdir or such.

I don't know whether we need an exports file for libpq-oauth. The other
shared modules don't provide an export file, and I'm not sure whether this
is even supported for shared modules. I guess it doesn't hurt?

It does seem just using PGDLLEXPORT would suffice here.

The PKG_CONFIG_REQUIRES_PRIVATE setting in libpq-oauth/Makefile is
meaningless and can be removed.

In fe-auth-oauth.c, I note that dlerror() is not necessarily thread-safe.

I sometimes really really hate posix.

Greetings,

Andres Freund

#282Peter Eisentraut
peter@eisentraut.org
In reply to: Andres Freund (#281)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 07.04.25 16:43, Andres Freund wrote:

While I was looking into this I found that Debian's going to use the
existence of an SONAME to check other things, which I assume will make
Christoph's life harder. I have switched over to
'libpq-oauth-<major>.so', without any SONAME or symlinks.

Yes, this is correct. We want a shared module, not a shared library, in
meson parlance.

It's not entirely obvious to me that we do want that.

There recently was a breakage of building with PG on macos with meson, due to
the meson folks implementing a feature request to move away from using
bundles, as
1) bundles apparently aren't supported on iOS
2) there apparently aren't any restrictions left that require using bundles,
and there haven't been for a while.

They've now reverted these changes, due to the postgres build failures that
caused as well as recognizing they probably moved too fast, but the iOS
portion seems like it could be relevant for us?

Um, interesting. AFAICT, the change you mention was reverted from the
1.7 branch because it was accidentally backpatched, but it remains in
master.

(For those just catching up:

https://github.com/mesonbuild/meson/issues/14240
https://github.com/mesonbuild/meson/pull/14340
https://github.com/mesonbuild/meson/commit/fa3f7e10b47d1f2f438f216f6c44f56076a01bfc
)

Overall, this seems like a good idea, as it removes a historical
platform-specific particularity. (I found a historical analysis at
<https://stackoverflow.com/questions/2339679/>.)

But it does break existing users that add -bundle_loader, because
-bundle_loader only works with -bundle and is rejected with -dynamiclib.

To test, I patched the makefiles to use -dynamiclib instead of -bundle,
which also required removing -bundle_loader, and it also required adding
-Wl,-undefined,dynamic_lookup. This built correctly and generally worked.

But then you also run into a new variant of this issue:

https://postgr.es/m/E1o4HOv-001Oyi-5n@gemulon.postgresql.org

Because there is no -bundle_loader, the symbol search order appears to
be different, and so hash_search() gets found in the OS library first.

So this is all going to be a mess at some point sooner or later. :-(

Afaict this library doesn't have unresolved symbols, due to just linking to
libpq. So I don't think we really need this to be a shared module?

Apart from the hard distinction on macOS, in terms of the build system,
the distinction between "library" and "module" is mainly whether the
resulting library gets a soname, version symlinks, and what directory it
is installed in, so in that sense the discussion so far indicates that
it should be a module. I suppose on macOS we could link it like a
library and install it like a module, but that would effectively create
a third category, and I don't see why that would be worth it.

#283Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Andres Freund (#281)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi all,

Thanks for all the feedback! I'll combine them all into one email:

On Mon, Apr 7, 2025 at 6:59 AM Peter Eisentraut <peter@eisentraut.org> wrote:

Looks mostly ok. I need the following patch to get it to build on macOS:
[...]
(The second change is not strictly required, but it disables the use of
-bundle_loader postgres, since this is not a backend-loadable module.)

Hm, okay. I'll take a closer look at this, thanks.

The PKG_CONFIG_REQUIRES_PRIVATE setting in libpq-oauth/Makefile is
meaningless and can be removed.

Ah, right. Will do.

In fe-auth-oauth.c, I note that dlerror() is not necessarily
thread-safe. Since there isn't really an alternative, I guess it's ok
to leave it like that, but I figured it should be mentioned.

Yeah. The XXX comments there are a reminder to me to lock the stderr
printing behind debug mode, since I hope most non-packaging people are
not going to be troubleshooting load-time errors. But see the
threadlock discussions below.

Not sure, the code looks correct at first glance. However, you could
also just keep the libpq-oauth strings in the libpq catalog. There
isn't really a need to make a separate one, since the versions you end
up installing are locked to each other. So you could for example in
libpq's nls.mk just add

../libpq-oauth/oauth-curl.c

etc. to the files.

Oh, that's an interesting idea. Thanks, I'll give it a try.

Maybe it would also make sense to make libpq-oauth a subdirectory of the
libpq directory instead of a peer.

Works for me.

On Mon, Apr 7, 2025 at 7:21 AM Andres Freund <andres@anarazel.de> wrote:

On 2025-04-04 17:27:46 -0700, Jacob Champion wrote:

+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.

Shouldn't we then include the *minor* version in the soname?

No objection here.

I think otherwise
we run into the danger of the wrong library version being loaded in some
cases. Imagine a program being told which libpq to use via rpath. But then we
load the oauth module via a dlopen without a specified path - it'll just
search the global library locations.

Ah, you mean if the RPATH'd build doesn't have a flow, but the
globally installed one (with a different ABI) does? Yeah, that would
be a problem.

Which actually makes me wonder if we ought to instead load the library from a
specific location...

We could hardcode the disk location for version 1, I guess. I kind of
liked giving packagers the flexibility they're used to having from the
ld.so architecture, though. See below.

+# src/interfaces/libpq-oauth/exports.txt
+pg_fe_run_oauth_flow      1
+pg_fe_cleanup_oauth_flow  2
+pg_g_threadlock           3

The pg_g_threadlock thing seems pretty ugly. Can't we just pass that to the
relevant functions?

We can do it however we want, honestly, especially since the ABI isn't
public/stable. I chose this way just to ease review.

But more fundamentally, are we actually using
pg_g_threadlock anywhere? I removed all the releant code and the tests still
seem to pass?

If you have an older Curl installation, it is used. Newer libcurls
don't need it.

A future simplification could be to pull the use of the threadlock
back into libpq, and have it perform a one-time
dlopen-plus-Curl-initialization under the lock... That would also get
rid of the dlerror() thread safety problems. But that's an awful lot
of moving parts under a mutex, which makes me a little nervous.

diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
+if not libcurl.found() or host_system == 'windows'
+  subdir_done()
+endif

Why does this not work on windows? I don't see similar code in the removed
lines?

The Device Authorization flow is not currently implemented on Windows.

On Mon, Apr 7, 2025 at 7:43 AM Andres Freund <andres@anarazel.de> wrote:

Yes, this is correct. We want a shared module, not a shared library, in
meson parlance.

It's not entirely obvious to me that we do want that.

There recently was a breakage of building with PG on macos with meson, due to
the meson folks implementing a feature request to move away from using
bundles, as
1) bundles apparently aren't supported on iOS
2) there apparently aren't any restrictions left that require using bundles,
and there haven't been for a while.

Could you explain how this is related to .app bundles? I thought I was
just building a standard dylib.

Afaict this library doesn't have unresolved symbols, due to just linking to
libpq. So I don't think we really need this to be a shared module?

Correct, no unresolved symbols. My naive understanding was that
distros were going to impose restrictions on an SONAME'd library that
we may not want to deal with.

But the installation directory of a shared module should not be directly
libdir.

Agreed. However, it seems like relocatability would be much more important for
something like libpq than server modules... Particularly on windows it'll
often just be shipped alongside the executable - which won't work if we load
from pkglibdir or such.

Yeah, I really like the simplicity of "use the standard runtime
loader, just on-demand." Seems more friendly to the ecosystem.

Are there technical downsides of putting it into $libdir? I understand
there are "people" downsides, since we don't really want to signal
that this is a publicly linkable thing... but surely if you go through
the process of linking our library (which has no SONAME, includes the
major/minor version in its -l option, and has no pkgconfig) and
shoving a private pointer to a threadlock into it, you can keep all
the pieces when they break?

I don't know whether we need an exports file for libpq-oauth. The other
shared modules don't provide an export file, and I'm not sure whether this
is even supported for shared modules. I guess it doesn't hurt?

It does seem just using PGDLLEXPORT would suffice here.

My motivation was to strictly identify the ABI that we intend libpq to
use, to try to future-proof things for everybody. Especially since
we're duplicating functions from libpq that we'd rather not be. (The
use of RTLD_LOCAL maybe makes that more of a belt-and-suspenders
thing.)

Are there any downsides to the exports file?

Thanks,
--Jacob

#284Andres Freund
andres@anarazel.de
In reply to: Peter Eisentraut (#282)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-07 18:38:19 +0200, Peter Eisentraut wrote:

On 07.04.25 16:43, Andres Freund wrote:

There recently was a breakage of building with PG on macos with meson, due to
the meson folks implementing a feature request to move away from using
bundles, as
1) bundles apparently aren't supported on iOS
2) there apparently aren't any restrictions left that require using bundles,
and there haven't been for a while.

They've now reverted these changes, due to the postgres build failures that
caused as well as recognizing they probably moved too fast, but the iOS
portion seems like it could be relevant for us?

Um, interesting. AFAICT, the change you mention was reverted from the 1.7
branch because it was accidentally backpatched, but it remains in master.

I think the plan is to either redesign it in master or to revert it.

(For those just catching up:

https://github.com/mesonbuild/meson/issues/14240
https://github.com/mesonbuild/meson/pull/14340
https://github.com/mesonbuild/meson/commit/fa3f7e10b47d1f2f438f216f6c44f56076a01bfc
)

Overall, this seems like a good idea, as it removes a historical
platform-specific particularity. (I found a historical analysis at
<https://stackoverflow.com/questions/2339679/>.)

But it does break existing users that add -bundle_loader, because
-bundle_loader only works with -bundle and is rejected with -dynamiclib.

Seems hard to imagine that somebody would inject -bundle_loader separately
from src/makefiles/Makefile.darwin?

To test, I patched the makefiles to use -dynamiclib instead of -bundle,
which also required removing -bundle_loader, and it also required adding
-Wl,-undefined,dynamic_lookup. This built correctly and generally worked.

But then you also run into a new variant of this issue:

https://postgr.es/m/E1o4HOv-001Oyi-5n@gemulon.postgresql.org

Because there is no -bundle_loader, the symbol search order appears to be
different, and so hash_search() gets found in the OS library first.

So this is all going to be a mess at some point sooner or later. :-(

Yikes, that is depressing / scary. I wonder if we ought to rename our
hash_search with some macro magic or such regardless of using -bundle or not.

Afaict this library doesn't have unresolved symbols, due to just linking to
libpq. So I don't think we really need this to be a shared module?

Apart from the hard distinction on macOS, in terms of the build system, the
distinction between "library" and "module" is mainly whether the resulting
library gets a soname, version symlinks, and what directory it is installed
in, so in that sense the discussion so far indicates that it should be a
module.

I don't think that happens if you don't specify a soname etc. And we'd need to
adjust the install dir either way, I think?

I suppose on macOS we could link it like a library and install it
like a module, but that would effectively create a third category, and I
don't see why that would be worth it.

I think there are postgres clients for iphone, not sure if they use
libpq. Today libpq might actually cross-build successfully for iOS [1]. But if
we use shared_module() that won't be the case for libpq-oauth.

Anyway, I don't have a strong position on this, I just wanted to bring it up
for consideration.

Greetings,

Andres Freund

[1]: I couldn't quickly figure out how to install additional SDKs on the
commandline on macos and then gave up before attaching a monitor to my
mac mini.

#285Andres Freund
andres@anarazel.de
In reply to: Jacob Champion (#283)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-07 09:41:25 -0700, Jacob Champion wrote:

On Mon, Apr 7, 2025 at 7:21 AM Andres Freund <andres@anarazel.de> wrote:

On 2025-04-04 17:27:46 -0700, Jacob Champion wrote:
I think otherwise
we run into the danger of the wrong library version being loaded in some
cases. Imagine a program being told which libpq to use via rpath. But then we
load the oauth module via a dlopen without a specified path - it'll just
search the global library locations.

Ah, you mean if the RPATH'd build doesn't have a flow, but the
globally installed one (with a different ABI) does? Yeah, that would
be a problem.

That and more: Even if the RPATH libpq does have oauth support, the
libpq-oauth won't necessarily be at the front of the global library search
path. So afaict you'll often get a different libpq-oauth.

Except perhaps on macos, where all this stuff works differently again. But I
managed to unload the required knowledge out of my brain and don't want to
further think about that :)

+# src/interfaces/libpq-oauth/exports.txt
+pg_fe_run_oauth_flow      1
+pg_fe_cleanup_oauth_flow  2
+pg_g_threadlock           3

The pg_g_threadlock thing seems pretty ugly. Can't we just pass that to the
relevant functions?

We can do it however we want, honestly, especially since the ABI isn't
public/stable. I chose this way just to ease review.

I found it rather confusing that libpq looks up a symbol and then sets
libpq-oauth's symbol to a pointer in libpq's namespace.

But more fundamentally, are we actually using
pg_g_threadlock anywhere? I removed all the relevant code and the tests still
seem to pass?

If you have an older Curl installation, it is used. Newer libcurls
don't need it.

Oh, wow. We hide the references to pg_g_threadlock behind a friggin macro?
That's just ...

Not your fault, I know...

A future simplification could be to pull the use of the threadlock
back into libpq, and have it perform a one-time
dlopen-plus-Curl-initialization under the lock... That would also get
rid of the dlerror() thread safety problems. But that's an awful lot
of moving parts under a mutex, which makes me a little nervous.

I still think we should simply reject at configure time if curl init isn't
threadsafe ;)

On Mon, Apr 7, 2025 at 7:43 AM Andres Freund <andres@anarazel.de> wrote:

Yes, this is correct. We want a shared module, not a shared library, in
meson parlance.

It's not entirely obvious to me that we do want that.

There recently was a breakage of building with PG on macos with meson, due to
the meson folks implementing a feature request to move away from using
bundles, as
1) bundles apparently aren't supported on iOS
2) there apparently aren't any restrictions left that require using bundles,
and there haven't been for a while.

Could you explain how this is related to .app bundles? I thought I was
just building a standard dylib.

The other kind of bundles (what on earth apple was thinking with the naming
here I don't know). Stuff linked with ld -bundle.

I don't know whether we need an exports file for libpq-oauth. The other
shared modules don't provide an export file, and I'm not sure whether this
is even supported for shared modules. I guess it doesn't hurt?

It does seem just using PGDLLEXPORT would suffice here.

My motivation was to strictly identify the ABI that we intend libpq to
use, to try to future-proof things for everybody. Especially since
we're duplicating functions from libpq that we'd rather not be. (The
use of RTLD_LOCAL maybe makes that more of a belt-and-suspenders
thing.)

PGDLLEXPORT serves that purpose too, fwiw. These days we use compiler flags
that restrict function visibility of everything not annotated with
PGDLLEXPORT.

However - that's all in c.h, port/win32.h, port/cygwin.h, which libpq headers
might not want to include.

Are there any downsides to the exports file?

I think it's fine either way.

Greetings,

Andres Freund

#286Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Andres Freund (#285)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 7, 2025 at 10:05 AM Andres Freund <andres@anarazel.de> wrote:

On 2025-04-07 09:41:25 -0700, Jacob Champion wrote:

Ah, you mean if the RPATH'd build doesn't have a flow, but the
globally installed one (with a different ABI) does? Yeah, that would
be a problem.

That and more: Even if the RPATH libpq does have oauth support, the
libpq-oauth won't necessarily be at the front of the global library search
path. So afaict you'll often get a different libpq-oauth.

dlopen() should respect RPATH, though? Either way, I agree with
pushing the minor version into the name (or else deciding that we will
keep the ABI completely stable across minor version bumps; not sure I
want to guarantee that just yet).

We can do it however we want, honestly, especially since the ABI isn't
public/stable. I chose this way just to ease review.

I found it rather confusing that libpq looks up a symbol and then sets
libpq-oauth's symbol to a pointer in libpq's namespace.

Yeah, I think a one-time init call would make this nicer.

A future simplification could be to pull the use of the threadlock
back into libpq, and have it perform a one-time
dlopen-plus-Curl-initialization under the lock... That would also get
rid of the dlerror() thread safety problems. But that's an awful lot
of moving parts under a mutex, which makes me a little nervous.

I still think we should simply reject at configure time if curl init isn't
threadsafe ;)

Practically speaking, I don't think that's a choice we can make. For
example, RHEL won't have threadsafe Curl until 10.

Could you explain how this is related to .app bundles? I thought I was
just building a standard dylib.

The other kind of bundles (what on earth apple was thinking with the naming
here I don't know). Stuff linked with ld -bundle.

Ah, some new corner-case magic to learn...

These days we use compiler flags
that restrict function visibility of everything not annotated with
PGDLLEXPORT.

Hm, I missed/forgot that. That is nice. Personally I like having a
single file document the exports, so I'll keep it that way for now
unless there are objections, but it's good to know it's not necessary.

Thanks,
--Jacob

#287Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#283)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.

Shouldn't we then include the *minor* version in the soname?

No objection here.

Mmmmh. Since we are currently only talking about 3 symbols, it doesn't
sound very likely that we'd have to bump this in a major branch.
Putting the minor version into the filename would make looking at
package diffs harder when upgrading. Do we really need this as opposed
to some hardcoded number like libpq.so.5.18 ?

Perhaps reusing the number from libpq.so.5.18 also for this lib would
be the way to go?

Which actually makes me wonder if we ought to instead load the library from a
specific location...

We could hardcode the disk location for version 1, I guess. I kind of
liked giving packagers the flexibility they're used to having from the
ld.so architecture, though. See below.

pkglibdir or a subdirectory of it might make sense.

Though for Debian, I'd actually prefer
/usr/lib/$DEB_HOST_MULTIARCH/libpq/libpq-oauth...
since the libpq packaging is independent from the major version
packaging.

Are there technical downsides of putting it into $libdir? I understand
there are "people" downsides, since we don't really want to signal
that this is a publicly linkable thing... but surely if you go through
the process of linking our library (which has no SONAME, includes the
major/minor version in its -l option, and has no pkgconfig) and
shoving a private pointer to a threadlock into it, you can keep all
the pieces when they break?

The Debian Policy expectation is that everything in libdir is a proper
library that could be linked to, and that random private stuff should
be elsewhere. But if being able to use the default lib search path is
an important argument, we could put it there.

Christoph

#288Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#287)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 7, 2025 at 11:47 AM Christoph Berg <myon@debian.org> wrote:

Mmmmh. Since we are currently only talking about 3 symbols, it doesn't
sound very likely that we'd have to bump this in a major branch.

The ABI extends to the pointers we're using, though. This module uses
PGconn* internals and libpq-int.h. [1]

Putting the minor version into the filename would make looking at
package diffs harder when upgrading. Do we really need this as opposed
to some hardcoded number like libpq.so.5.18 ?

Perhaps reusing the number from libpq.so.5.18 also for this lib would
be the way to go?

That doesn't address Andres' concern, though; if multiple
installations all use libpq.so.5.18, they still can't necessarily mix
and match.

In fact you can't mix and match across different settings of
ENABLE_SSL/GSS/SSPI, either. So I guess that nudges me towards
pkglibdir/<some subdirectory>, to avoid major pain for some unlucky
end user.

Though for Debian, I'd actually prefer
/usr/lib/$DEB_HOST_MULTIARCH/libpq/libpq-oauth...
since the libpq packaging is independent from the major version
packaging.

Not sure I understand this. Do you mean you'd patch our lookup for
Debian, to find it there instead of pkglibdir? I don't think we can
adopt that ourselves, for the same reasons as above; the two sides
have to be in lockstep.

The Debian Policy expectation is that everything in libdir is a proper
library that could be linked to, and that random private stuff should
be elsewhere. But if being able to use the default lib search path is
an important argument, we could put it there.

I was hoping the default lib search would make your life (and ours)
easier. If it doesn't, I can lock it down.

Thanks,
--Jacob

[1]: Future work ideas in this area include allowing other people to
compile their own loadable flow plugin, so that the utilities can use
it. (Only Device Authorization can be used by psql et al, for 18.) At
that point, developers will need a limited API to twiddle the
connection handle, and our builtin flow(s?) could use the same API.
But that's not work we can tackle for 18.

#289Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#288)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

Putting the minor version into the filename would make looking at
package diffs harder when upgrading. Do we really need this as opposed
to some hardcoded number like libpq.so.5.18 ?

Perhaps reusing the number from libpq.so.5.18 also for this lib would
be the way to go?

That doesn't address Andres' concern, though; if multiple
installations all use libpq.so.5.18, they still can't necessarily mix
and match.

Well the whole world is linking against libpq.so.5 and not breaking
either. Why is this module worse? (I guess the answer is internal data
structures... but does it have to be worse?)

Though for Debian, I'd actually prefer
/usr/lib/$DEB_HOST_MULTIARCH/libpq/libpq-oauth...
since the libpq packaging is independent from the major version
packaging.

Not sure I understand this. Do you mean you'd patch our lookup for
Debian, to find it there instead of pkglibdir? I don't think we can
adopt that ourselves, for the same reasons as above; the two sides
have to be in lockstep.

Because pkglibdir would be something like /usr/lib/postgresql/17/lib,
even when there is only one libpq5 package for all major server
versions on Debian. So if you have postgresql-16 installed, you'd end
up with

/usr/lib/postgresql/16/{bin,lib} everything from PG 16
/usr/lib/x86_64-linux-gnu/libpq* libpq5
/usr/lib/postgresql/17/lib/libpq-oauth.so

... which is weird.

[1] Future work ideas in this area include allowing other people to
compile their own loadable flow plugin, so that the utilities can use
it. (Only Device Authorization can be used by psql et al, for 18.) At
that point, developers will need a limited API to twiddle the
connection handle, and our builtin flow(s?) could use the same API.
But that's not work we can tackle for 18.

Perhaps keep things simple for PG18 and choose a simple filename and
location. If future extensions need something more elaborate, we can
still switch later.

Christoph

#290Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#289)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 7, 2025 at 2:58 PM Christoph Berg <myon@debian.org> wrote:

Why is this module worse? (I guess the answer is internal data
structures... but does it have to be worse?)

It doesn't have to be, in general, and the coupling surface is small
enough (libpq_append_conn_error) that we have a relatively easy path
toward decoupling it in the future. But for 18, I suspect no one will
be happy with me if I try to turn that inside out right this instant.
The goal was just to turn an internal implementation detail into a
delay-loaded internal implementation detail.

Because pkglibdir would be something like /usr/lib/postgresql/17/lib,
even when there is only one libpq5 package for all major server
versions on Debian. So if you have postgresql-16 installed, you'd end
up with

/usr/lib/postgresql/16/{bin,lib} everything from PG 16
/usr/lib/x86_64-linux-gnu/libpq* libpq5
/usr/lib/postgresql/17/lib/libpq-oauth.so

... which is weird.

Weird, sure -- but it's correct, right? Because you have PG17's OAuth
flow installed.

If someone comes to the list with a flow bug in three years, and I ask
them what version they have installed, and they tell me "PG16, and
it's loading /usr/lib/aarch64-linux-gnu/libpq/libpq-oauth.so." That
won't be incredibly helpful IMHO.

[1] Future work ideas in this area include allowing other people to
compile their own loadable flow plugin, so that the utilities can use
it. (Only Device Authorization can be used by psql et al, for 18.) At
that point, developers will need a limited API to twiddle the
connection handle, and our builtin flow(s?) could use the same API.
But that's not work we can tackle for 18.

Perhaps keep things simple for PG18 and choose a simple filename and
location. If future extensions need something more elaborate, we can
still switch later.

Sounds good. Any opinions from the gallery on what a "libpq plugin
subdirectory" in pkglibdir should be called? ("client", "modules",
"plugins"...?) Is there still a good reason to put any explicit
versioning into the filename if we do that?

Thanks,
--Jacob

#291Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#290)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 7, 2025 at 3:26 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Sounds good. Any opinions from the gallery on what a "libpq plugin
subdirectory" in pkglibdir should be called? ("client", "modules",
"plugins"...?)

Hm, one immediate consequence of hardcoding pkglibdir is that we can
no longer rely on LD_LIBRARY_PATH for pre-installation testing.
(Contrast with the server, which is able to relocate extension paths
based on its executable location.)

--Jacob

#292Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#291)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 8 Apr 2025, at 04:10, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Apr 7, 2025 at 3:26 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Sounds good. Any opinions from the gallery on what a "libpq plugin
subdirectory" in pkglibdir should be called? ("client", "modules",
"plugins"...?)

Hm, one immediate consequence of hardcoding pkglibdir is that we can
no longer rely on LD_LIBRARY_PATH for pre-installation testing.
(Contrast with the server, which is able to relocate extension paths
based on its executable location.)

That strikes me as a significant drawback.

--
Daniel Gustafsson

#293Bruce Momjian
bruce@momjian.us
In reply to: Daniel Gustafsson (#292)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 12:34:02PM +0200, Daniel Gustafsson wrote:

On 8 Apr 2025, at 04:10, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Apr 7, 2025 at 3:26 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Sounds good. Any opinions from the gallery on what a "libpq plugin
subdirectory" in pkglibdir should be called? ("client", "modules",
"plugins"...?)

Hm, one immediate consequence of hardcoding pkglibdir is that we can
no longer rely on LD_LIBRARY_PATH for pre-installation testing.
(Contrast with the server, which is able to relocate extension paths
based on its executable location.)

That strikes me as a signifant drawback.

Uh, where are we on the inclusion of curl in our build? Maybe it was
explained but I have not seen it. I still see
src/interfaces/libpq/fe-auth-oauth-curl.c.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#294Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#293)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 7:32 AM Bruce Momjian <bruce@momjian.us> wrote:

Uh, where are we on the inclusion of curl in our build? Maybe it was
explained but I have not seen it.

The above is discussing a patch to split this into its own loadable
module. Andres and Christoph's feedback has been shaping where we put
that module, exactly.

Thanks,
--Jacob

#295Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#294)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 8 Apr 2025, at 17:04, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Tue, Apr 8, 2025 at 7:32 AM Bruce Momjian <bruce@momjian.us> wrote:

Uh, where are we on the inclusion of curl in our build? Maybe it was
explained but I have not seen it.

The above is discussing a patch to split this into its own loadable
module. Andres and Christoph's feedback has been shaping where we put
that module, exactly.

There is also an open item registered for this.

--
Daniel Gustafsson

#296Bruce Momjian
bruce@momjian.us
In reply to: Jacob Champion (#294)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 08:04:22AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 7:32 AM Bruce Momjian <bruce@momjian.us> wrote:

Uh, where are we on the inclusion of curl in our build? Maybe it was
explained but I have not seen it.

The above is discussing a patch to split this into its own loadable
module. Andres and Christoph's feedback has been shaping where we put
that module, exactly.

Uh, I was afraid that was the case, which is why I asked. We have just
hit feature freeze, so it is not good that we are still "shaping" the
patch. Should we consider reverting it? It is true we still "adjust"
patches after feature freeze.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#297Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#296)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-08 11:13:51 -0400, Bruce Momjian wrote:

On Tue, Apr 8, 2025 at 08:04:22AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 7:32 AM Bruce Momjian <bruce@momjian.us> wrote:

Uh, where are we on the inclusion of curl in our build? Maybe it was
explained but I have not seen it.

The above is discussing a patch to split this into its own loadable
module. Andres and Christoph's feedback has been shaping where we put
that module, exactly.

Uh, I was afraid that was the case, which is why I asked. We have just
hit feature freeze, so it is not good that we are still "shaping" the
patch. Should we consider reverting it? It is true we still "adjust"
patches after feature freeze.

You brought the dependency concern up well after the feature was merged, after
it had been in development for a *long* time. It wasn't a secret that it had a
dependency on curl. I don't think it's fair to penalize a feature's
authors for not finishing the implementation of a complicated and
completely new requirement within 17 days.

Greetings,

Andres Freund

#298Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#297)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 11:20:11AM -0400, Andres Freund wrote:

Hi,

On 2025-04-08 11:13:51 -0400, Bruce Momjian wrote:

On Tue, Apr 8, 2025 at 08:04:22AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 7:32 AM Bruce Momjian <bruce@momjian.us> wrote:

Uh, where are we on the inclusion of curl in our build? Maybe it was
explained but I have not seen it.

The above is discussing a patch to split this into its own loadable
module. Andres and Christoph's feedback has been shaping where we put
that module, exactly.

Uh, I was afraid that was the case, which is why I asked. We have just
hit feature freeze, so it is not good that we are still "shaping" the
patch. Should we consider reverting it? It is true we still "adjust"
patches after feature freeze.

You brought the dependency concern up well after the feature was merged, after
it had been in development for a *long* time. It wasn't a secret that it had a
dependency on curl. I don't think it's fair to penalize a feature's
authors for not finishing the implementation of a complicated and
completely new requirement within 17 days.

Fair point --- I was just asking.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#299Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#294)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

The above is discussing a patch to split this into its own loadable
module.

Wasn't sure where to put this exactly, the thread is long and I couldn't
find any discussion around it:

How does the proposal with a loadable module affect a static libpq.a?

I have not tried yet, but is my assumption correct that I could build
a libpq.a with oauth/curl support on current HEAD?

If yes, would that still be an option after the split?

Thanks,

Wolfgang

#300Bruce Momjian
bruce@momjian.us
In reply to: Wolfgang Walther (#299)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 06:01:42PM +0200, Wolfgang Walther wrote:

Jacob Champion:

The above is discussing a patch to split this into its own loadable
module.

Wasn't sure where to put this exactly, the thread is long and I couldn't
find any discussion around it:

How does the proposal with a loadable module affect a static libpq.a?

I have not tried, yet, but is my assumption correct, that I could build a
libpq.a with oauth/curl support on current HEAD?

If yes, would that still be an option after the split?

How does this patch help us avoid having to handle curl CVEs and
curl's additional dependencies? As I understand the patch, it makes
libpq _not_ have additional dependencies but moves the dependencies to a
special loadable library that libpq can use.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#301Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#299)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:02 AM Wolfgang Walther <walther@technowledgy.de> wrote:

How does the proposal with a loadable module affect a static libpq.a?

The currently proposed patch would have you package and install a
separate .so module implementing OAuth, which the staticlib would load
once when needed. Similarly to how you still have to somehow
dynamically link your static app against Curl.

As a staticlib user, how do you feel about that?

I have not tried, yet, but is my assumption correct, that I could build
a libpq.a with oauth/curl support on current HEAD?

Yes.

--Jacob

#302Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#300)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:14 AM Bruce Momjian <bruce@momjian.us> wrote:

How does this patch help us avoid having to handle curl CVEs and
curl's additional dependencies? As I understand the patch, it makes
libpq _not_ have additional dependencies but moves the dependencies to a
special loadable library that libpq can use.

It allows packagers to ship the OAuth library separately, so end users
that don't want the additional exposure don't have to install it at
all.

--Jacob

#303Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#301)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

The currently proposed patch would have you package and install a
separate .so module implementing OAuth, which the staticlib would load
once when needed. Similarly to how you still have to somehow
dynamically link your static app against Curl.

As a staticlib user, how do you feel about that?

When linking statically, I am producing entirely statically linked
single binaries. Those contain libpq, all other dependencies, and would
also contain curl.

The "entirely statically linked" thing is actually enforced by the build
system (NixOS' pkgsStatic here), so dlopen() might just not be possible.
Not exactly sure right now whether it's stubbed out or just not
available at all.

This means that shipping another .so file will not happen with this
approach. Assuming OAuth will be picked up by some of the bigger
providers, that would... make me feel quite bad about it, actually.

I'm not seeing the overall problem, yet. When I build with
--enable-curl... ofc, I have a dependency on cURL. That's kind of the
point. When I don't want that, then I just disable it. And that should
also not be a problem for distributions - they could offer a libpq and a
libpq_oauth package, where only one of them can be installed at the same
time, I guess? *

Best,

Wolfgang

* Currently, the two build systems don't handle the "please build only
libpq" scenario well. If that was supported better, building a second
package with oauth support could be much easier.

#304Bruce Momjian
bruce@momjian.us
In reply to: Jacob Champion (#302)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 09:17:03AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 9:14 AM Bruce Momjian <bruce@momjian.us> wrote:

How does this patch help us avoid having to handle curl CVEs and
curl's additional dependencies? As I understand the patch, it makes
libpq _not_ have additional dependencies but moves the dependencies to a
special loadable library that libpq can use.

It allows packagers to ship the OAuth library separately, so end users
that don't want the additional exposure don't have to install it at
all.

Okay, so how would they do that? I understand how that would happen if
it was an external extension, but how if it is under /src or /contrib?

FYI, I see a good number of curl CVEs:

https://curl.se/docs/security.html

Would we have to put out minor releases for curl CVEs? I don't think we
have to for OpenSSL so would curl be the same?

I am asking these questions now so we can save time in getting this
closed.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#305Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#302)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

It allows packagers to ship the OAuth library separately, so end users
that don't want the additional exposure don't have to install it at
all.

Ah, this came in after I sent my other mail, with this footnote:

Currently, the two build systems don't handle the "please build only
libpq" scenario well. If that was supported better, building a second
package with oauth support could be much easier.

I think we should rather improve the build systems to handle this case,
to give packagers more flexibility.

Best,

Wolfgang

#306Daniel Gustafsson
daniel@yesql.se
In reply to: Bruce Momjian (#304)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 8 Apr 2025, at 18:33, Bruce Momjian <bruce@momjian.us> wrote:

Would we have to put out minor releases for curl CVEs? I don't think we
have to for OpenSSL so would curl be the same?

Why do you envision this being different from all other dependencies we have?
For OpenSSL we also happily build against a version (and until recently,
several versions) which is EOL and don't even receive security fixes.

--
Daniel Gustafsson

#307Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#304)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:33 AM Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Apr 8, 2025 at 09:17:03AM -0700, Jacob Champion wrote:

It allows packagers to ship the OAuth library separately, so end users
that don't want the additional exposure don't have to install it at
all.

Okay, so how would they do that? I understand how that would happen if
it was an external extension, but how if it is under /src or /contrib.

By adding the new .so to a different package. For example, RPM specs
would just let you say "hey, this .so I just built doesn't go into the
main client package, it goes into an add-on that depends on the client
package." It's the same way separate client and server packages get
generated from the same single build of Postgres.

Would we have to put out minor releases for curl CVEs?

In general, no.

Thanks,
--Jacob

#308Bruce Momjian
bruce@momjian.us
In reply to: Daniel Gustafsson (#306)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 06:42:19PM +0200, Daniel Gustafsson wrote:

On 8 Apr 2025, at 18:33, Bruce Momjian <bruce@momjian.us> wrote:

Would we have to put out minor releases for curl CVEs? I don't think we
have to for OpenSSL so would curl be the same?

Why do you envision this being different from all other dependencies we have?
For OpenSSL we also happily build against a version (and until recently,
several versions) which is EOL and don't even receive security fixes.

I don't know. I know people scan our downloads and report when the
scanners detect something, but I am unclear what those scanners are
doing. Would they see some new risks with curl?

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#309Bruce Momjian
bruce@momjian.us
In reply to: Jacob Champion (#307)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 09:43:01AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 9:33 AM Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Apr 8, 2025 at 09:17:03AM -0700, Jacob Champion wrote:

It allows packagers to ship the OAuth library separately, so end users
that don't want the additional exposure don't have to install it at
all.

Okay, so how would they do that? I understand how that would happen if
it was an external extension, but how if it is under /src or /contrib.

By adding the new .so to a different package. For example, RPM specs
would just let you say "hey, this .so I just built doesn't go into the
main client package, it goes into an add-on that depends on the client
package." It's the same way separate client and server packages get
generated from the same single build of Postgres.

Do we have any idea how many packagers are interested in doing this?

Would we have to put out minor releases for curl CVEs?

In general, no.

Good.

FYI, I saw bug bounty dollar amounts next to each curl CVE:

https://curl.se/docs/security.html

No wonder some people ask for bounties.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#310Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#303)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:32 AM Wolfgang Walther <walther@technowledgy.de> wrote:

And that should also not be a problem for distributions - they could offer a libpq and a libpq_oauth package, where only one of them can be installed at the same time, I guess? *

My outsider understanding is that maintaining this sort of thing
becomes a major headache, because of combinatorics. You don't really
want to ship a libpq and libpq-with-gss and libpq-with-oauth and
libpq-with-oauth-and-gss and ...

--Jacob

#311Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#310)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

On Tue, Apr 8, 2025 at 9:32 AM Wolfgang Walther <walther@technowledgy.de> wrote:

And that should also not be a problem for distributions - they could offer a libpq and a libpq_oauth package, where only one of them can be installed at the same time, I guess? *

My outsider understanding is that maintaining this sort of thing
becomes a major headache, because of combinatorics. You don't really
want to ship a libpq and libpq-with-gss and libpq-with-oauth and
libpq-with-oauth-and-gss and ...

That would only be the case if you were to consider those other
dependencies as "dangerous" as cURL. But we already depend on them. So
if it's really the case that cURL is that much worse, that we consider
loading it as a module... then the combinatorics should not be a problem
either.

However, if the other deps are considered problematic as well, then the
ship has already sailed, and there is no point in a special case here
anymore.

Best,

Wolfgang

#312Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#309)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:49 AM Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Apr 8, 2025 at 09:43:01AM -0700, Jacob Champion wrote:

By adding the new .so to a different package. For example, RPM specs
would just let you say "hey, this .so I just built doesn't go into the
main client package, it goes into an add-on that depends on the client
package." It's the same way separate client and server packages get
generated from the same single build of Postgres.

Do we have any idea how many packagers are interested in doing this?

I'm not sure how to answer this. The primary drivers from the dev side
are you and Tom, I think. Christoph seems to be on board with a split
as long as we don't make his life harder. Wolfgang appears to be a
packager who would not make use of a split (and in fact cannot).

--Jacob

#313Bruce Momjian
bruce@momjian.us
In reply to: Wolfgang Walther (#311)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 06:57:18PM +0200, Wolfgang Walther wrote:

Jacob Champion:

On Tue, Apr 8, 2025 at 9:32 AM Wolfgang Walther <walther@technowledgy.de> wrote:

And that should also not be a problem for distributions - they could offer a libpq and a libpq_oauth package, where only one of them can be installed at the same time, I guess? *

My outsider understanding is that maintaining this sort of thing
becomes a major headache, because of combinatorics. You don't really
want to ship a libpq and libpq-with-gss and libpq-with-oauth and
libpq-with-oauth-and-gss and ...

That would only be the case if you were to consider those other
dependencies as "dangerous" as cURL. But we already depend on them. So if
it's really the case that cURL is that much worse, that we consider loading
it as a module... then the combinatorics should not be a problem either.

However, if the other deps are considered problematic as well, then the ship
has already sailed, and there is no point in a special case here anymore.

Yes, I think this is what I am asking too. For me it was curl's
security reputation and whether that would taint the security reputation
of libpq. For Tom, I think it was the dependency additions.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#314Bruce Momjian
bruce@momjian.us
In reply to: Jacob Champion (#312)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 10:00:56AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 9:49 AM Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Apr 8, 2025 at 09:43:01AM -0700, Jacob Champion wrote:

By adding the new .so to a different package. For example, RPM specs
would just let you say "hey, this .so I just built doesn't go into the
main client package, it goes into an add-on that depends on the client
package." It's the same way separate client and server packages get
generated from the same single build of Postgres.

Do we have any idea how many packagers are interested in doing this?

I'm not sure how to answer this. The primary drivers from the dev side
are you and Tom, I think. Christoph seems to be on board with a split
as long as we don't make his life harder. Wolfgang appears to be a
packager who would not make use of a split (and in fact cannot).

Okay, I have just posted a more detailed email about my security
concern, so let's look at that. I am ready to admit I am wrong.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#315Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#311)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:57 AM Wolfgang Walther <walther@technowledgy.de> wrote:

if it's really the case that cURL is that much worse

(it is not, but I am sympathetic to the argument that if you don't use
it, you don't need it in the process space)

However, if the other deps are considered problematic as well, then the
ship has already sailed, and there is no point in a special case here
anymore.

I think this line of argument is unlikely to find traction. Upthread
there were people asking if we could maybe split out other
possibly-unused dependencies in the future, like Kerberos.

--Jacob

#316Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#315)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

However, if the other deps are considered problematic as well, then the
ship has already sailed, and there is no point in a special case here
anymore.

I think this line of argument is unlikely to find traction. Upthread
there were people asking if we could maybe split out other
possibly-unused dependencies in the future, like Kerberos.

Well, yes, that's kind of what I'm saying. There shouldn't be a special
case for cURL; those other deps should be handled the same way.

And if that means making libpq modular at run-time, then this should be
planned and built with all deps, and other use-cases (like static
linking) in mind - and not like it is right now.

Best,

Wolfgang

#317Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#316)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 10:10 AM Wolfgang Walther
<walther@technowledgy.de> wrote:

And if that means making libpq modular at run-time, then this should be planned and built with all deps, and other use-cases (like static linking) in mind - and not like it is right now.

I think that'd be neat in concept, but specifically this thread is
discussing a PG18 open item. For future releases, if we're happy with
how Curl gets split out, maybe that would be fuel for other
delay-loaded client dependencies. I'm not sure.

--Jacob

#318Bruce Momjian
bruce@momjian.us
In reply to: Jacob Champion (#317)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 10:13:46AM -0700, Jacob Champion wrote:

On Tue, Apr 8, 2025 at 10:10 AM Wolfgang Walther
<walther@technowledgy.de> wrote:

And if that means making libpq modular at run-time, then this should be planned and built with all deps, and other use-cases (like static linking) in mind - and not like it is right now.

I think that'd be neat in concept, but specifically this thread is
discussing a PG18 open item. For future releases, if we're happy with
how Curl gets split out, maybe that would be fuel for other
delay-loaded client dependencies. I'm not sure.

Well, if we think we are going to do that, it seems we would need a
different architecture than the one being proposed for PG 18, which
could lead to a lot of user/developer API churn.

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#319Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#318)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 10:15 AM Bruce Momjian <bruce@momjian.us> wrote:

Well, if we think we are going to do that, it seems we would need a
different architecture than the one being proposed for PG 18, which
could lead to a lot of user/developer API churn.

A major goal of the current patch proposal is to explicitly hide this
from the end-user and public APIs. So it can be changed without public
breakage. It can't be hidden from packagers, of course, but that's the
point of the feature request.

--Jacob

#320Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#292)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 3:34 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 8 Apr 2025, at 04:10, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
Hm, one immediate consequence of hardcoding pkglibdir is that we can
no longer rely on LD_LIBRARY_PATH for pre-installation testing.
(Contrast with the server, which is able to relocate extension paths
based on its executable location.)

That strikes me as a signifant drawback.

Yeah, but it's one of those things that feels like it must have been
solved by the others in the space. Once it's installed, the concern
goes away (unless you demand absolute relocatability without
recompilation). I'll take a look at how libkrb/libmagick do their
testing.

If it somehow turns out to be impossible, one option might be to shove
a more detailed ABI identifier into the name. In other words, builds
without ENABLE_SSL/GSS/SSPI or whatever get different names on disk.
That doesn't scale at all, but it's a short-term option that would put
more pressure on a medium-term stable ABI.

Thanks,
--Jacob

#321Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#303)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 9:32 AM Wolfgang Walther <walther@technowledgy.de> wrote:

This means that shipping another .so file will not happen with this approach. Assuming OAuth will be picked up by some of the bigger providers, that would... make me feel quite bad about it, actually.

It occurs to me that I didn't respond to this point explicitly. I
would like to avoid making your life harder.

Would anybody following along be opposed to a situation where
- dynamiclib builds go through the dlopen() shim
- staticlib builds always rely on statically linked symbols

Or do we need to be able to mix and match?

--Jacob

#322Andres Freund
andres@anarazel.de
In reply to: Bruce Momjian (#313)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-08 13:02:11 -0400, Bruce Momjian wrote:

On Tue, Apr 8, 2025 at 06:57:18PM +0200, Wolfgang Walther wrote:

Jacob Champion:

On Tue, Apr 8, 2025 at 9:32 AM Wolfgang Walther <walther@technowledgy.de> wrote:

And that should also not be a problem for distributions - they could offer a libpq and a libpq_oauth package, where only one of them can be installed at the same time, I guess? *

My outsider understanding is that maintaining this sort of thing
becomes a major headache, because of combinatorics. You don't really
want to ship a libpq and libpq-with-gss and libpq-with-oauth and
libpq-with-oauth-and-gss and ...

That would only be the case if you were to consider those other
dependencies as "dangerous" as cURL. But we already depend on them. So if
it's really the case that cURL is that much worse, that we consider loading
it as a module... then the combinatorics should not be a problem either.

However, if the other deps are considered problematic as well, then the ship
has already sailed, and there is no point in a special case here anymore.

Yes, I think this is what I am asking too. For me it was curl's
security reputation and whether that would taint the security reputation
of libpq. For Tom, I think it was the dependency additions.

I'd say that curl's security reputation is higher than most of our other
dependencies. We have dependencies for libraries with regular security issues,
with those issues at times not getting addressed for prolonged amounts of
time.

I do agree that there's an issue increasing libpq's indirect footprint
substantially, but I don't think that's due to curl's reputation or
anything. It's just needing a significantly higher number of shared libraries,
which comes with runtime and distribution overhead.

Greetings,

Andres Freund

#323Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#322)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 02:11:19PM -0400, Andres Freund wrote:

Hi,

On 2025-04-08 13:02:11 -0400, Bruce Momjian wrote:

On Tue, Apr 8, 2025 at 06:57:18PM +0200, Wolfgang Walther wrote:

Jacob Champion:

On Tue, Apr 8, 2025 at 9:32 AM Wolfgang Walther <walther@technowledgy.de> wrote:

And that should also not be a problem for distributions - they could offer a libpq and a libpq_oauth package, where only one of them can be installed at the same time, I guess? *

My outsider understanding is that maintaining this sort of thing
becomes a major headache, because of combinatorics. You don't really
want to ship a libpq and libpq-with-gss and libpq-with-oauth and
libpq-with-oauth-and-gss and ...

That would only be the case if you were to consider those other
dependencies as "dangerous" as cURL. But we already depend on them. So if
it's really the case that cURL is that much worse, that we consider loading
it as a module... then the combinatorics should not be a problem either.

However, if the other deps are considered problematic as well, then the ship
has already sailed, and there is no point in a special case here anymore.

Yes, I think this is what I am asking too. For me it was curl's
security reputation and whether that would taint the security reputation
of libpq. For Tom, I think it was the dependency additions.

I'd say that curl's security reputation is higher than most of our other
dependencies. We have dependencies for libraries with regular security issues,
with those issues at times not getting addressed for prolonged amounts of
time.

I see curl CVEs regularly as part of Debian minor updates, which is why
I had concerns, but if it is similar to OpenSSL, and better than other
libraries that don't even get CVEs, I guess it's okay. However, is this
true for libpq libraries or database server libraries? Does it matter?

--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com

Do not let urgent matters crowd out time for investment in the future.

#324Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#320)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 10:36 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Yeah, but it's one of those things that feels like it must have been
solved by the others in the space. Once it's installed, the concern
goes away (unless you demand absolute relocatability without
recompilation). I'll take a look at how libkrb/libmagick do their
testing.

Perhaps unsurprisingly, they inject different lookup paths via
envvars. We could do the same (I have FUD about the security
characteristics)...

If it somehow turns out to be impossible, one option might be to shove
a more detailed ABI identifier into the name.

...but I wonder if I can invert the dependency on
libpq_append_conn_error entirely, and remove that part of the ABI
surface, then revisit the discussion on `-<major>.so` vs
`-<major>-<minor>.so`.

--Jacob

#325Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Bruce Momjian (#323)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 11:25 AM Bruce Momjian <bruce@momjian.us> wrote:

However, is this
true for libpq libraries or database server libraries? Does it matter?

The dependency on Curl is through libpq. We have some server-side
features that pull in libpq and would transitively depend on Curl. But
for Curl to be initialized server-side, the two peers still have to
agree on the use of OAuth.

It seems unlikely that users would opt into that for, say,
postgres_fdw in PG18, because the Device Authorization flow is the
only one we currently ship, and it's intended for end users. A flow
that prints a code to stderr is not very helpful for your proxy.

--Jacob

#326Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#283)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 7, 2025 at 9:41 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Not sure, the code looks correct at first glance. However, you could
also just keep the libpq-oauth strings in the libpq catalog. There
isn't really a need to make a separate one, since the versions you end
up installing are locked to each other. So you could for example in
libpq's nls.mk just add

../libpq-oauth/oauth-curl.c

etc. to the files.

Oh, that's an interesting idea. Thanks, I'll give it a try.

A consequence of this is that our copy of libpq_binddomain isn't using
the same mutex as libpq's copy to protect the "libpq-18" message
domain. We could discuss whether or not it matters, since we don't
support Windows, but it doesn't feel architecturally sound to me. If
we want to reuse the same domain, I think the module should be using
libpq's libpq_gettext(). (Which we could do, again through the magic
of dependency injection.)

Maybe it would also make sense to make libpq-oauth a subdirectory of the
libpq directory instead of a peer.

Works for me.

It does not, however, work for our $(recurse) setup in the makefiles
-- a shared library depending on a parent directory's shared library
leads to infinite recursion, with the current tools -- so I'll keep it
at the current directory level for now.

Thanks,
--Jacob

#327Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#326)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 8, 2025 at 2:32 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I think the module should be using
libpq's libpq_gettext(). (Which we could do, again through the magic
of dependency injection.)

To illustrate what I mean, v3 introduces an initialization function
that names the three internal dependencies (libpq_gettext,
pg_g_threadlock, and conn->errorMessage) explicitly. I decided not to
attempt injecting the variadic libpq_append_conn_error function, and
instead focus a level below it, since we must depend directly on
libpq_gettext anyway.

This is maybe overkill, if it's decided that the two halves must
always come from the same build, but I think it should decouple the
two sides enough that this is now a question of user experience rather
than ABI correctness.

Is it acceptable/desirable for a build that has not been configured
--with-libcurl to still pick up a compatible OAuth implementation
installed by the distro? If so, we can go with a "bare" dlopen(). If
that's not okay, I think we will probably need to use pkglibdir or
some derivative, and introduce a way for tests (and users?) to
override that directory selection. Unless someone has a good idea on
how we can split the difference.

--Jacob

Attachments:

v3-0001-WIP-split-Device-Authorization-flow-into-dlopen-d.patch (application/octet-stream)
From 19e6887b43a3d907465b9e36816ef6ef235207a6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v3] WIP: split Device Authorization flow into dlopen'd module

See notes on mailing list.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 meson.build                                   |  12 +-
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  55 +++++
 src/interfaces/libpq-oauth/README             |  30 +++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  31 +++
 .../oauth-curl.c}                             |  10 +-
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 201 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  35 +++
 src/interfaces/libpq/Makefile                 |   4 -
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          |  94 +++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   4 +-
 src/interfaces/libpq/meson.build              |   4 -
 src/interfaces/libpq/nls.mk                   |  12 +-
 16 files changed, 502 insertions(+), 31 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (99%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/meson.build b/meson.build
index 27717ad8976..6f1a8ea55ef 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -3251,17 +3252,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depends on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..f44766dd549
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,55 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = libpq-oauth-$(MAJORVERSION)
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	oauth-curl.o \
+	oauth-utils.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = -lcurl $(filter -lintl, $(LIBS))
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# Make dependencies on pg_config_paths.h visible in all builds.
+oauth-curl.o: oauth-curl.c $(top_builddir)/src/port/pg_config_paths.h
+
+$(top_builddir)/src/port/pg_config_paths.h:
+	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
+
+all: all-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..ef746617c71
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,30 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, the server asks for it, and a
+libpq client has not installed its own custom OAuth flow, libpq will attempt to
+delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq before the flow is run. It also
+relies on libpq to expose conn->errorMessage, via an errmsg_impl.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..79916e7aa62
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,31 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not libcurl.found() or host_system == 'windows'
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+  'oauth-utils.c',
+)
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 99%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index cd9c0323bb6..759cd494aae 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -29,8 +29,9 @@
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#include "oauth-utils.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -2487,8 +2488,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..7a0949c071b
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,201 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+pgthreadlock_t pg_g_threadlock;
+libpq_gettext_func libpq_gettext_impl;
+conn_errorMessage_func conn_errorMessage;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build doesn't enable NLS. That's a concerning
+		 * mismatch, but in this particular case we can handle it. Try to warn
+		 * a developer with an assertion, though.
+		 */
+		Assert(false);
+
+		/*
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..279fc113248
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,35 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..8cf8d9e54d8 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -64,10 +64,6 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
-
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..416964ac335 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifndef WIN32
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,85 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+static bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+#ifdef WIN32
+	return false;
+#else
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	state->builtin_flow = dlopen("libpq-oauth-" PG_MAJORVERSION DLSUFFIX,
+								 RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition, it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Inject necessary function pointers into the module.
+	 */
+	init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+		 libpq_gettext,
+#else
+		 NULL,
+#endif
+		 conn_errorMessage);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+#endif							/* !WIN32 */
+}
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +876,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
 		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..699ba42acc2 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,10 +33,10 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
 
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..47d38e9378f 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
-- 
2.34.1

#328Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#327)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

Is it acceptable/desirable for a build that has not been configured
--with-libcurl to still pick up a compatible OAuth implementation
installed by the distro? If so, we can go with a "bare" dlopen(). If
that's not okay, I think we will probably need to use pkglibdir or
some derivative, and introduce a way for tests (and users?) to
override that directory selection. Unless someone has a good idea on
how we can split the difference.

One design goal could be reproducible builds-alike, that is, have
libpq configured with or without libcurl be completely identical, and
the feature being present is simply the libpq-oauth.so file existing
or not. That might make using plain dlopen() more attractive.

Christoph

#329Jelte Fennema-Nio
postgres@jeltef.nl
In reply to: Jacob Champion (#327)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 9, 2025, 10:58 Jacob Champion <jacob.champion@enterprisedb.com>
wrote:

Is it acceptable/desirable for a build that has not been configured
--with-libcurl to still pick up a compatible OAuth implementation
installed by the distro? If so, we can go with a "bare" dlopen(). If
that's not okay, I think we will probably need to use pkglibdir or
some derivative, and introduce a way for tests (and users?) to
override that directory selection. Unless someone has a good idea on
how we can split the difference.

That seems like it could cause some confusing situations and would also
make local testing of different compilation options difficult. How about
ifdef-ing away the dlopen call if --with-libcurl is not specified? So to
have OAuth support, you need to compile libpq with --with-libcurl AND the
libpq-oauth.so file needs to be present.

(resent because I failed to reply to all from my phone)

#330Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#328)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 9, 2025 at 1:14 AM Christoph Berg <myon@debian.org> wrote:

One design goal could be reproducible builds-alike, that is, have
libpq configured with or without libcurl be completely identical, and
the feature being present is simply the libpq-oauth.so file existing
or not. That might make using plain dlopen() more attractive.

I think that's more or less what the current v3 does, but Jelte's
point (which is upthread for me but downthread for others :D) is that
making them identical might not actually be desirable in this
situation: if you don't compile --with-libcurl, then when you test
that the feature is disabled, you might find that it is not.

On Wed, Apr 9, 2025 at 1:39 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

How about ifdef-ing away the dlopen() call if --with-libcurl is not specified?

This sounds like a pretty decent, simple way to go. Christoph, does
that ring any alarm bells from your perspective?

Thanks,
--Jacob

#331Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#330)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

How about ifdef-ing away the dlopen() call if --with-libcurl is not specified?

This sounds like a pretty decent, simple way to go. Christoph, does
that ring any alarm bells from your perspective?

OK for me. What I said to the contrary in the other mail was just a
suggestion.

Christoph

#332Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#331)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 9, 2025 at 10:44 AM Christoph Berg <myon@debian.org> wrote:

Re: Jacob Champion

How about ifdef-ing away the dlopen() call if --with-libcurl is not specified?

This sounds like a pretty decent, simple way to go. Christoph, does
that ring any alarm bells from your perspective?

OK for me. What I said to the contrary in the other mail was just a
suggestion.

Cool, thanks! v4 does it that way. It also errors out at configure
time if you demand libpq-oauth on a platform that does not support it.

The Autoconf side was still polluting LIBS and CPPFLAGS with Curl
flags. I have isolated them in v4, with some additional m4
boilerplate. IMO this makes the subtle difference between USE_LIBCURL
(which means the libpq-oauth module is enabled to be built) and
HAVE_LIBCURL (which means libcurl is installed locally) even more
confusing.

Christoph noted earlier that this was also confusing from the
packaging side, and Daniel proposed -Doauth-client/--with-oauth-client
as the feature-switch name instead. Any objections? Unfortunately it
would mean a buildfarm email is in order, so we should get the name
locked in.

Next up: staticlibs.

Thanks,
--Jacob

Attachments:

v4-0001-WIP-split-Device-Authorization-flow-into-dlopen-d.patch (application/octet-stream)
From c500ad9f5653f02440449404ba1a975dab6ddfe6 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v4] WIP: split Device Authorization flow into dlopen'd module

See notes on mailing list.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 ++++-
 configure.ac                                  |  26 ++-
 meson.build                                   |  22 +-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  55 +++++
 src/interfaces/libpq-oauth/README             |  30 +++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  42 ++++
 .../oauth-curl.c}                             |  60 +++---
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 202 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  35 +++
 src/interfaces/libpq/Makefile                 |  10 +-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 102 ++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   4 +-
 src/interfaces/libpq/meson.build              |   4 -
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 21 files changed, 635 insertions(+), 82 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (98%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 8f4a5ab28ec..df1da549c4c 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -708,6 +709,8 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9042,19 +9045,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12517,9 +12528,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12567,17 +12575,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12681,6 +12698,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14329,6 +14350,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index fc5f7475d07..218aeea1b3b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1033,19 +1033,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1340,9 +1348,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1640,6 +1645,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/meson.build b/meson.build
index 27717ad8976..0f0df9b1af4 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -2587,6 +2588,7 @@ header_checks = [
   'xlocale.h',
 ]
 
+header_macros = {}
 foreach header : header_checks
   varname = 'HAVE_' + header.underscorify().to_upper()
 
@@ -2595,6 +2597,15 @@ foreach header : header_checks
     include_directories: postgres_inc, args: test_c_args)
   cdata.set(varname, found ? 1 : false,
             description: 'Define to 1 if you have the <@0@> header file.'.format(header))
+
+  # Mixing 1/false in cdata means we can't perform equality checks using
+  # cdata.get(), though, so store our defined header macros for later lookup.
+  #
+  #     https://github.com/mesonbuild/meson/issues/11581
+  #
+  if found
+    header_macros += {varname: true}
+  endif
 endforeach
 
 
@@ -3251,17 +3262,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depends on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 737b2dd1869..eb9b5de75b4 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -343,6 +343,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..8ed5a6a39c3
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,55 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = libpq-oauth-$(MAJORVERSION)
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	oauth-curl.o \
+	oauth-utils.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# Make dependencies on pg_config_paths.h visible in all builds.
+oauth-curl.o: oauth-curl.c $(top_builddir)/src/port/pg_config_paths.h
+
+$(top_builddir)/src/port/pg_config_paths.h:
+	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
+
+all: all-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+install: all installdirs install-lib
+
+installdirs: installdirs-lib
+
+uninstall: uninstall-lib
+
+clean distclean: clean-lib
+	rm -f $(OBJS)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..ef746617c71
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,30 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, the server asks for it, and a
+libpq client has not installed its own custom OAuth flow, libpq will attempt to
+delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq before the flow is run. It also
+relies on libpq to expose conn->errorMessage, via an errmsg_impl.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..bf181fd5b96
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,42 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+oauth_flow_supported = (
+  libcurl.found()
+  and (header_macros.has_key('HAVE_SYS_EVENT_H')
+       or header_macros.has_key('HAVE_SYS_EPOLL_H'))
+)
+
+if libcurlopt.disabled()
+  subdir_done()
+elif not oauth_flow_supported
+  if libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+  'oauth-utils.c',
+)
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 98%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index cd9c0323bb6..d52125415bc 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,23 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#include "oauth-utils.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -1110,7 +1113,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1137,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1160,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1175,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1231,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1312,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1333,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1362,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1417,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1430,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1450,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1462,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2487,8 +2482,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..2bdbf904743
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,202 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+static libpq_gettext_func libpq_gettext_impl;
+static conn_errorMessage_func conn_errorMessage;
+
+pgthreadlock_t pg_g_threadlock;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build doesn't enable NLS. That's a concerning
+		 * mismatch, but in this particular case we can handle it. Try to warn
+		 * a developer with an assertion, though.
+		 */
+		Assert(false);
+
+		/*
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..279fc113248
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,35 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..b5346181b9a 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -64,10 +64,6 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
-
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -86,7 +82,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -115,8 +111,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +118,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..5bea1e059a2 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifndef WIN32
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,93 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+#ifdef USE_LIBCURL
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+static bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	state->builtin_flow = dlopen("libpq-oauth-" PG_MAJORVERSION DLSUFFIX,
+								 RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition; it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Inject necessary function pointers into the module.
+	 */
+	init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+		 libpq_gettext,
+#else
+		 NULL,
+#endif
+		 conn_errorMessage);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else							/* !USE_LIBCURL */
+
+static bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#endif							/* USE_LIBCURL */
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +884,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
 		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..699ba42acc2 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,10 +33,10 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
 
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..47d38e9378f 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 46d8da070e8..f2ba5b38124 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -201,6 +201,8 @@ pgxs_empty = [
   'ICU_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
-- 
2.34.1

#333Jelte Fennema-Nio
postgres@jeltef.nl
In reply to: Jacob Champion (#332)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Apr 10, 2025, 07:08 Jacob Champion <jacob.champion@enterprisedb.com>
wrote:

> Christoph noted that this was also confusing from the packaging side,
> earlier, and Daniel proposed -Doauth-client/--with-oauth-client as the
> feature switch name instead.

+1

> Next up: staticlibs.

I think your suggestion of not using any .so files would be best there
(from a user perspective). I'd be quite surprised if a static build still
resulted in me having to manage shared library files anyway.

#334Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jelte Fennema-Nio (#333)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 9, 2025 at 4:42 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

> I think your suggestion of not using any .so files would be best there (from a user perspective). I'd be quite surprised if a static build still resulted in me having to manage shared library files anyway.

Done this way in v5. I had planned to separate the implementations by
a #define, but I ran into issues with Makefile.shlib, so I split the
shared and dynamic versions into separate files. I just now realized
that we do something about this exact problem in src/common, so I'll
see if I can copy its technique for the next go round.

In the next version, I'll try to add --with-oauth-client while keeping
--with-libcurl as an alias, to let the buildfarm migrate off of it
before it's removed.
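Keeping the old switch as an alias might look roughly like the following in configure.ac. This is a hypothetical sketch only — the option names and the PGAC_ARG_BOOL usage mirror the existing configure style, but none of it is from the actual patch:

```m4
# Hypothetical: --with-oauth-client becomes the feature switch, while
# --with-libcurl is retained as a deprecated alias for buildfarm migration.
PGAC_ARG_BOOL(with, oauth-client, no,
              [build client OAuth support (requires libcurl)])
PGAC_ARG_BOOL(with, libcurl, no,
              [obsolete spelling of --with-oauth-client])

if test "$with_libcurl" = yes; then
  AC_MSG_WARN([--with-libcurl is deprecated; use --with-oauth-client])
  with_oauth_client=yes
fi
```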

Thanks!
--Jacob

Attachments:

v5-0001-WIP-split-Device-Authorization-flow-into-dlopen-d.patchapplication/octet-stream; name=v5-0001-WIP-split-Device-Authorization-flow-into-dlopen-d.patchDownload
From 1e500c077aef3dfd17c5423d9f63639a75c2fd80 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v5] WIP: split Device Authorization flow into dlopen'd module

See notes on mailing list.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 ++++-
 configure.ac                                  |  26 ++-
 meson.build                                   |  22 +-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  65 ++++++
 src/interfaces/libpq-oauth/README             |  30 +++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  54 +++++
 .../oauth-curl.c}                             |  60 +++---
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 202 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  35 +++
 src/interfaces/libpq/Makefile                 |  20 +-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth-dynamic.c  | 109 ++++++++++
 src/interfaces/libpq/fe-auth-oauth-static.c   |  40 ++++
 src/interfaces/libpq/fe-auth-oauth.c          |  26 ++-
 src/interfaces/libpq/fe-auth-oauth.h          |   5 +-
 src/interfaces/libpq/meson.build              |   8 +-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 23 files changed, 747 insertions(+), 80 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (98%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-dynamic.c
 create mode 100644 src/interfaces/libpq/fe-auth-oauth-static.c

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 8f4a5ab28ec..df1da549c4c 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -708,6 +709,8 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9042,19 +9045,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12517,9 +12528,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12567,17 +12575,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12681,6 +12698,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14329,6 +14350,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index fc5f7475d07..218aeea1b3b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1033,19 +1033,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1340,9 +1348,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1640,6 +1645,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/meson.build b/meson.build
index 27717ad8976..0f0df9b1af4 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -2587,6 +2588,7 @@ header_checks = [
   'xlocale.h',
 ]
 
+header_macros = {}
 foreach header : header_checks
   varname = 'HAVE_' + header.underscorify().to_upper()
 
@@ -2595,6 +2597,15 @@ foreach header : header_checks
     include_directories: postgres_inc, args: test_c_args)
   cdata.set(varname, found ? 1 : false,
             description: 'Define to 1 if you have the <@0@> header file.'.format(header))
+
+  # Mixing 1/false in cdata means we can't perform equality checks using
+  # cdata.get(), though, so store our defined header macros for later lookup.
+  #
+  #     https://github.com/mesonbuild/meson/issues/11581
+  #
+  if found
+    header_macros += {varname: true}
+  endif
 endforeach
 
 
@@ -3251,17 +3262,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 737b2dd1869..eb9b5de75b4 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -343,6 +343,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..5fd251a1d27
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,65 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := lib$(NAME).a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	oauth-curl.o
+
+# The shared library needs additional glue symbols.
+$(shlib): OBJS += oauth-utils.o
+$(shlib): oauth-utils.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) oauth-utils.o
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..ef746617c71
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,30 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, the server asks for it, and a
+libpq client has not installed its own custom OAuth flow, libpq will attempt to
+delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+without warning, even during minor releases (however unlikely). The compiled
+version of libpq-oauth should always match the compiled version of libpq.
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq before the flow is run. It also
+relies on libpq to expose conn->errorMessage, via an errmsg_impl.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..7634b7f6fb1
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,54 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+oauth_flow_supported = (
+  libcurl.found()
+  and (header_macros.has_key('HAVE_SYS_EVENT_H')
+       or header_macros.has_key('HAVE_SYS_EPOLL_H'))
+)
+
+if libcurlopt.disabled()
+  subdir_done()
+elif not oauth_flow_supported
+  if libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_st = static_library(libpq_oauth_name,
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 98%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index cd9c0323bb6..d52125415bc 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,23 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#include "oauth-utils.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -1110,7 +1113,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1137,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1160,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1175,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1231,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1312,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1333,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1362,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1417,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1430,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1450,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1462,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2487,8 +2482,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..2bdbf904743
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,202 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+static libpq_gettext_func libpq_gettext_impl;
+static conn_errorMessage_func conn_errorMessage;
+
+pgthreadlock_t pg_g_threadlock;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build doesn't enable NLS. That's a concerning
+		 * mismatch, but in this particular case we can handle it. Try to warn
+		 * a developer with an assertion, though.
+		 */
+		Assert(false);
+
+		/*
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..279fc113248
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,35 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..852b1948ecb 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -64,10 +64,6 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
-
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
 endif
@@ -86,7 +82,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -107,6 +103,15 @@ all: all-lib libpq-refs-stamp
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add the correct implementations of the OAuth flow, if requested, for both
+# shared and static builds.
+ifeq ($(with_libcurl),yes)
+$(shlib): OBJS += fe-auth-oauth-dynamic.o
+$(shlib): fe-auth-oauth-dynamic.o
+$(stlib): OBJS += fe-auth-oauth-static.o
+$(stlib): fe-auth-oauth-static.o
+endif
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +120,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +127,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -172,5 +175,6 @@ clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
 	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f fe-auth-oauth-dynamic.o fe-auth-oauth-static.o
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth-dynamic.c b/src/interfaces/libpq/fe-auth-oauth-dynamic.c
new file mode 100644
index 00000000000..a84551a6307
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-dynamic.c
@@ -0,0 +1,109 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-dynamic.c
+ *
+ *     Implements the builtin flow by loading the libpq-oauth plugin.
+ *     See also fe-auth-oauth-static.c, for static builds.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-dynamic.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#ifndef USE_LIBCURL
+#error this should only be compiled when OAuth support is enabled
+#endif
+
+#include <dlfcn.h>
+
+#include "fe-auth-oauth.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	state->builtin_flow = dlopen("libpq-oauth-" PG_MAJORVERSION DLSUFFIX,
+								 RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition, it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Inject necessary function pointers into the module.
+	 */
+	init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+		 libpq_gettext,
+#else
+		 NULL,
+#endif
+		 conn_errorMessage);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth-static.c b/src/interfaces/libpq/fe-auth-oauth-static.c
new file mode 100644
index 00000000000..25119bbb50c
--- /dev/null
+++ b/src/interfaces/libpq/fe-auth-oauth-static.c
@@ -0,0 +1,40 @@
+/*-------------------------------------------------------------------------
+ *
+ * fe-auth-oauth-static.c
+ *
+ *     Implements the builtin flow using the libpq-oauth.a staticlib.
+ *     See also fe-auth-oauth-dynamic.c, which loads a plugin at runtime.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq/fe-auth-oauth-static.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#ifndef USE_LIBCURL
+#error this should only be compiled when OAuth support is enabled
+#endif
+
+#include "fe-auth-oauth.h"
+
+/* see libpq-oauth/oauth-curl.h */
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+/*
+ * Loads the builtin flow from libpq-oauth.a.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..0bad132b580 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -22,6 +22,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +722,21 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+#ifndef USE_LIBCURL
+
+/*
+ * This configuration doesn't support the builtin flow.
+ *
+ * Alternative implementations are in fe-auth-oauth-dynamic/-static.c.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#endif
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +808,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
 		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..687e664475f 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,12 +33,13 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..bf880e053f8 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -18,6 +18,7 @@ libpq_sources = files(
   'pqexpbuffer.c',
 )
 libpq_so_sources = [] # for shared lib, in addition to the above
+libpq_st_sources = [] # for static lib, in addition to the above
 
 if host_system == 'windows'
   libpq_sources += files('pthread-win32.c', 'win32.c')
@@ -38,8 +39,11 @@ if gssapi.found()
   )
 endif
 
+# Add the correct implementations of the OAuth flow, if requested, for both
+# shared and static builds.
 if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
+  libpq_so_sources += files('fe-auth-oauth-dynamic.c')
+  libpq_st_sources += files('fe-auth-oauth-static.c')
 endif
 
 export_file = custom_target('libpq.exports',
@@ -59,7 +63,7 @@ libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 # more complexity than its worth (reusing object files requires also linking
 # to the library on windows or breaks precompiled headers).
 libpq_st = static_library('libpq',
-  libpq_sources,
+  libpq_sources + libpq_st_sources,
   include_directories: [libpq_inc],
   c_args: libpq_c_args,
   c_pch: pch_postgres_fe_h,
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 46d8da070e8..f2ba5b38124 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -201,6 +201,8 @@ pgxs_empty = [
   'ICU_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
-- 
2.34.1

#335Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#334)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

On Wed, Apr 9, 2025 at 4:42 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

I think your suggestion of not using any .so files would be best there (from a user perspective). I'd be quite surprised if a static build still resulted in me having to manage shared library files anyway.

Done this way in v5. I had planned to separate the implementations by
a #define, but I ran into issues with Makefile.shlib, so I split the
shared and dynamic versions into separate files. I just now realized
that we do something about this exact problem in src/common, so I'll
see if I can copy its technique for the next go round.

I tried to apply this patch to nixpkgs' libpq build [1]. First, I pinned
a recent commit from master (one where the v5 patch will apply cleanly
later) and enabled --with-libcurl [2].

At this stage, without the patch applied, I observe the following:

1. The default, dynamically linked, build succeeds and libpq.so is
linked to libcurl.so as expected!

2. The statically linked build fails during configure:

  checking for curl_multi_init in -lcurl... no
  configure: error: library 'curl' does not provide curl_multi_init

config.log tells me that it can't link to libcurl, because of undefined
references, for example:

  undefined reference to `psl_is_cookie_domain_acceptable'
  undefined reference to `nghttp2_session_check_request_allowed'

I assume the many libs listed in Libs.private in libcurl.pc are not
added automatically for this check?
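For what it's worth, pkg-config only emits those extra libraries when explicitly asked for static flags; a quick way to see the difference (assuming libcurl.pc is on the PKG_CONFIG_PATH — the exact private library list depends on how libcurl was built):

```shell
# Shared link line: typically just -lcurl
pkg-config --libs libcurl

# Static link line: additionally expands Libs.private/Requires.private,
# e.g. -lnghttp2 -lpsl -lssl -lcrypto ... (list varies per build)
pkg-config --libs --static libcurl
```

So unless the configure check passes --static (or the equivalent PKG_CHECK_MODULES_STATIC), the private dependencies won't be on the link line.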

Next, I applied the v5 patch and:

3. Running the same build as in step 1 above (dynamically linked), I can
see that libpq.so does have some reference to dlopen / libpq-oauth in it
- good. But libpq-oauth.so itself is not built. The commands I am using
to build just the libpq package are essentially like this:

  make submake-libpgport
  make submake-libpq
  make -C src/bin/pg_config install
  make -C src/common install
  make -C src/include install
  make -C src/interfaces/libpq install
  make -C src/port install

I tried adding "make submake-libpq-oauth", but that doesn't exist.

When I do "make -C src/interfaces/libpq-oauth", I get this error:

  make: *** No rule to make target 'oauth-curl.o', needed by
'libpq-oauth-18.so'.  Stop.

Not sure how to proceed to build libpq-oauth.so.

4. The statically linked build fails with the same configure error as above.

I can only test autoconf right now, not meson - don't have a working
setup for that, yet.

Best,

Wolfgang

[1]: https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/sql/postgresql/libpq.nix

#336Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#321)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 08.04.25 19:44, Jacob Champion wrote:

Would anybody following along be opposed to a situation where
- dynamiclib builds go through the dlopen() shim
- staticlib builds always rely on statically linked symbols

If this can be implemented in a straightforward way, that would be the
best way, I think.

#337Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#335)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Apr 11, 2025 at 9:21 AM Wolfgang Walther
<walther@technowledgy.de> wrote:

I tried to apply this patch to nixpkgs' libpq build [1]. First, I pinned
a recent commit from master (one where the v5 patch will apply cleanly
later) and enabled --with-libcurl [2].

(The [2] link is missing, I think.)

2. The statically linked build fails during configure:

I'm confused by this -- the build produces staticlibs alongside the
dynamically linked ones, so that's what I've been testing against.
What different options do you pass to configure for a "statically
linked build"?

undefined reference to `psl_is_cookie_domain_acceptable'
undefined reference to `nghttp2_session_check_request_allowed'

I assume the many libs listed in Libs.private in libcurl.pc are not
added automatically for this check?

Not unless there is some magic in PKG_CHECK_MODULES I've never heard
of (which is entirely possible!). Furthermore I imagine that the
transitive dependencies of all its dependencies are not added either.

Does your build method currently work for dependency forests like
libgssapi_krb5 and libldap? (I want to make sure I'm not accidentally
doing less work than we currently support for those other deps, but
I'm also not planning to add more feature work as part of this
particular open item.)

I tried adding "make submake-libpq-oauth", but that doesn't exist.

There is no submake for this because no other targets depend on it.
Currently I don't have any plans to add one (but -C should work).

When I do "make -C src/interfaces/libpq-oauth", I get this error:

make: *** No rule to make target 'oauth-curl.o', needed by
'libpq-oauth-18.so'. Stop.

I cannot reproduce this. The CI seems happy, too. Is this patch the
only modification you've made to our build system, or are there more
changes?

I'm about to rewrite this part somewhat, so a deep dive may not be very helpful.

Thanks,
--Jacob

#338Andres Freund
andres@anarazel.de
In reply to: Wolfgang Walther (#335)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-04-11 18:21:14 +0200, Wolfgang Walther wrote:

Jacob Champion:

On Wed, Apr 9, 2025 at 4:42 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

I think your suggestion of not using any .so files would best there (from w user perspective). I'd be quite surprised if a static build still resulted in me having to manage shared library files anyway.

Done this way in v5. I had planned to separate the implementations by
a #define, but I ran into issues with Makefile.shlib, so I split the
shared and dynamic versions into separate files. I just now realized
that we do something about this exact problem in src/common, so I'll
see if I can copy its technique for the next go round.

I tried to apply this patch to nixpkgs' libpq build [1]. First, I pinned a
recent commit from master (one where the v5 patch will apply cleanly later)
and enabled --with-libcurl [2].

At this stage, without the patch applied, I observe the following:

1. The default, dynamically linked, build succeeds and libpq.so is linked to
libcurl.so as expected!

2. The statically linked build fails during configure:

What specifically does "statically linked build" mean? There is no such thing
in postgres, so this must be either patching upstream or injecting build flags
somehow? The [1] link wasn't immediately elucidating.

  checking for curl_multi_init in -lcurl... no
  configure: error: library 'curl' does not provide curl_multi_init

config.log tells me that it can't link to libcurl, because of undefined
references, for example:

  undefined reference to `psl_is_cookie_domain_acceptable'
  undefined reference to `nghttp2_session_check_request_allowed'

I assume the many libs listed in Libs.private in libcurl.pc are not added
automatically for this check?

The configure test shouldn't link statically, so this doesn't make sense to
me?
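(For context, the conftest that AC_CHECK_LIB generates for this check is roughly the following — an approximation based on standard autoconf output, linked with -lcurl. It only references one symbol, so whether the private dependencies matter depends entirely on whether the linker resolves libcurl from a static archive:)

```c
/*
 * Approximation of autoconf's conftest.c for
 * "checking for curl_multi_init in -lcurl"; built as: cc conftest.c -lcurl
 */
char		curl_multi_init(void);

int
main(void)
{
	return curl_multi_init();
}
```

If only libcurl.a is available, the linker must then satisfy libcurl's own undefined references (psl_*, nghttp2_*, ...), which is where the errors above come from.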

Greetings,

Andres Freund

#339Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#337)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

(The [2] link is missing, I think.)

Ah, sry. This is the link:

https://github.com/wolfgangwalther/nixpkgs/commits/postgresql-libpq-curl/

It's the last two commits on that branch.

I'm confused by this -- the build produces staticlibs alongside the
dynamically linked ones, so that's what I've been testing against.
What different options do you pass to configure for a "statically
linked build"?

It's not so much the options, but more that for this build there are no
shared libs available at buildtime at all. You can consider it a "fully
static system". So in your case, you'd always do the configure test with
shared libs, but I can't.

The build system passes --enable-static and --disable-shared to
configure, but both of those are ignored by configure, as indicated by a
WARNING immediately.

Not unless there is some magic in PKG_CHECK_MODULES I've never heard
of (which is entirely possible!). Furthermore I imagine that the
transitive dependencies of all its dependencies are not added either.

IIUC, the transitive dependencies would be part of libcurl's
Libs.private / Requires.private (assuming that file is correctly
created). So that would be taken care of, I guess.

Does your build method currently work for dependency forests like
libgssapi_krb5 and libldap? (I want to make sure I'm not accidentally
doing less work than we currently support for those other deps, but
I'm also not planning to add more feature work as part of this
particular open item.)

We currently build libpq with neither libldap, nor libkrb5, at least for
the static case. But I just tried on the bigger postgresql package and
force-enabled libldap there for the static build - it fails in exactly
the same way.

So yes, not related to your patch. I do understand that PostgreSQL's
autoconf build system is not designed for "static only", I am certainly
not expecting you to fix that.

I think meson will do better here, but I was not able to make that work,
yet.

When I do "make -C src/interfaces/libpq-oauth", I get this error:

make: *** No rule to make target 'oauth-curl.o', needed by
'libpq-oauth-18.so'. Stop.

I cannot reproduce this. The CI seems happy, too. Is this patch the
only modification you've made to our build system, or are there more
changes?

We apply another patch to change the default socket directory to /run,
but that's certainly unrelated. All the other custom stuff only kicks in
afterwards, in the installPhase, so unrelated as well.

I just tried the same thing on the bigger postgresql package, where the
full build is run and not only libpq / libpq-oauth. It fails with the
same error. No rule for oauth-curl.o.

I'm about to rewrite this part somewhat, so a deep dive may not be very helpful.

OK. I will try to get meson running, at least enough to try this patch
again. Maybe that gives better results.

Thanks,

Wolfgang

#340Wolfgang Walther
walther@technowledgy.de
In reply to: Wolfgang Walther (#339)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Wolfgang Walther:

So yes, not related to your patch. I do understand that PostgreSQL's
autoconf build system is not designed for "static only", I am certainly
not expecting you to fix that.

I think meson will do better here, but I was not able to make that work,
yet.

I did a basic meson build. Full postgresql package, not libpq-only.

The static-only build just works. On master that is. Same as the regular
build.

So yes, meson will handle the static stuff much better.

I just tried the same thing on the bigger postgresql package, where the
full build is run and not only libpq / libpq-oauth. It fails with the
same error. No rule for oauth-curl.o.

Applying the v5 patch to the above meson build gives me a different
error, this time for both the static-only and the regular builds:

src/interfaces/libpq-oauth/meson.build:18:22: ERROR: File
oauth-curl.c does not exist.

This... clears it up, because that file is indeed missing for me on disk.
I assume that's because this file is tracked as a rename in the v5
patch. I can apply this with git, but not directly in the nix build
system. TIL, I need to use "fetchpatch2" instead of "fetchpatch" for
that. Sure thing.

So, with the patch applied correctly, I get the following:

1. Meson regular build:

libpq-oauth-18.so
libpq.so
libpq.so.5
libpq.so.5.18

The libpq.so file has references to dlopen and libpq-oauth-18.so, cool.

2. Meson static-only build:

libpq.a
libpq-oauth-18.a

The libpq.a file has no references to dlopen, but plenty of references
to curl stuff.

I'm not sure what the libpq-oauth-18.a file is for.

3. Back to the libpq-only build with autoconf, where I started. I only
need to add the following line:

make -C src/interfaces/libpq-oauth install

and get this:

libpq-oauth-18.so
libpq.so
libpq.so.5
libpq.so.5.18

Sweet!

4. Of course the static-only build does not work with autoconf, but
that's expected.

So, sorry for the noise before. Now that I know how to apply patches
with renames, I will try your next patch as well.

Best,

Wolfgang

#341Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#340)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 14, 2025 at 11:27 AM Wolfgang Walther
<walther@technowledgy.de> wrote:

src/interfaces/libpq-oauth/meson.build:18:22: ERROR: File
oauth-curl.c does not exist.

This.. clears it up, because that file is indeed missing for me on disk.

Aha! Okay, glad I don't need to track that down.

libpq.a
libpq-oauth-18.a

The libpq.a file has no references to dlopen, but plenty of references
to curl stuff.

Which references? libpq-oauth should be the only thing using Curl symbols:

$ nm src/interfaces/libpq/libpq.a | grep --count curl
0
$ nm src/interfaces/libpq-oauth/libpq-oauth-18.a | grep --count curl
116

I'm not sure what the libpq-oauth-18.a file is for.

That implements the flow. You'll need to link that into your
application or it will complain about missing flow symbols. (I don't
think there's an easy way to combine the two libraries in our Autoconf
setup; the only ways I can think of right now would introduce a
circular dependency between libpq and libpq-oauth...)
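As a concrete illustration of that linking requirement, a static consumer would need a link line along these lines (the install path is hypothetical, and the exact archive order may need adjustment depending on which library references the other's symbols):

```
cc -o myapp myapp.c \
    -L/usr/local/pgsql/lib -lpq -lpq-oauth-18 \
    $(pkg-config --libs --static libcurl)
```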

Thanks!
--Jacob

#342Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#341)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

libpq.a
libpq-oauth-18.a

The libpq.a file has no references to dlopen, but plenty of references
to curl stuff.

Which references? libpq-oauth should be the only thing using Curl symbols:

$ nm src/interfaces/libpq/libpq.a | grep --count curl
0
$ nm src/interfaces/libpq-oauth/libpq-oauth-18.a | grep --count curl
116

Not sure what I was looking at earlier, probably too many different
builds at the same time. Now I can't find the curl symbols in libpq.a
either...

I'm not sure what the libpq-oauth-18.a file is for.

That implements the flow. You'll need to link that into your
application or it will complain about missing flow symbols. (I don't
think there's an easy way to combine the two libraries in our Autoconf
setup; the only ways I can think of right now would introduce a
circular dependency between libpq and libpq-oauth...)

... which immediately explains what the libpq-oauth-18.a file is for, yes.

But that means we'd need a -lpq-oauth-18 or something like that in
Libs.private in libpq.pc, right?

This seems to be missing, I checked both the autoconf and meson builds.
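For the record, the fragment I'd expect in libpq.pc for this case looks roughly like the following (version number and layout are illustrative, not taken from the patch):

```
# Hypothetical libpq.pc fragment for a static, OAuth-enabled build
Name: libpq
Description: PostgreSQL libpq library
Libs: -L${libdir} -lpq
# Static consumers also need the flow module and Curl itself;
# pkg-config --libs --static would expand these recursively.
Libs.private: -lpq-oauth-18
Requires.private: libcurl
```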

Best,

Wolfgang

#343Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#342)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 14, 2025 at 12:46 PM Wolfgang Walther
<walther@technowledgy.de> wrote:

But that means we'd need a -lpq-oauth-18 or something like that in
Libs.private in libpq.pc, right?

I believe so. I'm in the middle of the .pc stuff right now; v6 should
have the fixes as long as I don't get stuck.

Thanks,
--Jacob

#344Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#343)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 14, 2025 at 1:17 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I believe so. I'm in the middle of the .pc stuff right now; v6 should
have the fixes as long as I don't get stuck.

Done in v6-0001. I think this is now architecturally complete, so if
reviewers are happy I can work on docs and the commit message. As a
summary:

- We provide a libpq-oauth-18.so module for shared builds, and a
corresponding .a for static builds, when OAuth is enabled.
- Platforms which cannot support the builtin flow now error out if you
request OAuth at configure time.
- When OAuth is enabled and there's no custom client flow, libpq.so
loads the module via dlopen(), which respects RPATH/LD_LIBRARY_PATH et
al. If it's not installed, OAuth doesn't continue.
- Static builds must link libpq-oauth-18.a explicitly. libpq.pc now
puts -lpq-oauth-18 in Libs.private, and libcurl in Requires.private.
- Internally, we compile separate versions of fe-auth-oauth.c to
handle the different cases (disabled, dynamic, static). This is
borrowed from src/common.
- The only new export from libpq is appendPQExpBufferVA. Other
internal symbols are shared with libpq-oauth via dependency injection.

v6-0002 is a WIP rename of the --with-libcurl option to
--with-oauth-client. I'm not sure I have all of the Meson corner cases
with auto_features figured out, but maybe it doesn't matter since it's
temporary (it targets a total of seven buildfarm animals, and once
they've switched we can remove the old name). I have added a separate
open item for this.

Thanks,
--Jacob

Attachments:

v6-0002-oauth-rename-with-libcurl-to-with-oauth-client.patch (application/octet-stream)
From 6ba708e512f18b6d0cada3f6657a4d7fd8b1058f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 14 Apr 2025 16:34:09 -0700
Subject: [PATCH v6 2/2] oauth: rename --with-libcurl to --with-oauth-client

WIP, see mailing list.

Discussion: https://postgr.es/m/CAOYmi%2Bn9DHS_xUatuuspdC8tjtaMzY8P11Y9y5Fz%2B2pjikkL9g%40mail.gmail.com
---
 .cirrus.tasks.yml                             |  2 +-
 config/programs.m4                            |  2 +-
 configure                                     | 56 +++++++++++++++----
 configure.ac                                  | 22 +++++---
 meson.build                                   | 16 ++++--
 meson_options.txt                             |  6 +-
 src/Makefile.global.in                        |  2 +-
 src/include/pg_config.h.in                    |  7 ++-
 src/interfaces/Makefile                       |  4 +-
 src/interfaces/libpq/Makefile                 |  2 +-
 src/interfaces/libpq/fe-auth-oauth.c          |  4 +-
 src/makefiles/meson.build                     |  3 +-
 src/test/modules/oauth_validator/Makefile     |  2 +-
 src/test/modules/oauth_validator/meson.build  |  2 +-
 .../modules/oauth_validator/t/001_server.pl   |  2 +-
 .../modules/oauth_validator/t/002_client.pl   |  2 +-
 16 files changed, 94 insertions(+), 40 deletions(-)

diff --git a/.cirrus.tasks.yml b/.cirrus.tasks.yml
index 86a1fa9bbdb..30bdeb96738 100644
--- a/.cirrus.tasks.yml
+++ b/.cirrus.tasks.yml
@@ -350,11 +350,11 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
   --with-gssapi
   --with-icu
   --with-ldap
-  --with-libcurl
   --with-libxml
   --with-libxslt
   --with-llvm
   --with-lz4
+  --with-oauth-client
   --with-pam
   --with-perl
   --with-python
diff --git a/config/programs.m4 b/config/programs.m4
index 0ad1e58b48d..328a4701cee 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -285,7 +285,7 @@ AC_DEFUN([PGAC_CHECK_STRIP],
 AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
-				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-oauth-client])])
   AC_CHECK_LIB(curl, curl_multi_init, [
 				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
 				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
diff --git a/configure b/configure
index df1da549c4c..a99b97006f2 100755
--- a/configure
+++ b/configure
@@ -713,7 +713,7 @@ LIBCURL_LDFLAGS
 LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
-with_libcurl
+with_oauth_client
 with_uuid
 LIBURING_LIBS
 LIBURING_CFLAGS
@@ -874,6 +874,7 @@ with_libedit_preferred
 with_liburing
 with_uuid
 with_ossp_uuid
+with_oauth_client
 with_libcurl
 with_libxml
 with_libxslt
@@ -1590,7 +1591,8 @@ Optional Packages:
   --with-liburing         build with io_uring support, for asynchronous I/O
   --with-uuid=LIB         build contrib/uuid-ossp using LIB (bsd,e2fs,ossp)
   --with-ossp-uuid        obsolete spelling of --with-uuid=ossp
-  --with-libcurl          build with libcurl support
+  --with-oauth-client     build OAuth Device Authorization support
+  --with-libcurl          Deprecated. Use --with-oauth-client instead
   --with-libxml           build with XML support
   --with-libxslt          use XSLT support when building contrib/xml2
   --with-system-tzdata=DIR
@@ -8918,8 +8920,36 @@ fi
 #
 # libcurl
 #
-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build with libcurl support" >&5
-$as_echo_n "checking whether to build with libcurl support... " >&6; }
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build OAuth Device Authorization support" >&5
+$as_echo_n "checking whether to build OAuth Device Authorization support... " >&6; }
+
+
+
+# Check whether --with-oauth-client was given.
+if test "${with_oauth_client+set}" = set; then :
+  withval=$with_oauth_client;
+  case $withval in
+    yes)
+
+$as_echo "#define USE_BUILTIN_OAUTH 1" >>confdefs.h
+
+      ;;
+    no)
+      :
+      ;;
+    *)
+      as_fn_error $? "no argument expected for --with-oauth-client option" "$LINENO" 5
+      ;;
+  esac
+
+else
+  with_oauth_client=no
+
+fi
+
+
+
+# --with-libcurl is a deprecated equivalent. TODO: remove
 
 
 
@@ -8929,7 +8959,7 @@ if test "${with_libcurl+set}" = set; then :
   case $withval in
     yes)
 
-$as_echo "#define USE_LIBCURL 1" >>confdefs.h
+$as_echo "#define USE_BUILTIN_OAUTH 1" >>confdefs.h
 
       ;;
     no)
@@ -8946,11 +8976,15 @@ else
 fi
 
 
-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_libcurl" >&5
-$as_echo "$with_libcurl" >&6; }
+if test "$with_libcurl" = yes ; then
+	with_oauth_client=yes
+fi
 
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_oauth_client" >&5
+$as_echo "$with_oauth_client" >&6; }
 
-if test "$with_libcurl" = yes ; then
+
+if test "$with_oauth_client" = yes ; then
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
 
@@ -12528,13 +12562,13 @@ fi
 
 fi
 
-if test "$with_libcurl" = yes ; then
+if test "$with_oauth_client" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
 if test "x$ac_cv_header_curl_curl_h" = xyes; then :
 
 else
-  as_fn_error $? "header file <curl/curl.h> is required for --with-libcurl" "$LINENO" 5
+  as_fn_error $? "header file <curl/curl.h> is required for --with-oauth-client" "$LINENO" 5
 fi
 
 
@@ -14350,7 +14384,7 @@ done
 
 fi
 
-if test "$with_libcurl" = yes ; then
+if test "$with_oauth_client" = yes ; then
   # Error out early if this platform can't support libpq-oauth.
   if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
     as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
diff --git a/configure.ac b/configure.ac
index 218aeea1b3b..7ffe8901250 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1022,13 +1022,21 @@ AC_SUBST(with_uuid)
 #
 # libcurl
 #
-AC_MSG_CHECKING([whether to build with libcurl support])
-PGAC_ARG_BOOL(with, libcurl, no, [build with libcurl support],
-              [AC_DEFINE([USE_LIBCURL], 1, [Define to 1 to build with libcurl support. (--with-libcurl)])])
-AC_MSG_RESULT([$with_libcurl])
-AC_SUBST(with_libcurl)
+AC_MSG_CHECKING([whether to build OAuth Device Authorization support])
+PGAC_ARG_BOOL(with, oauth-client, no, [build OAuth Device Authorization support],
+              [AC_DEFINE([USE_BUILTIN_OAUTH], 1, [Define to 1 to build with OAuth Device Authorization support. (--with-oauth-client)])])
 
+# --with-libcurl is a deprecated equivalent. TODO: remove
+PGAC_ARG_BOOL(with, libcurl, no, [Deprecated. Use --with-oauth-client instead],
+              [AC_DEFINE([USE_BUILTIN_OAUTH], 1, [Define to 1 to build with OAuth Device Authorization support. (--with-oauth-client)])])
 if test "$with_libcurl" = yes ; then
+	with_oauth_client=yes
+fi
+
+AC_MSG_RESULT([$with_oauth_client])
+AC_SUBST(with_oauth_client)
+
+if test "$with_oauth_client" = yes ; then
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
@@ -1348,7 +1356,7 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-if test "$with_libcurl" = yes ; then
+if test "$with_oauth_client" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
 
@@ -1645,7 +1653,7 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
-if test "$with_libcurl" = yes ; then
+if test "$with_oauth_client" = yes ; then
   # Error out early if this platform can't support libpq-oauth.
   if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
     AC_MSG_ERROR([client OAuth is not supported on this platform])
diff --git a/meson.build b/meson.build
index b436c362147..ab34d69dd1a 100644
--- a/meson.build
+++ b/meson.build
@@ -860,13 +860,19 @@ endif
 # Library: libcurl
 ###############################################################
 
-libcurlopt = get_option('libcurl')
+oauthopt = get_option('oauth-client')
 oauth_flow_supported = false
 
-if not libcurlopt.disabled()
+# -Dlibcurl is a deprecated equivalent. TODO: remove
+libcurlopt = get_option('libcurl')
+if oauthopt.auto() or libcurlopt.enabled()
+  oauthopt = libcurlopt
+endif
+
+if not oauthopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
-  libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
+  libcurl = dependency('libcurl', version: '>= 7.61.0', required: oauthopt)
   if libcurl.found()
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
@@ -950,8 +956,8 @@ if not libcurlopt.disabled()
   )
 
   if oauth_flow_supported
-    cdata.set('USE_LIBCURL', 1)
-  elif libcurlopt.enabled()
+    cdata.set('USE_BUILTIN_OAUTH', 1)
+  elif oauthopt.enabled()
     error('client OAuth is not supported on this platform')
   endif
 
diff --git a/meson_options.txt b/meson_options.txt
index dd7126da3a7..5d828b491a9 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -100,8 +100,9 @@ option('icu', type: 'feature', value: 'auto',
 option('ldap', type: 'feature', value: 'auto',
   description: 'LDAP support')
 
+# Deprecated. TODO: remove
 option('libcurl', type : 'feature', value: 'auto',
-  description: 'libcurl support')
+  description: 'Deprecated. Use -Doauth-client instead')
 
 option('libedit_preferred', type: 'boolean', value: false,
   description: 'Prefer BSD Libedit over GNU Readline')
@@ -121,6 +122,9 @@ option('llvm', type: 'feature', value: 'disabled',
 option('lz4', type: 'feature', value: 'auto',
   description: 'LZ4 support')
 
+option('oauth-client', type : 'feature', value: 'auto',
+  description: 'OAuth Device Authorization support')
+
 option('nls', type: 'feature', value: 'auto',
   description: 'Native language support')
 
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index eb9b5de75b4..0c0822c314b 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -195,11 +195,11 @@ with_systemd	= @with_systemd@
 with_gssapi	= @with_gssapi@
 with_krb_srvnam	= @with_krb_srvnam@
 with_ldap	= @with_ldap@
-with_libcurl	= @with_libcurl@
 with_liburing	= @with_liburing@
 with_libxml	= @with_libxml@
 with_libxslt	= @with_libxslt@
 with_llvm	= @with_llvm@
+with_oauth_client	= @with_oauth_client@
 with_system_tzdata = @with_system_tzdata@
 with_uuid	= @with_uuid@
 with_zlib	= @with_zlib@
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index 9891b9b05c3..1e189581896 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -677,6 +677,10 @@
 /* Define to 1 to build with BSD Authentication support. (--with-bsd-auth) */
 #undef USE_BSD_AUTH
 
+/* Define to 1 to build with OAuth Device Authorization support.
+   (--with-oauth-client) */
+#undef USE_BUILTIN_OAUTH
+
 /* Define to build with ICU support. (--with-icu) */
 #undef USE_ICU
 
@@ -686,9 +690,6 @@
 /* Define to 1 to build with LDAP support. (--with-ldap) */
 #undef USE_LDAP
 
-/* Define to 1 to build with libcurl support. (--with-libcurl) */
-#undef USE_LIBCURL
-
 /* Define to build with io_uring support. (--with-liburing) */
 #undef USE_LIBURING
 
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index e6822caa206..ccb4a9b6e69 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,7 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
-ifeq ($(with_libcurl), yes)
+ifeq ($(with_oauth_client), yes)
 SUBDIRS += libpq-oauth
 else
 ALWAYS_SUBDIRS += libpq-oauth
@@ -26,7 +26,7 @@ $(recurse_always)
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
 
-ifeq ($(with_libcurl), yes)
+ifeq ($(with_oauth_client), yes)
 all-libpq-oauth-recurse: all-libpq-recurse
 install-libpq-oauth-recurse: install-libpq-recurse
 endif
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index d4c20066ce4..a835d94a142 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -102,7 +102,7 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
-ifeq ($(with_libcurl),yes)
+ifeq ($(with_oauth_client),yes)
 # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
 # libpq-oauth needs libcurl. Put both into *.private.
 PKG_CONFIG_REQUIRES_PRIVATE += libcurl
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index 5c285adccbd..af6db2eec28 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -738,7 +738,7 @@ cleanup_user_oauth_flow(PGconn *conn)
  *    executable.
  */
 
-#if !defined(USE_LIBCURL)
+#if !defined(USE_BUILTIN_OAUTH)
 
 /*
  * This configuration doesn't support the builtin flow.
@@ -859,7 +859,7 @@ use_builtin_flow(PGconn *conn, fe_oauth_state *state)
 	return true;
 }
 
-#endif							/* USE_LIBCURL */
+#endif							/* USE_BUILTIN_OAUTH */
 
 
 /*
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index f2ba5b38124..6160c172d75 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -75,6 +75,8 @@ pgxs_kv = {
   'with_krb_srvnam': get_option('krb_srvnam'),
   'krb_srvtab': krb_srvtab,
 
+  'with_oauth_client': oauth_flow_supported ? 'yes' : 'no',
+
   'STRIP': ' '.join(strip_cmd),
   'STRIP_STATIC_LIB': ' '.join(strip_static_cmd),
   'STRIP_SHARED_LIB': ' '.join(strip_shared_cmd),
@@ -233,7 +235,6 @@ pgxs_deps = {
   'gssapi': gssapi,
   'icu': icu,
   'ldap': ldap,
-  'libcurl': libcurl,
   'liburing': liburing,
   'libxml': libxml,
   'libxslt': libxslt,
diff --git a/src/test/modules/oauth_validator/Makefile b/src/test/modules/oauth_validator/Makefile
index 05b9f06ed73..57733dc533f 100644
--- a/src/test/modules/oauth_validator/Makefile
+++ b/src/test/modules/oauth_validator/Makefile
@@ -34,7 +34,7 @@ include $(top_builddir)/src/Makefile.global
 include $(top_srcdir)/contrib/contrib-global.mk
 
 export PYTHON
-export with_libcurl
+export with_oauth_client
 export with_python
 
 endif
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 36d1b26369f..84d169cb8e1 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -78,7 +78,7 @@ tests += {
     ],
     'env': {
       'PYTHON': python.path(),
-      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_oauth_client': oauth_flow_supported ? 'yes' : 'no',
       'with_python': 'yes',
     },
   },
diff --git a/src/test/modules/oauth_validator/t/001_server.pl b/src/test/modules/oauth_validator/t/001_server.pl
index d88994abc24..01b5e1c3c43 100644
--- a/src/test/modules/oauth_validator/t/001_server.pl
+++ b/src/test/modules/oauth_validator/t/001_server.pl
@@ -33,7 +33,7 @@ unless (check_pg_config("#define HAVE_SYS_EVENT_H 1")
 	  'OAuth server-side tests are not supported on this platform';
 }
 
-if ($ENV{with_libcurl} ne 'yes')
+if ($ENV{with_oauth_client} ne 'yes')
 {
 	plan skip_all => 'client-side OAuth not supported by this build';
 }
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 54769f12f57..1e329b328a6 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -104,7 +104,7 @@ $node->log_check("validator receives correct token",
 	$log_start,
 	log_like => [ qr/oauth_validator: token="my-token", role="$user"/, ]);
 
-if ($ENV{with_libcurl} ne 'yes')
+if ($ENV{with_oauth_client} ne 'yes')
 {
 	# libpq should help users out if no OAuth support is built in.
 	test(
-- 
2.34.1

v6-0001-WIP-split-Device-Authorization-flow-into-dlopen-d.patch (application/octet-stream)
From a202bd932ea390c97df10fcbb0cc3b60419453be Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v6 1/2] WIP: split Device Authorization flow into dlopen'd
 module

See notes on mailing list.

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 ++++-
 configure.ac                                  |  26 ++-
 meson.build                                   |  32 ++-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  65 ++++++
 src/interfaces/libpq-oauth/README             |  43 ++++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  43 ++++
 .../oauth-curl.c}                             |  60 +++---
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 202 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  35 +++
 src/interfaces/libpq/Makefile                 |  36 +++-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 151 ++++++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   5 +-
 src/interfaces/libpq/meson.build              |  28 ++-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 21 files changed, 763 insertions(+), 88 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (98%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 8f4a5ab28ec..df1da549c4c 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -708,6 +709,8 @@ XML2_LIBS
 XML2_CFLAGS
 XML2_CONFIG
 with_libxml
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9042,19 +9045,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12517,9 +12528,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12567,17 +12575,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12681,6 +12698,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14329,6 +14350,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index fc5f7475d07..218aeea1b3b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1033,19 +1033,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1340,9 +1348,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1640,6 +1645,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/meson.build b/meson.build
index 27717ad8976..b436c362147 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -860,13 +861,13 @@ endif
 ###############################################################
 
 libcurlopt = get_option('libcurl')
+oauth_flow_supported = false
+
 if not libcurlopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
   if libcurl.found()
-    cdata.set('USE_LIBCURL', 1)
-
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
@@ -938,6 +939,22 @@ if not libcurlopt.disabled()
     endif
   endif
 
+  # Check that the current platform supports our builtin flow. This requires
+  # libcurl and one of either epoll or kqueue.
+  oauth_flow_supported = (
+    libcurl.found()
+    and (cc.check_header('sys/event.h', required: false,
+                         args: test_c_args, include_directories: postgres_inc)
+         or cc.check_header('sys/epoll.h', required: false,
+                            args: test_c_args, include_directories: postgres_inc))
+  )
+
+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
+
 else
   libcurl = not_found_dep
 endif
@@ -3251,17 +3268,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 737b2dd1869..eb9b5de75b4 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -343,6 +343,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..5fd251a1d27
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,65 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := lib$(NAME).a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	oauth-curl.o
+
+# The shared library needs additional glue symbols.
+$(shlib): OBJS += oauth-utils.o
+$(shlib): oauth-utils.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) oauth-utils.o
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..45def6c1ab6
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,43 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, and the server asks for it, and
+a libpq client has not installed its own custom OAuth flow, libpq will attempt
+to delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+across major releases; the name of the module (libpq-oauth-MAJOR) reflects this.
+The module exports the following symbols:
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run. It also relies on libpq to expose
+conn->errorMessage via the errmsg_impl callback.
+
+This dependency injection is done to ensure that the module ABI is decoupled
+from the internals of `struct pg_conn`. This way we can safely search the
+standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache) for an
+implementation module to use, even if that module wasn't compiled at the same
+time as libpq.
+
+= Static Build =
+
+The static library libpq.a does not perform any dynamic loading. If the builtin
+flow is enabled, the application is expected to link against libpq-oauth-*.a
+directly to provide the necessary symbols.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..cf597e1da1e
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,43 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not oauth_flow_supported
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_st = static_library(libpq_oauth_name,
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 98%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index cd9c0323bb6..d52125415bc 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,23 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#include "oauth-utils.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -1110,7 +1113,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1137,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1160,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1175,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1231,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1312,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1333,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1362,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1417,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1430,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1450,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1462,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2487,8 +2482,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..2bdbf904743
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,202 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+static libpq_gettext_func libpq_gettext_impl;
+static conn_errorMessage_func conn_errorMessage;
+
+pgthreadlock_t pg_g_threadlock;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build doesn't enable NLS. That's a concerning
+		 * mismatch, but in this particular case we can handle it. Try to warn
+		 * a developer with an assertion, though.
+		 */
+		Assert(false);
+
+		/*
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..279fc113248
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,35 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..d4c20066ce4 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,7 +31,6 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
-	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -64,9 +63,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
+# The OAuth implementation differs depending on the type of library being built.
+OBJS_STATIC = fe-auth-oauth.o
+
+fe-auth-oauth_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+OBJS_SHLIB = fe-auth-oauth_shlib.o
 
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
@@ -86,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -101,12 +102,26 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
+ifeq ($(with_libcurl),yes)
+# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+# libpq-oauth needs libcurl. Put both into *.private.
+PKG_CONFIG_REQUIRES_PRIVATE += libcurl
+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth-$(MAJORVERSION)
+endif
+
 all: all-lib libpq-refs-stamp
 
 # Shared library stuff
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +130,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +137,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -138,6 +151,11 @@ fe-misc.o: fe-misc.c $(top_builddir)/src/port/pg_config_paths.h
 $(top_builddir)/src/port/pg_config_paths.h:
 	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
 
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
 install: all installdirs install-lib
 	$(INSTALL_DATA) $(srcdir)/libpq-fe.h '$(DESTDIR)$(includedir)'
 	$(INSTALL_DATA) $(srcdir)/libpq-events.h '$(DESTDIR)$(includedir)'
@@ -171,6 +189,6 @@ uninstall: uninstall-lib
 clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
-	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f $(OBJS) $(OBJS_SHLIB) $(OBJS_STATIC) pthread.h libpq-refs-stamp
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..5c285adccbd 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifdef USE_DYNAMIC_OAUTH
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -721,6 +725,143 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+/*-------------
+ * Builtin Flow
+ *
+ * There are three potential implementations of use_builtin_flow:
+ *
+ * 1) If the OAuth client is disabled at configuration time, return false.
+ *    Dependent clients must provide their own flow.
+ * 2) If the OAuth client is enabled and USE_DYNAMIC_OAUTH is defined, dlopen()
+ *    the libpq-oauth plugin and use its implementation.
+ * 3) Otherwise, use flow callbacks that are statically linked into the
+ *    executable.
+ */
+
+#if !defined(USE_LIBCURL)
+
+/*
+ * This configuration doesn't support the builtin flow.
+ */
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#elif defined(USE_DYNAMIC_OAUTH)
+
+/*
+ * Use the builtin flow in the libpq-oauth plugin, which is loaded at runtime.
+ */
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	state->builtin_flow = dlopen("libpq-oauth-" PG_MAJORVERSION DLSUFFIX,
+								 RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition; it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Inject necessary function pointers into the module.
+	 */
+	init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+		 libpq_gettext,
+#else
+		 NULL,
+#endif
+		 conn_errorMessage);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else
+
+/*
+ * Use the builtin flow in libpq-oauth.a (see libpq-oauth/oauth-curl.h).
+ */
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
+
+#endif							/* USE_LIBCURL */
+
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +933,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
 		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..687e664475f 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,12 +33,13 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..63e48d9fcfb 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -18,6 +18,7 @@ libpq_sources = files(
   'pqexpbuffer.c',
 )
 libpq_so_sources = [] # for shared lib, in addition to the above
+libpq_st_sources = [] # for static lib, in addition to the above
 
 if host_system == 'windows'
   libpq_sources += files('pthread-win32.c', 'win32.c')
@@ -38,10 +39,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
@@ -50,6 +47,9 @@ export_file = custom_target('libpq.exports',
 libpq_inc = include_directories('.', '../../port')
 libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 
+# The OAuth implementation differs depending on the type of library being built.
+libpq_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
 # Not using both_libraries() here as
 # 1) resource files should only be in the shared library
 # 2) we want the .pc file to include a dependency to {pgport,common}_static for
@@ -59,7 +59,7 @@ libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 # more complexity than its worth (reusing object files requires also linking
 # to the library on windows or breaks precompiled headers).
 libpq_st = static_library('libpq',
-  libpq_sources,
+  libpq_sources + libpq_st_sources,
   include_directories: [libpq_inc],
   c_args: libpq_c_args,
   c_pch: pch_postgres_fe_h,
@@ -70,7 +70,7 @@ libpq_st = static_library('libpq',
 libpq_so = shared_library('libpq',
   libpq_sources + libpq_so_sources,
   include_directories: [libpq_inc, postgres_inc],
-  c_args: libpq_c_args,
+  c_args: libpq_c_args + libpq_so_c_args,
   c_pch: pch_postgres_fe_h,
   version: '5.' + pg_version_major.to_string(),
   soversion: host_system != 'windows' ? '5' : '',
@@ -86,12 +86,26 @@ libpq = declare_dependency(
   include_directories: [include_directories('.')]
 )
 
+private_deps = [
+  frontend_stlib_code,
+  libpq_deps,
+]
+
+if oauth_flow_supported
+  # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+  # libpq-oauth needs libcurl. Put both into *.private.
+  private_deps += [
+    libpq_oauth_deps,
+    '-lpq-oauth-@0@'.format(pg_version_major),
+  ]
+endif
+
 pkgconfig.generate(
   name: 'libpq',
   description: 'PostgreSQL libpq library',
   url: pg_url,
   libraries: libpq,
-  libraries_private: [frontend_stlib_code, libpq_deps],
+  libraries_private: private_deps,
 )
 
 install_headers(
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 46d8da070e8..f2ba5b38124 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -201,6 +201,8 @@ pgxs_empty = [
   'ICU_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
-- 
2.34.1

#345 Peter Eisentraut
peter@eisentraut.org
In reply to: Jacob Champion (#332)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 10.04.25 01:08, Jacob Champion wrote:

> Christoph noted that this was also confusing from the packaging side,
> earlier, and Daniel proposed -Doauth-client/--with-oauth-client as the
> feature switch name instead. Any objections? Unfortunately it would
> mean a buildfarm email is in order, so we should get it locked in.

We had that discussion early in the development, and I still think it's
not the right choice.

The general idea, at least on the Autoconf side, is that --with-FOO
means enable all the features that require library FOO. For example,
--with-ldap enables all the LDAP-related features, including
authentication support in libpq, authentication support in the server,
and service lookup in libpq. --with-[open]ssl enables all the features
that use OpenSSL, including SSL support in the client and server but
also encryption support in pgcrypto.

The naming system you propose has problems:

First, what if we add another kind of "oauth-client" that doesn't use
libcurl, how would you extend the set of options?

Second, what if we add some kind of oauth plugin for the server that
uses libcurl, how would you extend the set of options?

If you used that system for options in the ldap or openssl cases, you'd
end up with maybe six options (and packagers would forget to turn on
half of them). But worse, what you are hiding is the information what
dependencies you are pulling in, which is the actual reason for the
options. (If there was no external dependency, there would be no option
at all.)

This seems unnecessarily complicated and inconsistent. Once you have
made the choice of taking the libcurl dependency, why not build
everything that requires it?

(Nitpick: If you go with this kind of option, it should be --enable-XXX
on the Autoconf side.)

#346 Christoph Berg
myon@debian.org
In reply to: Peter Eisentraut (#345)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Peter Eisentraut

> But worse, what you are hiding is the information what dependencies
> you are pulling in, which is the actual reason for the options. (If there
> was no external dependency, there would be no option at all.)
>
> This seems unnecessarily complicated and inconsistent. Once you have made
> the choice of taking the libcurl dependency, why not build everything that
> requires it?

I agree with this reasoning and retract my suggestion to rename the option.

Thanks for the explanation,
Christoph

#347 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Peter Eisentraut (#345)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 15, 2025 at 5:31 AM Peter Eisentraut <peter@eisentraut.org> wrote:

> On 10.04.25 01:08, Jacob Champion wrote:
>
>> Christoph noted that this was also confusing from the packaging side,
>> earlier,

Since Christoph has withdrawn the request, I will drop -0002.

However, I'll go ahead and put some of my opinions down on paper here:

> The general idea, at least on the Autoconf side, is that --with-FOO
> means enable all the features that require library FOO.

I don't think this is particularly user-friendly if it's not obvious
what feature is enabled by FOO.

LDAP? PAM? Sure. SSL? Eh, I think the pgcrypto coupling is a little
strange -- that's not implied by "SSL" at all! -- but it's not
problematic enough to complain loudly. --with-gssapi selects... some
dependency... which may or may not come from a particular library.
--with-bsd-auth doesn't add any library dependencies at all, instead
depending on the kernel, but it makes sense.

But there's no connection between "libcurl" and "OAuth Device
Authorization flow" in anyone's mind except the people who have worked
on that feature.

If the argument is that we'd need to switch to --enable-oauth-client
rather than --with-oauth-client, that works for me. But I don't quite
understand the desire to stick to the existing configuration
methodology for something that's very different from an end-user
perspective.

> The naming system you propose has problems:
>
> First, what if we add another kind of "oauth-client" that doesn't use
> libcurl, how would you extend the set of options?

With an extension to the values that you can provide to
--with-oauth-client, similarly to what was originally proposed for
--with-ssl.

> Second, what if we add some kind of oauth plugin for the server that
> uses libcurl, how would you extend the set of options?

With a new option.

But let me turn this around, because we currently have the opposite
problem: if someone comes in and adds a completely new feature
depending on libcurl, and you want OAuth but you do not want that new
feature -- or vice-versa -- what do you do? In other words, what if
your concern is not with libcurl, but with the feature itself?

> But worse, what you are hiding is the information what
> dependencies you are pulling in, which is the actual reason for the
> options. (If there was no external dependency, there would be no option
> at all.)

I'm not sure I agree, either practically or philosophically. I like to
see the build dependencies, definitely, but I also like to see the
features. (Meson will make both things visible separately, for that
matter.)

> This seems unnecessarily complicated and inconsistent. Once you have
> made the choice of taking the libcurl dependency, why not build
> everything that requires it?

Simply because the end user or packager might not want to.

In any case -- I won't die on this particular hill, and I'm happy to
continue forward with 0001 alone.

Thanks!
--Jacob

#348 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#346)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 15, 2025 at 8:34 AM Christoph Berg <myon@debian.org> wrote:

> I agree with this reasoning and retract my suggestion to rename the option.

(Thank you for chiming in; having the packager feedback has been
extremely helpful.)

While I have you, may I ask whether you're okay (from the packager
perspective) with the current division of dynamic and static
behaviors?

Dynamic: --with-libcurl builds a runtime-loadable module, and if you
don't install it, OAuth isn't supported (i.e. it's optional)
Static: --with-libcurl builds an additional linkable staticlib, which
you must link into your application (i.e. not optional)

I want to make absolutely sure the existing packager requests are not
conflicting. :D

Thanks,
--Jacob

#349 Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#348)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

> But there's no connection between "libcurl" and "OAuth Device
> Authorization flow" in anyone's mind except the people who have worked
> on that feature.

Fwiw that was exactly the reason I originally voiced the idea to
rename.

> But let me turn this around, because we currently have the opposite
> problem: if someone comes in and adds a completely new feature
> depending on libcurl, and you want OAuth but you do not want that new
> feature -- or vice-versa -- what do you do? In other words, what if
> your concern is not with libcurl, but with the feature itself?

What made me reconsider was Peter saying that what defines the blast
radius of some feature is usually the extra dependency pulled in. If
you don't like tracking OpenSSL problems, build without it. If you
don't like libcurl, build without it. That's the "we are going to be
hated by security scanner people" argument that brought up this
sub-thread.

Now if the feature itself were a problem, that might change how
configuration should be working. Is "libpq can now initiate oauth
requests" something people would like to be able to control?

Re: Jacob Champion

> Dynamic: --with-libcurl builds a runtime-loadable module, and if you
> don't install it, OAuth isn't supported (i.e. it's optional)

Ok.

> Static: --with-libcurl builds an additional linkable staticlib, which
> you must link into your application (i.e. not optional)

Debian does not care really about static libs. We are currently
shipping libpq.a, but if it breaks in any funny way, we might as well
remove it.

Christoph

#350 Robert Haas
robertmhaas@gmail.com
In reply to: Peter Eisentraut (#345)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 15, 2025 at 8:32 AM Peter Eisentraut <peter@eisentraut.org> wrote:

> On 10.04.25 01:08, Jacob Champion wrote:
>
>> Christoph noted that this was also confusing from the packaging side,
>> earlier, and Daniel proposed -Doauth-client/--with-oauth-client as the
>> feature switch name instead. Any objections? Unfortunately it would
>> mean a buildfarm email is in order, so we should get it locked in.
>
> We had that discussion early in the development, and I still think it's
> not the right choice.

I strongly agree. I think it will not be long before somebody
implements a second feature depending on libcurl, and there's no
precedent for the idea that we allow those features to be enabled and
disabled individually. If that turns out to be something that is
wanted, then since this will be a separate library, a packager can
choose not to ship it, or to put it in a separate package, if they
wish. If there's REALLY a lot of demand for a separate enable/disable
switch for this feature then we can consider making this an exception
to what we do for all other dependent libraries, but I bet there won't
be. I can imagine someone not wanting libcurl on their system on the
theory that it would potentially open up the ability to download data
from arbitrary URLs which might be considered bad from a security
posture -- but I don't really see why someone would be particularly
upset about one particular way in which libcurl might be used.

I also don't mind being wrong, of course. But I think it's better to
bet on making this like other things and then change strategy if that
doesn't work out, rather than starting out by making this different
from other things.

--
Robert Haas
EDB: http://www.enterprisedb.com

#351 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#349)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 15, 2025 at 11:57 AM Christoph Berg <myon@debian.org> wrote:

> What made me reconsider was Peter saying that what defines the blast
> radius of some feature is usually the extra dependency pulled in. If
> you don't like tracking OpenSSL problems, build without it. If you
> don't like libcurl, build without it. That's the "we are going to be
> hated by security scanner people" argument that brought up this
> sub-thread.
>
> Now if the feature itself were a problem, that might change how
> configuration should be working. Is "libpq can now initiate oauth
> requests" something people would like to be able to control?

Well... I'd sure like to live in a world where users thought about the
implications and risks of what they're using and why, rather than
farming a decision out to a static analysis tool. ("And as long as I'm
dreaming, I'd like a pony.")

But end users already control the initiation of OAuth requests (it's
opt-in via the connection string), so that's not really the goal.

Debian does not care really about static libs. We are currently
shipping libpq.a, but if it breaks in any funny way, we might as well
remove it.

Awesome. I think we have a consensus.

Thanks!
--Jacob

#352 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Robert Haas (#350)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 15, 2025 at 12:21 PM Robert Haas <robertmhaas@gmail.com> wrote:

> I also don't mind being wrong, of course. But I think it's better to
> bet on making this like other things and then change strategy if that
> doesn't work out, rather than starting out by making this different
> from other things.

Works for me. (And it's less work, too!)

Thanks,
--Jacob

#353 Jelte Fennema-Nio
postgres@jeltef.nl
In reply to: Jacob Champion (#347)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, 15 Apr 2025 at 19:53, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

> But let me turn this around, because we currently have the opposite
> problem: if someone comes in and adds a completely new feature
> depending on libcurl, and you want OAuth but you do not want that new
> feature -- or vice-versa -- what do you do? In other words, what if
> your concern is not with libcurl, but with the feature itself?

After reconsidering this, I now agree with Peter and Robert that
--with-libcurl is the flag that we should be relying on. Specifically
because of the situation you're describing above: Once you have
libcurl, why wouldn't you want all the features (e.g. in some other
thread there was a suggestion about fetching the PGSERVICEFILE from an
HTTP endpoint).

It's not like we add compile time flags for other user facing features
like --enable-index-scan. All the --enable-xyz options that we have
are for developer features (like debug and asserts). Starting to add
such a flag for this feature seems unnecessary.

Regarding discoverability, I think the error message that you have
already solves that:

libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");

Side-note: I think it would be good to have a different error when
libpq was built with libcurl support, but the dlopen failed. Something
like:

libpq_append_conn_error(conn, "no custom OAuth flows are available,
and the libpq-oauth library could not be loaded. Try installing the
libpq-oauth package from the same source that you installed libpq
from");

#354 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jelte Fennema-Nio (#353)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 15, 2025 at 2:38 PM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

> libpq_append_conn_error(conn, "no custom OAuth flows are available,
> and the libpq-oauth library could not be loaded. Try installing the
> libpq-oauth package from the same source that you installed libpq
> from");

Thanks! I think that's a little too prescriptive for packagers,
personally, but I agree that the current message isn't correct
anymore. I've gone with "no custom OAuth flows are available, and the
builtin flow is not installed". (I suppose packagers could patch in a
platform-specific message if they really wanted?)

--

Other changes in v7:

- The option name remains --with-libcurl.
- Daniel and I have tweaked the documentation, and a draft commit message is up
- Removed the ENABLE_NLS-mismatch assertion in oauth-utils.c; we don't
need to care anymore
- Added an initialization mutex

I was feeling paranoid about injecting dependency pointers
concurrently to their use in another thread. They're _supposed_ to be
constant... but I have no doubt that someone somewhere knows of a
platform/compiler/linker combo where that blows up anyway.
Initialization is now run once, under pthread_mutex protection.

- Fixed module load on macOS

The green CI was masking a bug with its use of DYLD_LIBRARY_PATH: we
don't make use of RPATH on macOS, so after installing libpq, it lost
the ability to find libpq-oauth. (A stale installation due to SIP
weirdness was masking this on my local machine; sorry for not catching
it before.)

I have swapped to using an absolute path on Mac only, because unlike
LD_LIBRARY_PATH on *nix, DYLD_LIBRARY_PATH can still override absolute
paths in dlopen()! Whee. I could use a sanity check from a native Mac
developer, but I believe this mirrors the expected behavior for a
"typical" runtime dependency: libraries point directly to the things
they depend on.

With those, I have no more TODOs and I believe this is ready for a
final review round.

Thanks,
--Jacob

Attachments:

v7-0001-oauth-Move-the-builtin-flow-into-a-separate-modul.patch (application/octet-stream)
From 942ad5391e2acbb143ffcfec3d5bf8023d4a17ad Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v7] oauth: Move the builtin flow into a separate module

The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.

When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.

For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.

The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid exporting
these internals to all libpq clients forever, these dependencies are
instead injected from the libpq side via an initialization function.
This also lets libpq communicate the offset of conn->errorMessage to
libpq-oauth, so that we can function without crashing if the module on
the search path came from a different build of Postgres.

This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across major Postgres versions. (Future improvements may
promote this "OAuth flow plugin" to a first-class concept, at which
point we would need a public API to replace this anyway.)

Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.

Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many many people reviewed the
design and implementation; thank you!

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 ++++-
 configure.ac                                  |  26 ++-
 doc/src/sgml/installation.sgml                |   8 +
 doc/src/sgml/libpq.sgml                       |  30 ++-
 meson.build                                   |  32 ++-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  65 ++++++
 src/interfaces/libpq-oauth/README             |  43 ++++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  43 ++++
 .../oauth-curl.c}                             |  60 +++---
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 198 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  35 ++++
 src/interfaces/libpq/Makefile                 |  36 +++-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 197 ++++++++++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   5 +-
 src/interfaces/libpq/meson.build              |  25 ++-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 src/test/modules/oauth_validator/meson.build  |   2 +-
 .../modules/oauth_validator/t/002_client.pl   |   2 +-
 25 files changed, 834 insertions(+), 98 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (98%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0936010718d..a4c4bcb40ea 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -711,6 +712,8 @@ with_libxml
 LIBNUMA_LIBS
 LIBNUMA_CFLAGS
 with_libnuma
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9053,19 +9056,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12704,9 +12715,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12754,17 +12762,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12868,6 +12885,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14516,6 +14537,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index 2a78cddd825..5d90bf0e979 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1033,19 +1033,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1354,9 +1362,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1654,6 +1659,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 077bcc20759..d928b103d22 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -313,6 +313,14 @@
      </para>
     </listitem>
 
+    <listitem>
+     <para>
+      You need <productname>Curl</productname> to build an optional module
+      which implements the <link linkend="libpq-oauth">OAuth Device
+      Authorization flow</link> for client applications.
+     </para>
+    </listitem>
+
     <listitem>
      <para>
       You need <productname>LZ4</productname>, if you want to support
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 37102c235b0..ae536b2da07 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10226,15 +10226,20 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   libpq implements support for the OAuth v2 Device Authorization client flow,
+   <application>libpq</application> implements support for the OAuth v2 Device Authorization client flow,
    documented in
    <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
-   which it will attempt to use by default if the server
+   as an optional module. See the <link linkend="configure-option-with-libcurl">
+   installation documentation</link> for information on how to enable support
+   for Device Authorization as a builtin flow.
+  </para>
+  <para>
+   When support is enabled and the optional module is installed, <application>libpq</application>
+   will use the builtin flow by default if the server
    <link linkend="auth-oauth">requests a bearer token</link> during
    authentication. This flow can be utilized even if the system running the
    client application does not have a usable web browser, for example when
-   running a client via <application>SSH</application>. Client applications may implement their own flows
-   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+   running a client via <acronym>SSH</acronym>.
   </para>
   <para>
    The builtin flow will, by default, print a URL to visit and a user code to
@@ -10251,6 +10256,11 @@ Visit https://example.com/device and enter the code: ABCD-EFGH
    they match expectations, before continuing. Permissions should not be given
    to untrusted third parties.
   </para>
+  <para>
+   Client applications may implement their own flows to customize interaction
+   and integration with applications. See <xref linkend="libpq-oauth-authdata-hooks"/>
+   for more information on how to add a custom flow to <application>libpq</application>.
+  </para>
   <para>
    For an OAuth client flow to be usable, the connection string must at minimum
    contain <xref linkend="libpq-connect-oauth-issuer"/> and
@@ -10366,7 +10376,9 @@ typedef struct _PGpromptOAuthDevice
 </synopsis>
         </para>
         <para>
-         The OAuth Device Authorization flow included in <application>libpq</application>
+         The OAuth Device Authorization flow which
+         <link linkend="configure-option-with-libcurl">can be included</link>
+         in <application>libpq</application>
          requires the end user to visit a URL with a browser, then enter a code
          which permits <application>libpq</application> to connect to the server
          on their behalf. The default prompt simply prints the
@@ -10378,7 +10390,8 @@ typedef struct _PGpromptOAuthDevice
          This callback is only invoked during the builtin device
          authorization flow. If the application installs a
          <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
-         flow</link>, this authdata type will not be used.
+         flow</link>, or <application>libpq</application> was not built with
+         support for the builtin flow, this authdata type will not be used.
         </para>
         <para>
          If a non-NULL <structfield>verification_uri_complete</structfield> is
@@ -10400,8 +10413,9 @@ typedef struct _PGpromptOAuthDevice
        </term>
        <listitem>
         <para>
-         Replaces the entire OAuth flow with a custom implementation. The hook
-         should either directly return a Bearer token for the current
+         Adds a custom implementation of a flow, replacing the builtin flow if
+         it is <link linkend="configure-option-with-libcurl">installed</link>.
+         The hook should either directly return a Bearer token for the current
          user/issuer/scope combination, if one is available without blocking, or
          else set up an asynchronous callback to retrieve one.
         </para>
diff --git a/meson.build b/meson.build
index a1516e54529..b902d4a00f8 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -860,13 +861,13 @@ endif
 ###############################################################
 
 libcurlopt = get_option('libcurl')
+oauth_flow_supported = false
+
 if not libcurlopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
   if libcurl.found()
-    cdata.set('USE_LIBCURL', 1)
-
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
@@ -938,6 +939,22 @@ if not libcurlopt.disabled()
     endif
   endif
 
+  # Check that the current platform supports our builtin flow. This requires
+  # libcurl and one of either epoll or kqueue.
+  oauth_flow_supported = (
+    libcurl.found()
+    and (cc.check_header('sys/event.h', required: false,
+                         args: test_c_args, include_directories: postgres_inc)
+         or cc.check_header('sys/epoll.h', required: false,
+                            args: test_c_args, include_directories: postgres_inc))
+  )
+
+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
+
 else
   libcurl = not_found_dep
 endif
@@ -3272,17 +3289,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 6722fbdf365..04952b533de 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -347,6 +347,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..5fd251a1d27
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,65 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := lib$(NAME).a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES) \
+	oauth-curl.o
+
+# The shared library needs additional glue symbols.
+$(shlib): OBJS += oauth-utils.o
+$(shlib): oauth-utils.o
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) oauth-utils.o
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..45def6c1ab6
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,43 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, and the server asks for it, and
+a libpq client has not installed its own custom OAuth flow, libpq will attempt
+to delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+across major releases; the name of the module (libpq-oauth-MAJOR) reflects this.
+The module exports the following symbols:
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run. It also relies on libpq to expose
+conn->errorMessage, via the errmsg_impl callback.
+
+This dependency injection is done to ensure that the module ABI is decoupled
+from the internals of `struct pg_conn`. This way we can safely search the
+standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache) for an
+implementation module to use, even if that module wasn't compiled at the same
+time as libpq.
+
+= Static Build =
+
+The static library libpq.a does not perform any dynamic loading. If the builtin
+flow is enabled, the application is expected to link against libpq-oauth-*.a
+directly to provide the necessary symbols.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..cf597e1da1e
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,43 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not oauth_flow_supported
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_st = static_library(libpq_oauth_name,
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 98%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index cd9c0323bb6..d52125415bc 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,23 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#include "oauth-utils.h"
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -1110,7 +1113,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1137,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1160,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1175,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1231,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1312,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1333,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1362,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1417,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1430,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1450,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1462,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2487,8 +2482,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..62ba4299ec1
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,198 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+static libpq_gettext_func libpq_gettext_impl;
+static conn_errorMessage_func conn_errorMessage;
+
+pgthreadlock_t pg_g_threadlock;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build didn't enable NLS but the libpq-oauth
+		 * build did. That's an odd mismatch, but we can handle it.
+		 *
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..279fc113248
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,35 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..d4c20066ce4 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,7 +31,6 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
-	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -64,9 +63,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
+# The OAuth implementation differs depending on the type of library being built.
+OBJS_STATIC = fe-auth-oauth.o
+
+fe-auth-oauth_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+OBJS_SHLIB = fe-auth-oauth_shlib.o
 
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
@@ -86,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -101,12 +102,26 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
+ifeq ($(with_libcurl),yes)
+# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+# libpq-oauth needs libcurl. Put both into *.private.
+PKG_CONFIG_REQUIRES_PRIVATE += libcurl
+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth-$(MAJORVERSION)
+endif
+
 all: all-lib libpq-refs-stamp
 
 # Shared library stuff
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +130,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +137,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -138,6 +151,11 @@ fe-misc.o: fe-misc.c $(top_builddir)/src/port/pg_config_paths.h
 $(top_builddir)/src/port/pg_config_paths.h:
 	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
 
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
 install: all installdirs install-lib
 	$(INSTALL_DATA) $(srcdir)/libpq-fe.h '$(DESTDIR)$(includedir)'
 	$(INSTALL_DATA) $(srcdir)/libpq-events.h '$(DESTDIR)$(includedir)'
@@ -171,6 +189,6 @@ uninstall: uninstall-lib
 clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
-	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f $(OBJS) $(OBJS_SHLIB) $(OBJS_STATIC) pthread.h libpq-refs-stamp
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..20c848ec9e0 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifdef USE_DYNAMIC_OAUTH
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,186 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+/*-------------
+ * Builtin Flow
+ *
+ * There are three potential implementations of use_builtin_flow:
+ *
+ * 1) If the OAuth client is disabled at configuration time, return false.
+ *    Dependent clients must provide their own flow.
+ * 2) If the OAuth client is enabled and USE_DYNAMIC_OAUTH is defined, dlopen()
+ *    the libpq-oauth plugin and use its implementation.
+ * 3) Otherwise, use flow callbacks that are statically linked into the
+ *    executable.
+ */
+
+#if !defined(USE_LIBCURL)
+
+/*
+ * This configuration doesn't support the builtin flow.
+ */
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#elif defined(USE_DYNAMIC_OAUTH)
+
+/*
+ * Use the builtin flow in the libpq-oauth plugin, which is loaded at runtime.
+ */
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	static bool initialized = false;
+	static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
+	int			lockerr;
+
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	/*
+	 * On macOS only, load the module using its absolute install path; the
+	 * standard search behavior is not very helpful for this use case. Unlike
+	 * on other platforms, DYLD_LIBRARY_PATH is used as a fallback even with
+	 * absolute paths (modulo SIP effects), so tests can continue to work.
+	 *
+	 * On the other platforms, load the module using only the basename, to
+	 * rely on the runtime linker's standard search behavior.
+	 */
+	const char *const module_name =
+#if defined(__darwin__)
+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
+#else
+		"libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
+#endif
+
+	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition, it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Past this point, we do not unload the module. It stays in the process
+	 * permanently.
+	 */
+
+	/*
+	 * We need to inject necessary function pointers into the module. This
+	 * only needs to be done once -- even if the pointers are constant,
+	 * assigning them while another thread is executing the flows feels like
+	 * tempting fate.
+	 */
+	if ((lockerr = pthread_mutex_lock(&init_mutex)) != 0)
+	{
+		/* Should not happen... but don't continue if it does. */
+		Assert(false);
+
+		libpq_append_conn_error(conn, "failed to lock mutex (%d)", lockerr);
+		return false;
+	}
+
+	if (!initialized)
+	{
+		init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+			 libpq_gettext,
+#else
+			 NULL,
+#endif
+			 conn_errorMessage);
+
+		initialized = true;
+	}
+
+	pthread_mutex_unlock(&init_mutex);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else
+
+/*
+ * Use the builtin flow in libpq-oauth.a (see libpq-oauth/oauth-curl.h).
+ */
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
+
+#endif							/* USE_LIBCURL */
+
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +977,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
-		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and the builtin flow is not installed");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..687e664475f 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,12 +33,13 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..29b3451e3aa 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
@@ -50,6 +46,9 @@ export_file = custom_target('libpq.exports',
 libpq_inc = include_directories('.', '../../port')
 libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 
+# The OAuth implementation differs depending on the type of library being built.
+libpq_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
 # Not using both_libraries() here as
 # 1) resource files should only be in the shared library
 # 2) we want the .pc file to include a dependency to {pgport,common}_static for
@@ -70,7 +69,7 @@ libpq_st = static_library('libpq',
 libpq_so = shared_library('libpq',
   libpq_sources + libpq_so_sources,
   include_directories: [libpq_inc, postgres_inc],
-  c_args: libpq_c_args,
+  c_args: libpq_c_args + libpq_so_c_args,
   c_pch: pch_postgres_fe_h,
   version: '5.' + pg_version_major.to_string(),
   soversion: host_system != 'windows' ? '5' : '',
@@ -86,12 +85,26 @@ libpq = declare_dependency(
   include_directories: [include_directories('.')]
 )
 
+private_deps = [
+  frontend_stlib_code,
+  libpq_deps,
+]
+
+if oauth_flow_supported
+  # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+  # libpq-oauth needs libcurl. Put both into *.private.
+  private_deps += [
+    libpq_oauth_deps,
+    '-lpq-oauth-@0@'.format(pg_version_major),
+  ]
+endif
+
 pkgconfig.generate(
   name: 'libpq',
   description: 'PostgreSQL libpq library',
   url: pg_url,
   libraries: libpq,
-  libraries_private: [frontend_stlib_code, libpq_deps],
+  libraries_private: private_deps,
 )
 
 install_headers(
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 55da678ec27..91a8de1ee9b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -203,6 +203,8 @@ pgxs_empty = [
   'LIBNUMA_CFLAGS', 'LIBNUMA_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 36d1b26369f..e190f9cf15a 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -78,7 +78,7 @@ tests += {
     ],
     'env': {
       'PYTHON': python.path(),
-      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_libcurl': oauth_flow_supported ? 'yes' : 'no',
       'with_python': 'yes',
     },
   },
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 54769f12f57..c23b53ac98f 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -111,7 +111,7 @@ if ($ENV{with_libcurl} ne 'yes')
 		"fails without custom hook installed",
 		flags => ["--no-hook"],
 		expected_stderr =>
-		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+		  qr/no custom OAuth flows are available, and the builtin flow is not installed/
 	);
 }
 
-- 
2.34.1

#355Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#354)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Apr 17, 2025 at 5:47 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

With those, I have no more TODOs and I believe this is ready for a
final review round.

Some ABI self-review. These references to conn->errorMessage also need
the indirection treatment, which I'm working on now:

    if (actx->errctx)
    {
        appendPQExpBufferStr(&conn->errorMessage,
                             libpq_gettext(actx->errctx));
        appendPQExpBufferStr(&conn->errorMessage, ": ");
    ...

I was searching backwards through history to confirm that we don't
rearrange struct pg_conn in back branches; turns out that was a false
assumption. See e8f60e6fe2:

While at it, fix some places where parameter-related infrastructure
was added with the aid of a dartboard, or perhaps with the aid of
the anti-pattern "add new stuff at the end". It should be safe
to rearrange the contents of struct pg_conn even in released
branches, since that's private to libpq (and we'd have to move
some fields in some builds to fix this, anyway).

So that means, I think, the name needs to go back to -<major>-<minor>,
unless anyone can think of a clever way around it. (Injecting
conn->errorMessage to avoid the messiness around ENABLE_GSS et al is
still useful, but injecting every single offset doesn't seem
maintainable to me.) Sorry, Christoph; I know that's not what you were
hoping for.

--Jacob

#356Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#354)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

libpq_append_conn_error(conn, "no custom OAuth flows are available,
and the libpq-oauth library could not be loaded. Try installing the
libpq-oauth package from the same source that you installed libpq
from");

Thanks! I think that's a little too prescriptive for packagers,
personally, but I agree that the current message isn't correct
anymore. I've gone with "no custom OAuth flows are available, and the
builtin flow is not installed".

This whole oauth business is highly confusing if you aren't a web
security expert. It's a pretty long way from "the builtin flow is not
installed" to "if you want this to work, you need to install an extra
library/package on the client", so I don't think this message is
helpful.

The originally suggested message was pretty good in that regard. The
distinction about custom flows could probably be dropped.

How about this:

No libpq OAuth flows are available. (Try installing the libpq-oauth package.)

People who have custom flows will likely know what they have to do
anyway.

Devrim: Does that match the package name you'd use?

(I suppose packagers could patch in a
platform-specific message if they really wanted?)

We could, but I'd prefer if we didn't have to. :*)

Christoph

#357Ivan Kush
ivan.kush@tantorlabs.com
In reply to: Jacob Champion (#344)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hello!

I'm testing the OAuth Device Flow implementation against Google, and
have met several problems.

Postgres from master branch, commit 764d501d24b
Google Device Flow API
https://developers.google.com/identity/protocols/oauth2/limited-input-device

1) In Device Authorization Request Google returns 428 code on pending
https://developers.google.com/identity/protocols/oauth2/limited-input-device#authorization-pending

The source code handles only the 400/401 error codes, which are the
ones from Section 5.2 of RFC 6749:
* An error response uses either 400 Bad Request or 401 Unauthorized.
* There are references online to implementations using 403 for error
* return which would violate the specification.

-----------------
I suggest adding a GUC to postgresql.conf that contains additional
non-standard error codes for a specific service:

oauth_add_error_codes = [
    {
        issuer: google
        add_err_codes: [428],
    },
    {
        issuer: someservice
        add_err_codes: [403],
    }
]

So Google would accept 400, 401, and 428.

Additionally, we would write parsing for such JSON-like config values.
It would be nice to create a serializer that maps a struct to such a
JSON-like GUC.

Or we could create a separate file, oauth.conf, holding the JSON-like
data, with postgresql.conf pointing to it via an oauth_conf GUC:

oauth_conf = /var/lib/postgres/data/oauth.conf

=================

2) Google requires client_secret only in the Device Access Token
Request (Section 3.3 of RFC 8628). Also note that the secret goes in
the body of the request:
https://developers.google.com/identity/protocols/oauth2/limited-input-device#step-4:-poll-googles-authorization-server

curl -d "client_id=client_id&client_secret=client_secret& \
         device_code=device_code& \
grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code" \
         -H "Content-Type: application/x-www-form-urlencoded" \
         https://oauth2.googleapis.com/token

Not in the Device Authorization Request (Section 3.1 of RFC 8628):
https://developers.google.com/identity/protocols/oauth2/limited-input-device#step-2:-handle-the-authorization-server-response

curl -d "client_id=client_id&scope=email%20profile" \
        https://oauth2.googleapis.com/device/code

But Postgres sends client_secret in both requests, including the Device
Authorization Request; see the calls to add_client_identification in
start_device_authz and start_token_request.
Azure also uses the secret only in the Device Access Token Request:
https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-device-code#device-authorization-request
-----------------

I suggest not sending the secret in the Device Authorization Request.

=================
3) Additionally, if a secret exists, PG sends it only using Basic Auth,
but the RFC only says MAY about Basic Auth (Section 2.3.1 of RFC 6749):

    if (conn->oauth_client_secret) /* Zero-length secrets are permitted! */
    {
        username = urlencode(conn->oauth_client_id);
        password = urlencode(conn->oauth_client_secret);
        ...
        CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_BASIC, goto cleanup);
        CHECK_SETOPT(actx, CURLOPT_USERNAME, username, goto cleanup);
        CHECK_SETOPT(actx, CURLOPT_PASSWORD, password, goto cleanup);
        actx->used_basic_auth = true;
    }
This section also describes including the credentials in the request
body, which is the approach Google uses:

Alternatively, the authorization server MAY support including the client
credentials in the request-body using the following parameters:

client_id
    REQUIRED. The client identifier issued to the client during the
    registration process described by Section 2.2
    <https://datatracker.ietf.org/doc/html/rfc6749#section-2.2>.

client_secret
    REQUIRED. The client secret. The client MAY omit the parameter if
    the client secret is an empty string.

https://developers.google.com/identity/protocols/oauth2/limited-input-device#step-2:-handle-the-authorization-server-response

-----------------
I suggest making such cases configurable. Let's create a JSON-like
oauth array config; the auth_scheme field shows which scheme we want to
use (see the GUC description in pt. 1 of this email):

oauth = [
    {
        issuer: google
        add_err_codes: [428],
        auth_scheme: body
    },
    {
        issuer: someservice
        add_err_codes: [403],
        auth_scheme: basic
    }
]

--
Best wishes,
Ivan Kush
Tantor Labs LLC

#358Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Ivan Kush (#357)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, Apr 20, 2025 at 10:12 AM Ivan Kush <ivan.kush@tantorlabs.com> wrote:

I'm testing the OAuth Device Flow implementation against Google, and
have met several problems.

Hi Ivan, thank you for testing and reporting! Unfortunately, yeah,
Google is a known problem [1]. They've taken several liberties with
the spec, as you point out.

We have some options for dealing with them, since their documentation
instructs clients to hardcode their API entry points instead of using
discovery. (That makes it easy for us to figure out when we're talking
to Google, and potentially switch to a quirks mode.)

But! Before we do that: How do you intend to authorize tokens issued
by Google? Last I checked, they still had no way to register an
application-specific scope, making it very dangerous IMO to use a
public flow [2]. Do you have an architecture where this usage is safe,
and/or have they added custom scopes? (I deprioritized handling the
nonstandard behavior when I couldn't prove to myself that it was
possible to use the Google version of Device Authorization safely, but
I'm happy to jump back into that if we have a good use case.)

1) In Device Authorization Request Google returns 428 code on pending
https://developers.google.com/identity/protocols/oauth2/limited-input-device#authorization-pending

Right. I believe there were other nonstandard errors in other corner
cases, too. :(

I suggest adding a GUC to postgresql.conf that contains additional
non-standard error codes for a specific service:

oauth_add_error_codes = [
    {
        issuer: google
        add_err_codes: [428],
    },
    {
        issuer: someservice
        add_err_codes: [403],
    }
]

So Google would accept 400, 401, and 428.

The server config doesn't help us much, since this is a client-side
feature. Any "global" configuration is probably going to be done
through environment variables or a service file [3].

Additionally, we would write parsing for such JSON-like config values.
It would be nice to create a serializer that maps a struct to such a
JSON-like GUC.

I'm not too excited about a separate configuration DSL. I'm guessing
most end users, if they really want Google as their Device
Authorization provider, would rather have us switch over to "Google
mode" once we notice the magic Google endpoint is in use.

2) Google requires client_secret only in the Device Access Token Request
(Section 3.3 RFC-8628).
...
But Postgres sends client_secret in both requests, including the Device
Authorization Request.

Yes. See 3.1 (Device Authorization Request):

The client authentication requirements of Section 3.2.1 of [RFC6749]
apply to requests on this endpoint, which means that confidential
clients (those that have established client credentials) authenticate
in the same manner as when making requests to the token endpoint, and
public clients provide the "client_id" parameter to identify
themselves.

I suggest not sending the secret in the Device Authorization Request.

This breaks Okta, at minimum. We can't do it across the board. (As for
Azure, I haven't figured out how to configure it to *require* a
confidential client secret for the device flow -- which makes a
certain amount of sense since the flow is public -- but its v2
endpoint doesn't mind being *sent* a secret.)

3) Additionally, if a secret exists, PG sends it only using Basic Auth,
but the RFC only says MAY about Basic Auth (Section 2.3.1 of RFC 6749).

From 2.3.1:

The authorization server MUST support the HTTP Basic
authentication scheme for authenticating clients that were issued a
client password.

We rely on that MUST, at the moment. We can add an exception for a
provider, certainly, but it needs to be limited for safety reasons:
"Including the client credentials in the request-body using the two
parameters is NOT RECOMMENDED and SHOULD be limited to clients unable
to directly utilize the HTTP Basic authentication scheme..."

(Authentication is its own nasty minefield; OAuth introduced its own
encoding requirements on top of HTTP that a bunch of servers ignored,
but in practice we cross our fingers that servers will only issue
ASCII credentials if they're not willing to follow the encoding
rules...)

So to recap: I'm happy to add a Google compatibility mode, but I'd
like to gather some evidence that their device flow can actually
authorize tokens for third parties safely, before we commit to that.
Thoughts?

Thanks!
--Jacob

[1]: /messages/by-id/CAOYmi+kTumP6FHwLnUKX0DVKrTv=N9xSOAu7YMH_XKSMP7ozfA@mail.gmail.com
[2]: /messages/by-id/CAOYmi+=MFyrjDps-YNtem3=Gr3mUsgZ49m7bfMCgr1TDjHL58g@mail.gmail.com
[3]: https://www.postgresql.org/docs/current/libpq-pgservice.html

#359Devrim Gündüz
devrim@gunduz.org
In reply to: Christoph Berg (#356)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On Sat, 2025-04-19 at 14:03 +0200, Christoph Berg wrote:

  No libpq OAuth flows are available. (Try installing the libpq-oauth package.)

People who have custom flows will likely know what they have to do
anyway.

Devrim: Does that match the package name you'd use?

In the PGDG RPM world it would be libpq5-oauth -- but I need to read
the whole thread first, as I don't know yet why we need to split oauth
out into a separate package (at least in the RPM world)

Regards,
--
Devrim Gündüz
Open Source Solution Architect, PostgreSQL Major Contributor
BlueSky: @devrim.gunduz.org , @gunduz.org

#360Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#356)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sat, Apr 19, 2025 at 5:04 AM Christoph Berg <myon@debian.org> wrote:

How about this:

No libpq OAuth flows are available. (Try installing the libpq-oauth package.)

Tweaked for capitalization/punctuation rules, and removing the first
"libpq" mention (which I don't think helps a user of, say, psql):

no OAuth flows are available (try installing the libpq-oauth package)

v8 also makes the following changes:

- Per ABI comment upthread, we are back to major-minor versioning for
the shared library (e.g. libpq-oauth-18-0.so). 0001 adds the macros
and makefile variables to make this easy, and 0002 is the bulk of the
change now.
- Since libpq-oauth.a is going to be discovered at compile time, not
runtime, I've removed the versioning from that filename. Static
clients need to match them anyway, so we don't need that additional
packaging headache.
- conn->errorMessage is now decoupled from oauth-curl.c. Separate
object file builds are made using the same technique as libpq.

Thanks,
--Jacob

Attachments:

since-v7.diff.txt (text/plain, charset=US-ASCII)
-:  ----------- > 1:  5f87f11b18e Add minor-version counterpart to (PG_)MAJORVERSION
1:  942ad5391e2 ! 2:  4c9cc7f69af oauth: Move the builtin flow into a separate module
    @@ Commit message
         the search path came from a different build of Postgres.
     
         This ABI is considered "private". The module has no SONAME or version
    -    symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
    -    matching across major Postgres versions. (Future improvements may
    -    promote this "OAuth flow plugin" to a first-class concept, at which
    -    point we would need a public API to replace this anyway.)
    +    symlinks, and it's named libpq-oauth-<major>-<minor>.so to avoid mixing
    +    and matching across Postgres versions, in case internal struct order
    +    needs to change. (Future improvements may promote this "OAuth flow
    +    plugin" to a first-class concept, at which point we would need a public
    +    API to replace this anyway.)
     
         Additionally, NLS support for error messages in b3f0be788a was
         incomplete, because the new error macros weren't being scanned by
    @@ src/interfaces/libpq-oauth/Makefile (new)
     +
     +# This is an internal module; we don't want an SONAME and therefore do not set
     +# SO_MAJOR_VERSION.
    -+NAME = pq-oauth-$(MAJORVERSION)
    ++NAME = pq-oauth-$(MAJORVERSION)-$(MINORVERSION)
     +
    -+# Force the name "libpq-oauth" for both the static and shared libraries.
    ++# Force the name "libpq-oauth" for both the static and shared libraries. The
    ++# staticlib doesn't need version information in its name.
     +override shlib := lib$(NAME)$(DLSUFFIX)
    -+override stlib := lib$(NAME).a
    ++override stlib := libpq-oauth.a
     +
     +override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
     +
     +OBJS = \
    -+	$(WIN32RES) \
    -+	oauth-curl.o
    ++	$(WIN32RES)
    ++
    ++OBJS_STATIC = oauth-curl.o
     +
     +# The shared library needs additional glue symbols.
    -+$(shlib): OBJS += oauth-utils.o
    -+$(shlib): oauth-utils.o
    ++OBJS_SHLIB = \
    ++	oauth-curl_shlib.o \
    ++	oauth-utils.o \
    ++
    ++oauth-utils.o: override CPPFLAGS += -DUSE_DYNAMIC_OAUTH
    ++oauth-curl_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
    ++
    ++# Add shlib-/stlib-specific objects.
    ++$(shlib): override OBJS += $(OBJS_SHLIB)
    ++$(shlib): $(OBJS_SHLIB)
    ++
    ++$(stlib): override OBJS += $(OBJS_STATIC)
    ++$(stlib): $(OBJS_STATIC)
     +
     +SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
     +SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
    @@ src/interfaces/libpq-oauth/Makefile (new)
     +# Shared library stuff
     +include $(top_srcdir)/src/Makefile.shlib
     +
    ++# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
    ++# objects.
    ++%_shlib.o: %.c %.o
    ++	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
    ++
     +# Ignore the standard rules for SONAME-less installation; we want both the
     +# static and shared libraries to go into libdir.
     +install: all installdirs $(stlib) $(shlib)
    @@ src/interfaces/libpq-oauth/Makefile (new)
     +	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
     +
     +clean distclean: clean-lib
    -+	rm -f $(OBJS) oauth-utils.o
    ++	rm -f $(OBJS) $(OBJS_STATIC) $(OBJS_SHLIB)
     
      ## src/interfaces/libpq-oauth/README (new) ##
     @@
    @@ src/interfaces/libpq-oauth/README (new)
     += Load-Time ABI =
     +
     +This module ABI is an internal implementation detail, so it's subject to change
    -+across major releases; the name of the module (libpq-oauth-MAJOR) reflects this.
    ++across releases; the name of the module (libpq-oauth-MAJOR-MINOR) reflects this.
     +The module exports the following symbols:
     +
     +- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
    @@ src/interfaces/libpq-oauth/README (new)
     +
     +At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
     +libpq_gettext(), which must be injected by libpq using this initialization
    -+function before the flow is run. It also relies on libpq to expose
    -+conn->errorMessage, via the errmsg_impl.
    ++function before the flow is run.
     +
    -+This dependency injection is done to ensure that the module ABI is decoupled
    -+from the internals of `struct pg_conn`. This way we can safely search the
    -+standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache) for an
    -+implementation module to use, even if that module wasn't compiled at the same
    -+time as libpq.
    ++It also relies on libpq to expose conn->errorMessage, via the errmsg_impl. This
    ++is done to decouple the module ABI from the offset of errorMessage, which can
    ++change positions depending on configure-time options. This way we can safely
    ++search the standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache)
    ++for an implementation module to use, even if that module wasn't compiled at the
    ++same time as libpq.
     +
     += Static Build =
     +
     +The static library libpq.a does not perform any dynamic loading. If the builtin
    -+flow is enabled, the application is expected to link against libpq-oauth-*.a
    ++flow is enabled, the application is expected to link against libpq-oauth.a
     +directly to provide the necessary symbols.
     
      ## src/interfaces/libpq-oauth/exports.txt (new) ##
    @@ src/interfaces/libpq-oauth/meson.build (new)
     +libpq_oauth_so_sources = files(
     +  'oauth-utils.c',
     +)
    ++libpq_oauth_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
     +
     +export_file = custom_target('libpq-oauth.exports',
     +  kwargs: gen_export_kwargs,
    @@ src/interfaces/libpq-oauth/meson.build (new)
     +# port needs to be in include path due to pthread-win32.h
     +libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
     +
    -+# This is an internal module; we don't want an SONAME and therefore do not set
    -+# SO_MAJOR_VERSION.
    -+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
    -+
    -+libpq_oauth_st = static_library(libpq_oauth_name,
    ++libpq_oauth_st = static_library('libpq-oauth',
     +  libpq_oauth_sources,
     +  include_directories: [libpq_oauth_inc, postgres_inc],
     +  c_pch: pch_postgres_fe_h,
    @@ src/interfaces/libpq-oauth/meson.build (new)
     +  kwargs: default_lib_args,
     +)
     +
    ++# This is an internal module; we don't want an SONAME and therefore do not set
    ++# SO_MAJOR_VERSION.
    ++libpq_oauth_name = 'libpq-oauth-@0@-@1@'.format(pg_version_major, pg_version_minor)
    ++
     +libpq_oauth_so = shared_module(libpq_oauth_name,
     +  libpq_oauth_sources + libpq_oauth_so_sources,
     +  include_directories: [libpq_oauth_inc, postgres_inc],
     ++  c_args: libpq_oauth_so_c_args,
     +  c_pch: pch_postgres_fe_h,
     +  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
     +  link_depends: export_file,
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c => src/interfaces/libpq-oauth/oauth-cu
     -#include "libpq-int.h"
      #include "mb/pg_wchar.h"
     +#include "oauth-curl.h"
    ++#ifdef USE_DYNAMIC_OAUTH
     +#include "oauth-utils.h"
    ++#endif
      
      /*
       * It's generally prudent to set a maximum response size to buffer in memory,
    @@ src/interfaces/libpq-oauth/oauth-curl.c: prompt_user(struct async_ctx *actx, PGc
      
      	if (!res)
      	{
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + {
    + 	fe_oauth_state *state = conn->sasl_state;
    + 	struct async_ctx *actx;
    ++	PQExpBuffer errbuf;
    + 
    + 	if (!initialize_curl(conn))
    + 		return PGRES_POLLING_FAILED;
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 
    + error_return:
    + 
    ++	/*
    ++	 * For the dynamic module build, we can't safely rely on the offset of
    ++	 * conn->errorMessage, since it depends on build options like USE_SSL et
    ++	 * al. libpq gives us a translator function instead.
    ++	 */
    ++#ifdef USE_DYNAMIC_OAUTH
    ++	errbuf = conn_errorMessage(conn);
    ++#else
    ++	errbuf = &conn->errorMessage;
    ++#endif
    ++
    + 	/*
    + 	 * Assemble the three parts of our error: context, body, and detail. See
    + 	 * also the documentation for struct async_ctx.
    + 	 */
    + 	if (actx->errctx)
    + 	{
    +-		appendPQExpBufferStr(&conn->errorMessage,
    +-							 libpq_gettext(actx->errctx));
    +-		appendPQExpBufferStr(&conn->errorMessage, ": ");
    ++		appendPQExpBufferStr(errbuf, libpq_gettext(actx->errctx));
    ++		appendPQExpBufferStr(errbuf, ": ");
    + 	}
    + 
    + 	if (PQExpBufferDataBroken(actx->errbuf))
    +-		appendPQExpBufferStr(&conn->errorMessage,
    +-							 libpq_gettext("out of memory"));
    ++		appendPQExpBufferStr(errbuf, libpq_gettext("out of memory"));
    + 	else
    +-		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
    ++		appendPQExpBufferStr(errbuf, actx->errbuf.data);
    + 
    + 	if (actx->curl_err[0])
    + 	{
    + 		size_t		len;
    + 
    +-		appendPQExpBuffer(&conn->errorMessage,
    +-						  " (libcurl: %s)", actx->curl_err);
    ++		appendPQExpBuffer(errbuf, " (libcurl: %s)", actx->curl_err);
    + 
    + 		/* Sometimes libcurl adds a newline to the error buffer. :( */
    +-		len = conn->errorMessage.len;
    +-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
    ++		len = errbuf->len;
    ++		if (len >= 2 && errbuf->data[len - 2] == '\n')
    + 		{
    +-			conn->errorMessage.data[len - 2] = ')';
    +-			conn->errorMessage.data[len - 1] = '\0';
    +-			conn->errorMessage.len--;
    ++			errbuf->data[len - 2] = ')';
    ++			errbuf->data[len - 1] = '\0';
    ++			errbuf->len--;
    + 		}
    + 	}
    + 
    +-	appendPQExpBufferChar(&conn->errorMessage, '\n');
    ++	appendPQExpBufferChar(errbuf, '\n');
    + 
    + 	return PGRES_POLLING_FAILED;
    + }
     
      ## src/interfaces/libpq-oauth/oauth-curl.h (new) ##
     @@
    @@ src/interfaces/libpq-oauth/oauth-utils.c (new)
     +#include "libpq-int.h"
     +#include "oauth-utils.h"
     +
    ++#ifndef USE_DYNAMIC_OAUTH
    ++#error oauth-utils.c is not supported in static builds
    ++#endif
    ++
     +static libpq_gettext_func libpq_gettext_impl;
    -+static conn_errorMessage_func conn_errorMessage;
     +
     +pgthreadlock_t pg_g_threadlock;
    ++conn_errorMessage_func conn_errorMessage;
     +
     +/*-
     + * Initializes libpq-oauth by setting necessary callbacks.
    @@ src/interfaces/libpq-oauth/oauth-utils.h (new)
     +										 libpq_gettext_func gettext_impl,
     +										 conn_errorMessage_func errmsg_impl);
     +
    ++/* Callback to safely obtain conn->errorMessage from a PGconn. */
    ++extern conn_errorMessage_func conn_errorMessage;
    ++
     +/* Duplicated APIs, copied from libpq. */
     +extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
     +extern bool oauth_unsafe_debugging_enabled(void);
    @@ src/interfaces/libpq/Makefile: ifeq ($(with_ssl),openssl)
     +# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
     +# libpq-oauth needs libcurl. Put both into *.private.
     +PKG_CONFIG_REQUIRES_PRIVATE += libcurl
    -+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth-$(MAJORVERSION)
    ++%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth
     +endif
     +
      all: all-lib libpq-refs-stamp
    @@ src/interfaces/libpq/fe-auth-oauth.c: cleanup_user_oauth_flow(PGconn *conn)
     +	 */
     +	const char *const module_name =
     +#if defined(__darwin__)
    -+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
    ++		LIBDIR "/libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
     +#else
    -+		"libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
    ++		"libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
     +#endif
     +
     +	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
    @@ src/interfaces/libpq/fe-auth-oauth.c: setup_token_request(PGconn *conn, fe_oauth
     -
     -#else
     -		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
    -+		libpq_append_conn_error(conn, "no custom OAuth flows are available, and the builtin flow is not installed");
    ++		libpq_append_conn_error(conn, "no OAuth flows are available (try installing the libpq-oauth package)");
      		goto fail;
     -
     -#endif
    @@ src/interfaces/libpq/meson.build: libpq = declare_dependency(
     +  # libpq-oauth needs libcurl. Put both into *.private.
     +  private_deps += [
     +    libpq_oauth_deps,
    -+    '-lpq-oauth-@0@'.format(pg_version_major),
    ++    '-lpq-oauth',
     +  ]
     +endif
     +
    @@ src/test/modules/oauth_validator/t/002_client.pl: if ($ENV{with_libcurl} ne 'yes
      		flags => ["--no-hook"],
      		expected_stderr =>
     -		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
    -+		  qr/no custom OAuth flows are available, and the builtin flow is not installed/
    ++		  qr/no OAuth flows are available \(try installing the libpq-oauth package\)/
      	);
      }
      
v8-0001-Add-minor-version-counterpart-to-PG_-MAJORVERSION.patchapplication/x-patch; name=v8-0001-Add-minor-version-counterpart-to-PG_-MAJORVERSION.patchDownload
From 5f87f11b18ea83615c342c832caace49bf7e3897 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 21 Apr 2025 13:43:08 -0700
Subject: [PATCH v8 1/2] Add minor-version counterpart to (PG_)MAJORVERSION

An upcoming commit will name a library, libpq-oauth, using the major and
minor versions. Make the minor version accessible from the Makefiles and
as a string constant in the code.
---
 configure                  | 7 +++++++
 configure.ac               | 2 ++
 meson.build                | 1 +
 src/Makefile.global.in     | 1 +
 src/include/pg_config.h.in | 3 +++
 src/makefiles/meson.build  | 1 +
 6 files changed, 15 insertions(+)

diff --git a/configure b/configure
index 0936010718d..3d783793dfa 100755
--- a/configure
+++ b/configure
@@ -792,6 +792,7 @@ build_os
 build_vendor
 build_cpu
 build
+PG_MINORVERSION
 PG_MAJORVERSION
 target_alias
 host_alias
@@ -2877,6 +2878,12 @@ cat >>confdefs.h <<_ACEOF
 _ACEOF
 
 
+
+cat >>confdefs.h <<_ACEOF
+#define PG_MINORVERSION "$PG_MINORVERSION"
+_ACEOF
+
+
 cat >>confdefs.h <<_ACEOF
 #define PG_MINORVERSION_NUM $PG_MINORVERSION
 _ACEOF
diff --git a/configure.ac b/configure.ac
index 2a78cddd825..1cb3a0ff042 100644
--- a/configure.ac
+++ b/configure.ac
@@ -35,6 +35,8 @@ test -n "$PG_MINORVERSION" || PG_MINORVERSION=0
 AC_SUBST(PG_MAJORVERSION)
 AC_DEFINE_UNQUOTED(PG_MAJORVERSION, "$PG_MAJORVERSION", [PostgreSQL major version as a string])
 AC_DEFINE_UNQUOTED(PG_MAJORVERSION_NUM, $PG_MAJORVERSION, [PostgreSQL major version number])
+AC_SUBST(PG_MINORVERSION)
+AC_DEFINE_UNQUOTED(PG_MINORVERSION, "$PG_MINORVERSION", [PostgreSQL minor version as a string])
 AC_DEFINE_UNQUOTED(PG_MINORVERSION_NUM, $PG_MINORVERSION, [PostgreSQL minor version number])
 
 PGAC_ARG_REQ(with, extra-version, [STRING], [append STRING to version],
diff --git a/meson.build b/meson.build
index a1516e54529..18423a7c13e 100644
--- a/meson.build
+++ b/meson.build
@@ -148,6 +148,7 @@ pg_version += get_option('extra_version')
 cdata.set_quoted('PG_VERSION', pg_version)
 cdata.set_quoted('PG_MAJORVERSION', pg_version_major.to_string())
 cdata.set('PG_MAJORVERSION_NUM', pg_version_major)
+cdata.set_quoted('PG_MINORVERSION', pg_version_minor.to_string())
 cdata.set('PG_MINORVERSION_NUM', pg_version_minor)
 cdata.set('PG_VERSION_NUM', pg_version_num)
 # PG_VERSION_STR is built later, it depends on compiler test results
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 6722fbdf365..54b4a07712e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -40,6 +40,7 @@ maintainer-clean: distclean
 # PostgreSQL version number
 VERSION = @PACKAGE_VERSION@
 MAJORVERSION = @PG_MAJORVERSION@
+MINORVERSION = @PG_MINORVERSION@
 VERSION_NUM = @PG_VERSION_NUM@
 
 PACKAGE_URL = @PACKAGE_URL@
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index c3cc9fa856d..4fe37d228c5 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -602,6 +602,9 @@
 /* PostgreSQL major version number */
 #undef PG_MAJORVERSION_NUM
 
+/* PostgreSQL minor version as a string */
+#undef PG_MINORVERSION
+
 /* PostgreSQL minor version number */
 #undef PG_MINORVERSION_NUM
 
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 55da678ec27..e3adb5d8dc4 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -36,6 +36,7 @@ pgxs_kv = {
   'PACKAGE_URL': pg_url,
   'PACKAGE_VERSION': pg_version,
   'PG_MAJORVERSION': pg_version_major,
+  'PG_MINORVERSION': pg_version_minor,
   'PG_VERSION_NUM': pg_version_num,
   'configure_input': 'meson',
 
-- 
2.34.1

v8-0002-oauth-Move-the-builtin-flow-into-a-separate-modul.patchapplication/x-patch; name=v8-0002-oauth-Move-the-builtin-flow-into-a-separate-modul.patchDownload
From 4c9cc7f69afd8f59f98b38900e53fc199d4b4009 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v8 2/2] oauth: Move the builtin flow into a separate module

The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.

When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.

For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.

The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid exporting
these internals to all libpq clients forever, these dependencies are
instead injected from the libpq side via an initialization function.
This also lets libpq communicate the offset of conn->errorMessage to
libpq-oauth, so that we can function without crashing if the module on
the search path came from a different build of Postgres.

This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>-<minor>.so to avoid mixing
and matching across Postgres versions, in case internal struct order
needs to change. (Future improvements may promote this "OAuth flow
plugin" to a first-class concept, at which point we would need a public
API to replace this anyway.)

Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.

Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many many people reviewed the
design and implementation; thank you!

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 ++++-
 configure.ac                                  |  26 ++-
 doc/src/sgml/installation.sgml                |   8 +
 doc/src/sgml/libpq.sgml                       |  30 ++-
 meson.build                                   |  32 ++-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  83 +++++++
 src/interfaces/libpq-oauth/README             |  43 ++++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  45 ++++
 .../oauth-curl.c}                             |  99 +++++----
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 202 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  38 ++++
 src/interfaces/libpq/Makefile                 |  36 +++-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 197 ++++++++++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   5 +-
 src/interfaces/libpq/meson.build              |  25 ++-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 src/test/modules/oauth_validator/meson.build  |   2 +-
 .../modules/oauth_validator/t/002_client.pl   |   2 +-
 25 files changed, 886 insertions(+), 112 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (97%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 3d783793dfa..eedd18e6d9a 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -711,6 +712,8 @@ with_libxml
 LIBNUMA_LIBS
 LIBNUMA_CFLAGS
 with_libnuma
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9060,19 +9063,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12711,9 +12722,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12761,17 +12769,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12875,6 +12892,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14523,6 +14544,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index 1cb3a0ff042..7329b23d309 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1035,19 +1035,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1356,9 +1364,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1656,6 +1661,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 077bcc20759..d928b103d22 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -313,6 +313,14 @@
      </para>
     </listitem>
 
+    <listitem>
+     <para>
+      You need <productname>Curl</productname> to build an optional module
+      which implements the <link linkend="libpq-oauth">OAuth Device
+      Authorization flow</link> for client applications.
+     </para>
+    </listitem>
+
     <listitem>
      <para>
       You need <productname>LZ4</productname>, if you want to support
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3be66789ba7..cd748902f4d 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10226,15 +10226,20 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   libpq implements support for the OAuth v2 Device Authorization client flow,
+   <application>libpq</application> implements support for the OAuth v2 Device Authorization client flow,
    documented in
    <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
-   which it will attempt to use by default if the server
+   as an optional module. See the <link linkend="configure-option-with-libcurl">
+   installation documentation</link> for information on how to enable support
+   for Device Authorization as a builtin flow.
+  </para>
+  <para>
+   When support is enabled and the optional module is installed, <application>libpq</application>
+   will use the builtin flow by default if the server
    <link linkend="auth-oauth">requests a bearer token</link> during
    authentication. This flow can be utilized even if the system running the
    client application does not have a usable web browser, for example when
-   running a client via <application>SSH</application>. Client applications may implement their own flows
-   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+   running a client via <acronym>SSH</acronym>.
   </para>
   <para>
    The builtin flow will, by default, print a URL to visit and a user code to
@@ -10251,6 +10256,11 @@ Visit https://example.com/device and enter the code: ABCD-EFGH
    they match expectations, before continuing. Permissions should not be given
    to untrusted third parties.
   </para>
+  <para>
+   Client applications may implement their own flows to customize interaction
+   and integration with applications. See <xref linkend="libpq-oauth-authdata-hooks"/>
+   for more information on how to add a custom flow to <application>libpq</application>.
+  </para>
   <para>
    For an OAuth client flow to be usable, the connection string must at minimum
    contain <xref linkend="libpq-connect-oauth-issuer"/> and
@@ -10366,7 +10376,9 @@ typedef struct _PGpromptOAuthDevice
 </synopsis>
         </para>
         <para>
-         The OAuth Device Authorization flow included in <application>libpq</application>
+         The OAuth Device Authorization flow which
+         <link linkend="configure-option-with-libcurl">can be included</link>
+         in <application>libpq</application>
          requires the end user to visit a URL with a browser, then enter a code
          which permits <application>libpq</application> to connect to the server
          on their behalf. The default prompt simply prints the
@@ -10378,7 +10390,8 @@ typedef struct _PGpromptOAuthDevice
          This callback is only invoked during the builtin device
          authorization flow. If the application installs a
          <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
-         flow</link>, this authdata type will not be used.
+         flow</link>, or <application>libpq</application> was not built with
+         support for the builtin flow, this authdata type will not be used.
         </para>
         <para>
          If a non-NULL <structfield>verification_uri_complete</structfield> is
@@ -10400,8 +10413,9 @@ typedef struct _PGpromptOAuthDevice
        </term>
        <listitem>
         <para>
-         Replaces the entire OAuth flow with a custom implementation. The hook
-         should either directly return a Bearer token for the current
+         Adds a custom implementation of a flow, replacing the builtin flow if
+         it is <link linkend="configure-option-with-libcurl">installed</link>.
+         The hook should either directly return a Bearer token for the current
          user/issuer/scope combination, if one is available without blocking, or
          else set up an asynchronous callback to retrieve one.
         </para>
diff --git a/meson.build b/meson.build
index 18423a7c13e..6787683ca27 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -861,13 +862,13 @@ endif
 ###############################################################
 
 libcurlopt = get_option('libcurl')
+oauth_flow_supported = false
+
 if not libcurlopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
   if libcurl.found()
-    cdata.set('USE_LIBCURL', 1)
-
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
@@ -939,6 +940,22 @@ if not libcurlopt.disabled()
     endif
   endif
 
+  # Check that the current platform supports our builtin flow. This requires
+  # libcurl and one of either epoll or kqueue.
+  oauth_flow_supported = (
+    libcurl.found()
+    and (cc.check_header('sys/event.h', required: false,
+                         args: test_c_args, include_directories: postgres_inc)
+         or cc.check_header('sys/epoll.h', required: false,
+                            args: test_c_args, include_directories: postgres_inc))
+  )
+
+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
+
 else
   libcurl = not_found_dep
 endif
@@ -3273,17 +3290,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 54b4a07712e..f4caece04df 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -348,6 +348,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..98acaff1a3b
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,83 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)-$(MINORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries. The
+# staticlib doesn't need version information in its name.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := libpq-oauth.a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES)
+
+OBJS_STATIC = oauth-curl.o
+
+# The shared library needs additional glue symbols.
+OBJS_SHLIB = \
+	oauth-curl_shlib.o \
+	oauth-utils.o
+
+oauth-utils.o: override CPPFLAGS += -DUSE_DYNAMIC_OAUTH
+oauth-curl_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) $(OBJS_STATIC) $(OBJS_SHLIB)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..fdc1320d152
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,43 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, and the server asks for it, and
+a libpq client has not installed its own custom OAuth flow, libpq will attempt
+to delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+across releases; the name of the module (libpq-oauth-MAJOR-MINOR) reflects this.
+The module exports the following symbols:
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run.
+
+It also relies on libpq to expose conn->errorMessage via the errmsg_impl. This
+is done to decouple the module ABI from the offset of errorMessage in PGconn,
+which can change depending on configure-time options. This way we can safely
+search the standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache)
+for an implementation module to use, even if that module wasn't compiled at the
+same time as libpq.
+
+= Static Build =
+
+The static library libpq.a does not perform any dynamic loading. If the builtin
+flow is enabled, the application is expected to link against libpq-oauth.a
+directly to provide the necessary symbols.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..d97f893178a
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,45 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not oauth_flow_supported
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+libpq_oauth_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+libpq_oauth_st = static_library('libpq-oauth',
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@-@1@'.format(pg_version_major, pg_version_minor)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_args: libpq_oauth_so_c_args,
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 97%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index c195e00cd28..3239315d952 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,25 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#ifdef USE_DYNAMIC_OAUTH
+#include "oauth-utils.h"
+#endif
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -1110,7 +1115,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1139,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1162,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1177,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1233,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1314,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1335,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1364,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1419,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1432,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1452,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1464,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2487,8 +2484,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
@@ -2635,6 +2633,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 {
 	fe_oauth_state *state = conn->sasl_state;
 	struct async_ctx *actx;
+	PQExpBuffer errbuf;
 
 	if (!initialize_curl(conn))
 		return PGRES_POLLING_FAILED;
@@ -2825,41 +2824,49 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 
 error_return:
 
+	/*
+	 * For the dynamic module build, we can't safely rely on the offset of
+	 * conn->errorMessage, since it depends on build options like USE_SSL et
+	 * al. libpq gives us a translator function instead.
+	 */
+#ifdef USE_DYNAMIC_OAUTH
+	errbuf = conn_errorMessage(conn);
+#else
+	errbuf = &conn->errorMessage;
+#endif
+
 	/*
 	 * Assemble the three parts of our error: context, body, and detail. See
 	 * also the documentation for struct async_ctx.
 	 */
 	if (actx->errctx)
 	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext(actx->errctx));
-		appendPQExpBufferStr(&conn->errorMessage, ": ");
+		appendPQExpBufferStr(errbuf, libpq_gettext(actx->errctx));
+		appendPQExpBufferStr(errbuf, ": ");
 	}
 
 	if (PQExpBufferDataBroken(actx->errbuf))
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext("out of memory"));
+		appendPQExpBufferStr(errbuf, libpq_gettext("out of memory"));
 	else
-		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+		appendPQExpBufferStr(errbuf, actx->errbuf.data);
 
 	if (actx->curl_err[0])
 	{
 		size_t		len;
 
-		appendPQExpBuffer(&conn->errorMessage,
-						  " (libcurl: %s)", actx->curl_err);
+		appendPQExpBuffer(errbuf, " (libcurl: %s)", actx->curl_err);
 
 		/* Sometimes libcurl adds a newline to the error buffer. :( */
-		len = conn->errorMessage.len;
-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		len = errbuf->len;
+		if (len >= 2 && errbuf->data[len - 2] == '\n')
 		{
-			conn->errorMessage.data[len - 2] = ')';
-			conn->errorMessage.data[len - 1] = '\0';
-			conn->errorMessage.len--;
+			errbuf->data[len - 2] = ')';
+			errbuf->data[len - 1] = '\0';
+			errbuf->len--;
 		}
 	}
 
-	appendPQExpBufferChar(&conn->errorMessage, '\n');
+	appendPQExpBufferChar(errbuf, '\n');
 
 	return PGRES_POLLING_FAILED;
 }
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..1f85a6b0479
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,202 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+#ifndef USE_DYNAMIC_OAUTH
+#error oauth-utils.c is not supported in static builds
+#endif
+
+static libpq_gettext_func libpq_gettext_impl;
+
+pgthreadlock_t pg_g_threadlock;
+conn_errorMessage_func conn_errorMessage;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build didn't enable NLS but the libpq-oauth
+		 * build did. That's an odd mismatch, but we can handle it.
+		 *
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..e2a9d01237d
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,38 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Callback to safely obtain conn->errorMessage from a PGconn. */
+extern conn_errorMessage_func conn_errorMessage;
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..c6fe5fec7f6 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,7 +31,6 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
-	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -64,9 +63,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
+# The OAuth implementation differs depending on the type of library being built.
+OBJS_STATIC = fe-auth-oauth.o
+
+fe-auth-oauth_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+OBJS_SHLIB = fe-auth-oauth_shlib.o
 
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
@@ -86,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -101,12 +102,26 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
+ifeq ($(with_libcurl),yes)
+# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+# libpq-oauth needs libcurl. Put both into *.private.
+PKG_CONFIG_REQUIRES_PRIVATE += libcurl
+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth
+endif
+
 all: all-lib libpq-refs-stamp
 
 # Shared library stuff
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +130,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +137,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -138,6 +151,11 @@ fe-misc.o: fe-misc.c $(top_builddir)/src/port/pg_config_paths.h
 $(top_builddir)/src/port/pg_config_paths.h:
 	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
 
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
 install: all installdirs install-lib
 	$(INSTALL_DATA) $(srcdir)/libpq-fe.h '$(DESTDIR)$(includedir)'
 	$(INSTALL_DATA) $(srcdir)/libpq-events.h '$(DESTDIR)$(includedir)'
@@ -171,6 +189,6 @@ uninstall: uninstall-lib
 clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
-	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f $(OBJS) $(OBJS_SHLIB) $(OBJS_STATIC) pthread.h libpq-refs-stamp
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..ccdd9139cf1 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifdef USE_DYNAMIC_OAUTH
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,186 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+/*-------------
+ * Builtin Flow
+ *
+ * There are three potential implementations of use_builtin_flow:
+ *
+ * 1) If the OAuth client is disabled at configuration time, return false.
+ *    Dependent clients must provide their own flow.
+ * 2) If the OAuth client is enabled and USE_DYNAMIC_OAUTH is defined, dlopen()
+ *    the libpq-oauth plugin and use its implementation.
+ * 3) Otherwise, use flow callbacks that are statically linked into the
+ *    executable.
+ */
+
+#if !defined(USE_LIBCURL)
+
+/*
+ * This configuration doesn't support the builtin flow.
+ */
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#elif defined(USE_DYNAMIC_OAUTH)
+
+/*
+ * Use the builtin flow in the libpq-oauth plugin, which is loaded at runtime.
+ */
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	static bool initialized = false;
+	static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
+	int			lockerr;
+
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	/*
+	 * On macOS only, load the module using its absolute install path; the
+	 * standard search behavior is not very helpful for this use case. Unlike
+	 * on other platforms, DYLD_LIBRARY_PATH is used as a fallback even with
+	 * absolute paths (modulo SIP effects), so tests can continue to work.
+	 *
+	 * On the other platforms, load the module using only the basename, to
+	 * rely on the runtime linker's standard search behavior.
+	 */
+	const char *const module_name =
+#if defined(__darwin__)
+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
+#else
+		"libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
+#endif
+
+	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition; it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Past this point, we do not unload the module. It stays in the process
+	 * permanently.
+	 */
+
+	/*
+	 * We need to inject necessary function pointers into the module. This
+	 * only needs to be done once -- even if the pointers are constant,
+	 * assigning them while another thread is executing the flows feels like
+	 * tempting fate.
+	 */
+	if ((lockerr = pthread_mutex_lock(&init_mutex)) != 0)
+	{
+		/* Should not happen... but don't continue if it does. */
+		Assert(false);
+
+		libpq_append_conn_error(conn, "failed to lock mutex (%d)", lockerr);
+		return false;
+	}
+
+	if (!initialized)
+	{
+		init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+			 libpq_gettext,
+#else
+			 NULL,
+#endif
+			 conn_errorMessage);
+
+		initialized = true;
+	}
+
+	pthread_mutex_unlock(&init_mutex);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else
+
+/*
+ * Use the builtin flow in libpq-oauth.a (see libpq-oauth/oauth-curl.h).
+ */
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
+
+#endif							/* USE_LIBCURL */
+
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +977,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
-		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		libpq_append_conn_error(conn, "no OAuth flows are available (try installing the libpq-oauth package)");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..687e664475f 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,12 +33,13 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..a74e885b169 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
@@ -50,6 +46,9 @@ export_file = custom_target('libpq.exports',
 libpq_inc = include_directories('.', '../../port')
 libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 
+# The OAuth implementation differs depending on the type of library being built.
+libpq_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
 # Not using both_libraries() here as
 # 1) resource files should only be in the shared library
 # 2) we want the .pc file to include a dependency to {pgport,common}_static for
@@ -70,7 +69,7 @@ libpq_st = static_library('libpq',
 libpq_so = shared_library('libpq',
   libpq_sources + libpq_so_sources,
   include_directories: [libpq_inc, postgres_inc],
-  c_args: libpq_c_args,
+  c_args: libpq_c_args + libpq_so_c_args,
   c_pch: pch_postgres_fe_h,
   version: '5.' + pg_version_major.to_string(),
   soversion: host_system != 'windows' ? '5' : '',
@@ -86,12 +85,26 @@ libpq = declare_dependency(
   include_directories: [include_directories('.')]
 )
 
+private_deps = [
+  frontend_stlib_code,
+  libpq_deps,
+]
+
+if oauth_flow_supported
+  # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+  # libpq-oauth needs libcurl. Put both into *.private.
+  private_deps += [
+    libpq_oauth_deps,
+    '-lpq-oauth',
+  ]
+endif
+
 pkgconfig.generate(
   name: 'libpq',
   description: 'PostgreSQL libpq library',
   url: pg_url,
   libraries: libpq,
-  libraries_private: [frontend_stlib_code, libpq_deps],
+  libraries_private: private_deps,
 )
 
 install_headers(
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index e3adb5d8dc4..48d01a54dc6 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -204,6 +204,8 @@ pgxs_empty = [
   'LIBNUMA_CFLAGS', 'LIBNUMA_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 36d1b26369f..e190f9cf15a 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -78,7 +78,7 @@ tests += {
     ],
     'env': {
       'PYTHON': python.path(),
-      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_libcurl': oauth_flow_supported ? 'yes' : 'no',
       'with_python': 'yes',
     },
   },
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 8dd502f41e1..21d4acc1926 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -110,7 +110,7 @@ if ($ENV{with_libcurl} ne 'yes')
 		"fails without custom hook installed",
 		flags => ["--no-hook"],
 		expected_stderr =>
-		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+		  qr/no OAuth flows are available \(try installing the libpq-oauth package\)/
 	);
 }
 
-- 
2.34.1

#361Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#360)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 22 Apr 2025, at 01:19, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

v8 also makes the following changes:

Thanks for this version, a few small comments:

+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
We already know that libcurlopt.enabled() is true here so maybe just doing
if-else-endif would make it more readable and save readers thinking it might
have changed?  Also, "client OAuth" reads a bit strange, how about "client-side
OAuth" or "OAuth flow module"?
-       appendPQExpBufferStr(&conn->errorMessage,
-                            libpq_gettext(actx->errctx));
-       appendPQExpBufferStr(&conn->errorMessage, ": ");
+       appendPQExpBufferStr(errbuf, libpq_gettext(actx->errctx));
+       appendPQExpBufferStr(errbuf, ": ");
I think we should take this opportunity to turn this into an appendPQExpBuffer()
with a format string instead of two calls.
+       len = errbuf->len;
+       if (len >= 2 && errbuf->data[len - 2] == '\n')
Now that the actual variable, errbuf->len, is short and very descriptive I
wonder if we shouldn't just use this as it makes the code even clearer IMO.

--
Daniel Gustafsson

#362Ivan Kush
ivan.kush@tantorlabs.com
In reply to: Jacob Champion (#358)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi Jacob, thank you for the detailed explanation and links!

Am I right that the classic OAuth flow of "create a user account based on a
token" would be implemented using custom validators?

1) In pg_hba.conf, set the user to all and set "delegate_ident_mapping=1":

"local all all oauth issuer=$issuer scope=$scope delegate_ident_mapping=1"

2) Write a custom validator in C that, after verifying the token, "executes"
`CREATE USER token.name WITH token.listofOptions`.
On 25-04-21 19:57, Jacob Champion wrote:

We have some options for dealing with them, since their documentation
instructs clients to hardcode their API entry points instead of using
discovery. (That makes it easy for us to figure out when we're talking
to Google, and potentially switch to a quirks mode.)

What do you mean by "discovery"? The OpenID link that returns the endpoints?

Google has this link

https://accounts.google.com/.well-known/openid-configuration

OUTPUT:
    {
        "issuer": "https://accounts.google.com",
        "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
        "device_authorization_endpoint": "https://oauth2.googleapis.com/device/code",
        "token_endpoint": "https://oauth2.googleapis.com/token",
        "userinfo_endpoint": "https://openidconnect.googleapis.com/v1/userinfo",
        "revocation_endpoint": "https://oauth2.googleapis.com/revoke",
        "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
............
    }

It's described here:

https://developers.google.com/identity/openid-connect/openid-connect
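A client can pull the endpoints it needs out of such a discovery document like
this — a minimal sketch using a truncated, pasted copy of the JSON above so it
runs offline; a live client would instead fetch
<issuer>/.well-known/openid-configuration over HTTPS:

```python
import json

# Truncated copy of the discovery document quoted above, limited to the
# fields relevant to the Device Authorization flow.
discovery_json = """
{
    "issuer": "https://accounts.google.com",
    "device_authorization_endpoint": "https://oauth2.googleapis.com/device/code",
    "token_endpoint": "https://oauth2.googleapis.com/token"
}
"""

# In a live client this document comes from
#   GET <issuer>/.well-known/openid-configuration
# here we parse a pasted copy to stay offline.
doc = json.loads(discovery_json)

device_endpoint = doc["device_authorization_endpoint"]
token_endpoint = doc["token_endpoint"]
```

The device flow then POSTs to `device_endpoint` to obtain a user code and
polls `token_endpoint` until the user approves.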

But! Before we do that: How do you intend to authorize tokens issued
by Google? Last I checked, they still had no way to register an
application-specific scope, making it very dangerous IMO to use a
public flow [2].

Like Antonin, I've also thought about using
https://www.googleapis.com/oauth2/v3/userinfo for verification.

As I understand from [2], the current problem is security: Google
doesn't want to add new scopes.

--
Best wishes,
Ivan Kush
Tantor Labs LLC

#363Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#361)
3 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Apr 22, 2025 at 3:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:

+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client OAuth is not supported on this platform')
+  endif
We already know that libcurlopt.enabled() is true here so maybe just doing
if-else-endif would make it more readable and save readers thinking it might
have changed?

Features are tri-state, so libcurlopt.disabled() and
libcurlopt.enabled() can both be false. :( My intent is to fall
through nicely in the case where -Dlibcurl=auto.

(Our minimum version of Meson is too old to switch to syntax that
makes this more readable, like .allowed(), .require(), .disable_if(),
etc...)

Also, "client OAuth" reads a bit strange, how about "client-side
OAuth" or "OAuth flow module"?
...
I think we should take this opportunity to turn this into a appendPQExpBuffer()
with a format string instead of two calls.
...
Now that the actual variable, errbuf->len, is short and very descriptive I
wonder if we shouldn't just use this as it makes the code even clearer IMO.

All three done in v9, attached.

Thanks!
--Jacob

Attachments:

since-v8.diff.txttext/plain; charset=US-ASCII; name=since-v8.diff.txtDownload
1:  5f87f11b18e = 1:  5f87f11b18e Add minor-version counterpart to (PG_)MAJORVERSION
2:  4c9cc7f69af ! 2:  9e37fd7c217 oauth: Move the builtin flow into a separate module
    @@ configure.ac: if test "$PORTNAME" = "win32" ; then
     +if test "$with_libcurl" = yes ; then
     +  # Error out early if this platform can't support libpq-oauth.
     +  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
    -+    AC_MSG_ERROR([client OAuth is not supported on this platform])
    ++    AC_MSG_ERROR([client-side OAuth is not supported on this platform])
     +  fi
     +fi
     +
    @@ meson.build: if not libcurlopt.disabled()
     +  if oauth_flow_supported
     +    cdata.set('USE_LIBCURL', 1)
     +  elif libcurlopt.enabled()
    -+    error('client OAuth is not supported on this platform')
    ++    error('client-side OAuth is not supported on this platform')
     +  endif
     +
      else
    @@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
      	 * also the documentation for struct async_ctx.
      	 */
      	if (actx->errctx)
    - 	{
    +-	{
     -		appendPQExpBufferStr(&conn->errorMessage,
     -							 libpq_gettext(actx->errctx));
     -		appendPQExpBufferStr(&conn->errorMessage, ": ");
    -+		appendPQExpBufferStr(errbuf, libpq_gettext(actx->errctx));
    -+		appendPQExpBufferStr(errbuf, ": ");
    - 	}
    +-	}
    ++		appendPQExpBuffer(errbuf, "%s: ", libpq_gettext(actx->errctx));
      
      	if (PQExpBufferDataBroken(actx->errbuf))
     -		appendPQExpBufferStr(&conn->errorMessage,
    @@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
      
      	if (actx->curl_err[0])
      	{
    - 		size_t		len;
    - 
    +-		size_t		len;
    +-
     -		appendPQExpBuffer(&conn->errorMessage,
     -						  " (libcurl: %s)", actx->curl_err);
     +		appendPQExpBuffer(errbuf, " (libcurl: %s)", actx->curl_err);
    @@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
      		/* Sometimes libcurl adds a newline to the error buffer. :( */
     -		len = conn->errorMessage.len;
     -		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
    -+		len = errbuf->len;
    -+		if (len >= 2 && errbuf->data[len - 2] == '\n')
    ++		if (errbuf->len >= 2 && errbuf->data[errbuf->len - 2] == '\n')
      		{
     -			conn->errorMessage.data[len - 2] = ')';
     -			conn->errorMessage.data[len - 1] = '\0';
     -			conn->errorMessage.len--;
    -+			errbuf->data[len - 2] = ')';
    -+			errbuf->data[len - 1] = '\0';
    ++			errbuf->data[errbuf->len - 2] = ')';
    ++			errbuf->data[errbuf->len - 1] = '\0';
     +			errbuf->len--;
      		}
      	}
v9-0001-Add-minor-version-counterpart-to-PG_-MAJORVERSION.patchapplication/octet-stream; name=v9-0001-Add-minor-version-counterpart-to-PG_-MAJORVERSION.patchDownload
From 5f87f11b18ea83615c342c832caace49bf7e3897 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Mon, 21 Apr 2025 13:43:08 -0700
Subject: [PATCH v9 1/2] Add minor-version counterpart to (PG_)MAJORVERSION

An upcoming commit will name a library, libpq-oauth, using the major and
minor versions. Make the minor version accessible from the Makefiles and
as a string constant in the code.
---
 configure                  | 7 +++++++
 configure.ac               | 2 ++
 meson.build                | 1 +
 src/Makefile.global.in     | 1 +
 src/include/pg_config.h.in | 3 +++
 src/makefiles/meson.build  | 1 +
 6 files changed, 15 insertions(+)

diff --git a/configure b/configure
index 0936010718d..3d783793dfa 100755
--- a/configure
+++ b/configure
@@ -792,6 +792,7 @@ build_os
 build_vendor
 build_cpu
 build
+PG_MINORVERSION
 PG_MAJORVERSION
 target_alias
 host_alias
@@ -2877,6 +2878,12 @@ cat >>confdefs.h <<_ACEOF
 _ACEOF
 
 
+
+cat >>confdefs.h <<_ACEOF
+#define PG_MINORVERSION "$PG_MINORVERSION"
+_ACEOF
+
+
 cat >>confdefs.h <<_ACEOF
 #define PG_MINORVERSION_NUM $PG_MINORVERSION
 _ACEOF
diff --git a/configure.ac b/configure.ac
index 2a78cddd825..1cb3a0ff042 100644
--- a/configure.ac
+++ b/configure.ac
@@ -35,6 +35,8 @@ test -n "$PG_MINORVERSION" || PG_MINORVERSION=0
 AC_SUBST(PG_MAJORVERSION)
 AC_DEFINE_UNQUOTED(PG_MAJORVERSION, "$PG_MAJORVERSION", [PostgreSQL major version as a string])
 AC_DEFINE_UNQUOTED(PG_MAJORVERSION_NUM, $PG_MAJORVERSION, [PostgreSQL major version number])
+AC_SUBST(PG_MINORVERSION)
+AC_DEFINE_UNQUOTED(PG_MINORVERSION, "$PG_MINORVERSION", [PostgreSQL minor version as a string])
 AC_DEFINE_UNQUOTED(PG_MINORVERSION_NUM, $PG_MINORVERSION, [PostgreSQL minor version number])
 
 PGAC_ARG_REQ(with, extra-version, [STRING], [append STRING to version],
diff --git a/meson.build b/meson.build
index a1516e54529..18423a7c13e 100644
--- a/meson.build
+++ b/meson.build
@@ -148,6 +148,7 @@ pg_version += get_option('extra_version')
 cdata.set_quoted('PG_VERSION', pg_version)
 cdata.set_quoted('PG_MAJORVERSION', pg_version_major.to_string())
 cdata.set('PG_MAJORVERSION_NUM', pg_version_major)
+cdata.set_quoted('PG_MINORVERSION', pg_version_minor.to_string())
 cdata.set('PG_MINORVERSION_NUM', pg_version_minor)
 cdata.set('PG_VERSION_NUM', pg_version_num)
 # PG_VERSION_STR is built later, it depends on compiler test results
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 6722fbdf365..54b4a07712e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -40,6 +40,7 @@ maintainer-clean: distclean
 # PostgreSQL version number
 VERSION = @PACKAGE_VERSION@
 MAJORVERSION = @PG_MAJORVERSION@
+MINORVERSION = @PG_MINORVERSION@
 VERSION_NUM = @PG_VERSION_NUM@
 
 PACKAGE_URL = @PACKAGE_URL@
diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in
index c3cc9fa856d..4fe37d228c5 100644
--- a/src/include/pg_config.h.in
+++ b/src/include/pg_config.h.in
@@ -602,6 +602,9 @@
 /* PostgreSQL major version number */
 #undef PG_MAJORVERSION_NUM
 
+/* PostgreSQL minor version as a string */
+#undef PG_MINORVERSION
+
 /* PostgreSQL minor version number */
 #undef PG_MINORVERSION_NUM
 
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 55da678ec27..e3adb5d8dc4 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -36,6 +36,7 @@ pgxs_kv = {
   'PACKAGE_URL': pg_url,
   'PACKAGE_VERSION': pg_version,
   'PG_MAJORVERSION': pg_version_major,
+  'PG_MINORVERSION': pg_version_minor,
   'PG_VERSION_NUM': pg_version_num,
   'configure_input': 'meson',
 
-- 
2.34.1

v9-0002-oauth-Move-the-builtin-flow-into-a-separate-modul.patchapplication/octet-stream; name=v9-0002-oauth-Move-the-builtin-flow-into-a-separate-modul.patchDownload
From 9e37fd7c2171d8fe1880c0b00625b7ca6a833062 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v9 2/2] oauth: Move the builtin flow into a separate module

The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.

When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.

For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.

The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid exporting
these internals to all libpq clients forever, these dependencies are
instead injected from the libpq side via an initialization function.
This also lets libpq communicate the offset of conn->errorMessage to
libpq-oauth, so that we can function without crashing if the module on
the search path came from a different build of Postgres.

This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>-<minor>.so to avoid mixing
and matching across Postgres versions, in case internal struct order
needs to change. (Future improvements may promote this "OAuth flow
plugin" to a first-class concept, at which point we would need a public
API to replace this anyway.)

Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.

Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many many people reviewed the
design and implementation; thank you!

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 ++++-
 configure.ac                                  |  26 ++-
 doc/src/sgml/installation.sgml                |   8 +
 doc/src/sgml/libpq.sgml                       |  30 ++-
 meson.build                                   |  32 ++-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 ++
 src/interfaces/libpq-oauth/Makefile           |  83 +++++++
 src/interfaces/libpq-oauth/README             |  43 ++++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  45 ++++
 .../oauth-curl.c}                             | 101 ++++-----
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 +++
 src/interfaces/libpq-oauth/oauth-utils.c      | 202 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  38 ++++
 src/interfaces/libpq/Makefile                 |  36 +++-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 197 ++++++++++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |   5 +-
 src/interfaces/libpq/meson.build              |  25 ++-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 src/test/modules/oauth_validator/meson.build  |   2 +-
 .../modules/oauth_validator/t/002_client.pl   |   2 +-
 25 files changed, 884 insertions(+), 116 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (97%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 3d783793dfa..eedd18e6d9a 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -711,6 +712,8 @@ with_libxml
 LIBNUMA_LIBS
 LIBNUMA_CFLAGS
 with_libnuma
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9060,19 +9063,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12711,9 +12722,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12761,17 +12769,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12875,6 +12892,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14523,6 +14544,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index 1cb3a0ff042..d1c8dd536cd 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1035,19 +1035,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1356,9 +1364,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1656,6 +1661,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client-side OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 077bcc20759..d928b103d22 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -313,6 +313,14 @@
      </para>
     </listitem>
 
+    <listitem>
+     <para>
+      You need <productname>Curl</productname> to build an optional module
+      which implements the <link linkend="libpq-oauth">OAuth Device
+      Authorization flow</link> for client applications.
+     </para>
+    </listitem>
+
     <listitem>
      <para>
       You need <productname>LZ4</productname>, if you want to support
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 3be66789ba7..cd748902f4d 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10226,15 +10226,20 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   libpq implements support for the OAuth v2 Device Authorization client flow,
+   <application>libpq</application> implements support for the OAuth v2 Device Authorization client flow,
    documented in
    <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
-   which it will attempt to use by default if the server
+   as an optional module. See the <link linkend="configure-option-with-libcurl">
+   installation documentation</link> for information on how to enable support
+   for Device Authorization as a builtin flow.
+  </para>
+  <para>
+   When support is enabled and the optional module installed, <application>libpq</application>
+   will use the builtin flow by default if the server
    <link linkend="auth-oauth">requests a bearer token</link> during
    authentication. This flow can be utilized even if the system running the
    client application does not have a usable web browser, for example when
-   running a client via <application>SSH</application>. Client applications may implement their own flows
-   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+   running a client via <acronym>SSH</acronym>.
   </para>
   <para>
    The builtin flow will, by default, print a URL to visit and a user code to
@@ -10251,6 +10256,11 @@ Visit https://example.com/device and enter the code: ABCD-EFGH
    they match expectations, before continuing. Permissions should not be given
    to untrusted third parties.
   </para>
+  <para>
+   Client applications may implement their own flows to customize interaction
+   and integration with applications. See <xref linkend="libpq-oauth-authdata-hooks"/>
+   for more information on how to add a custom flow to <application>libpq</application>.
+  </para>
   <para>
    For an OAuth client flow to be usable, the connection string must at minimum
    contain <xref linkend="libpq-connect-oauth-issuer"/> and
@@ -10366,7 +10376,9 @@ typedef struct _PGpromptOAuthDevice
 </synopsis>
         </para>
         <para>
-         The OAuth Device Authorization flow included in <application>libpq</application>
+         The OAuth Device Authorization flow which
+         <link linkend="configure-option-with-libcurl">can be included</link>
+         in <application>libpq</application>
          requires the end user to visit a URL with a browser, then enter a code
          which permits <application>libpq</application> to connect to the server
          on their behalf. The default prompt simply prints the
@@ -10378,7 +10390,8 @@ typedef struct _PGpromptOAuthDevice
          This callback is only invoked during the builtin device
          authorization flow. If the application installs a
          <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
-         flow</link>, this authdata type will not be used.
+         flow</link>, or <application>libpq</application> was not built with
+         support for the builtin flow, this authdata type will not be used.
         </para>
         <para>
          If a non-NULL <structfield>verification_uri_complete</structfield> is
@@ -10400,8 +10413,9 @@ typedef struct _PGpromptOAuthDevice
        </term>
        <listitem>
         <para>
-         Replaces the entire OAuth flow with a custom implementation. The hook
-         should either directly return a Bearer token for the current
+         Adds a custom implementation of a flow, replacing the builtin flow
+         if one is <link linkend="configure-option-with-libcurl">installed</link>.
+         The hook should either directly return a Bearer token for the current
          user/issuer/scope combination, if one is available without blocking, or
          else set up an asynchronous callback to retrieve one.
         </para>
diff --git a/meson.build b/meson.build
index 18423a7c13e..2798922c6f0 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -861,13 +862,13 @@ endif
 ###############################################################
 
 libcurlopt = get_option('libcurl')
+oauth_flow_supported = false
+
 if not libcurlopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
   if libcurl.found()
-    cdata.set('USE_LIBCURL', 1)
-
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
@@ -939,6 +940,22 @@ if not libcurlopt.disabled()
     endif
   endif
 
+  # Check that the current platform supports our builtin flow. This requires
+  # libcurl and either epoll or kqueue.
+  oauth_flow_supported = (
+    libcurl.found()
+    and (cc.check_header('sys/event.h', required: false,
+                         args: test_c_args, include_directories: postgres_inc)
+         or cc.check_header('sys/epoll.h', required: false,
+                            args: test_c_args, include_directories: postgres_inc))
+  )
+
+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client-side OAuth is not supported on this platform')
+  endif
+
 else
   libcurl = not_found_dep
 endif
@@ -3273,17 +3290,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 54b4a07712e..f4caece04df 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -348,6 +348,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..98acaff1a3b
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,83 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)-$(MINORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries. The
+# staticlib doesn't need version information in its name.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := libpq-oauth.a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES)
+
+OBJS_STATIC = oauth-curl.o
+
+# The shared library needs additional glue symbols.
+OBJS_SHLIB = \
+	oauth-curl_shlib.o \
+	oauth-utils.o \
+
+oauth-utils.o: override CPPFLAGS += -DUSE_DYNAMIC_OAUTH
+oauth-curl_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) $(OBJS_STATIC) $(OBJS_SHLIB)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..fdc1320d152
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,43 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, and the server asks for it, and
+a libpq client has not installed its own custom OAuth flow, libpq will attempt
+to delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+across releases; the name of the module (libpq-oauth-MAJOR-MINOR) reflects this.
+The module exports the following symbols:
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run.
+
+It also relies on libpq to expose conn->errorMessage, via the errmsg_impl. This
+is done to decouple the module ABI from the offset of errorMessage, which can
+change depending on configure-time options. This way we can safely
+search the standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache)
+for an implementation module to use, even if that module wasn't compiled at the
+same time as libpq.
+
+= Static Build =
+
+The static library libpq.a does not perform any dynamic loading. If the builtin
+flow is enabled, the application is expected to link against libpq-oauth.a
+directly to provide the necessary symbols.
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..d97f893178a
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,45 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not oauth_flow_supported
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+libpq_oauth_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+libpq_oauth_st = static_library('libpq-oauth',
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@-@1@'.format(pg_version_major, pg_version_minor)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_args: libpq_so_c_args,
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 97%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index c195e00cd28..7b38395ec5f 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,25 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+#ifdef USE_DYNAMIC_OAUTH
+#include "oauth-utils.h"
+#endif
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -1110,7 +1115,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1139,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1162,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1177,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1233,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1314,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1335,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1364,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1419,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1432,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1452,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1464,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2487,8 +2484,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
@@ -2635,6 +2633,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 {
 	fe_oauth_state *state = conn->sasl_state;
 	struct async_ctx *actx;
+	PQExpBuffer errbuf;
 
 	if (!initialize_curl(conn))
 		return PGRES_POLLING_FAILED;
@@ -2825,41 +2824,43 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 
 error_return:
 
+	/*
+	 * For the dynamic module build, we can't safely rely on the offset of
+	 * conn->errorMessage, since it depends on build options like USE_SSL et
+	 * al. libpq gives us a translator function instead.
+	 */
+#ifdef USE_DYNAMIC_OAUTH
+	errbuf = conn_errorMessage(conn);
+#else
+	errbuf = &conn->errorMessage;
+#endif
+
 	/*
 	 * Assemble the three parts of our error: context, body, and detail. See
 	 * also the documentation for struct async_ctx.
 	 */
 	if (actx->errctx)
-	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext(actx->errctx));
-		appendPQExpBufferStr(&conn->errorMessage, ": ");
-	}
+		appendPQExpBuffer(errbuf, "%s: ", libpq_gettext(actx->errctx));
 
 	if (PQExpBufferDataBroken(actx->errbuf))
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext("out of memory"));
+		appendPQExpBufferStr(errbuf, libpq_gettext("out of memory"));
 	else
-		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+		appendPQExpBufferStr(errbuf, actx->errbuf.data);
 
 	if (actx->curl_err[0])
 	{
-		size_t		len;
-
-		appendPQExpBuffer(&conn->errorMessage,
-						  " (libcurl: %s)", actx->curl_err);
+		appendPQExpBuffer(errbuf, " (libcurl: %s)", actx->curl_err);
 
 		/* Sometimes libcurl adds a newline to the error buffer. :( */
-		len = conn->errorMessage.len;
-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		if (errbuf->len >= 2 && errbuf->data[errbuf->len - 2] == '\n')
 		{
-			conn->errorMessage.data[len - 2] = ')';
-			conn->errorMessage.data[len - 1] = '\0';
-			conn->errorMessage.len--;
+			errbuf->data[errbuf->len - 2] = ')';
+			errbuf->data[errbuf->len - 1] = '\0';
+			errbuf->len--;
 		}
 	}
 
-	appendPQExpBufferChar(&conn->errorMessage, '\n');
+	appendPQExpBufferChar(errbuf, '\n');
 
 	return PGRES_POLLING_FAILED;
 }
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..1f85a6b0479
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,202 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "libpq-int.h"
+#include "oauth-utils.h"
+
+#ifndef USE_DYNAMIC_OAUTH
+#error oauth-utils.c is not supported in static builds
+#endif
+
+static libpq_gettext_func libpq_gettext_impl;
+
+pgthreadlock_t pg_g_threadlock;
+conn_errorMessage_func conn_errorMessage;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * - conn->errorMessage: holds translated errors for the connection. This is
+ *   handled through a translation shim, which avoids either depending on the
+ *   offset of the errorMessage in PGconn, or needing to export the variadic
+ *   libpq_append_conn_error().
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build didn't enable NLS but the libpq-oauth
+		 * build did. That's an odd mismatch, but we can handle it.
+		 *
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..e2a9d01237d
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,38 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl);
+
+/* Callback to safely obtain conn->errorMessage from a PGconn. */
+extern conn_errorMessage_func conn_errorMessage;
+
+/* Duplicated APIs, copied from libpq. */
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..c6fe5fec7f6 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,7 +31,6 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
-	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -64,9 +63,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
+# The OAuth implementation differs depending on the type of library being built.
+OBJS_STATIC = fe-auth-oauth.o
+
+fe-auth-oauth_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+OBJS_SHLIB = fe-auth-oauth_shlib.o
 
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
@@ -86,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -101,12 +102,26 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
+ifeq ($(with_libcurl),yes)
+# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+# libpq-oauth needs libcurl. Put both into *.private.
+PKG_CONFIG_REQUIRES_PRIVATE += libcurl
+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth
+endif
+
 all: all-lib libpq-refs-stamp
 
 # Shared library stuff
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +130,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +137,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -138,6 +151,11 @@ fe-misc.o: fe-misc.c $(top_builddir)/src/port/pg_config_paths.h
 $(top_builddir)/src/port/pg_config_paths.h:
 	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
 
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
 install: all installdirs install-lib
 	$(INSTALL_DATA) $(srcdir)/libpq-fe.h '$(DESTDIR)$(includedir)'
 	$(INSTALL_DATA) $(srcdir)/libpq-events.h '$(DESTDIR)$(includedir)'
@@ -171,6 +189,6 @@ uninstall: uninstall-lib
 clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
-	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f $(OBJS) $(OBJS_SHLIB) $(OBJS_STATIC) pthread.h libpq-refs-stamp
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index cf1a25e2ccc..ccdd9139cf1 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifdef USE_DYNAMIC_OAUTH
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,186 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+/*-------------
+ * Builtin Flow
+ *
+ * There are three potential implementations of use_builtin_flow:
+ *
+ * 1) If the OAuth client is disabled at configuration time, return false.
+ *    Dependent clients must provide their own flow.
+ * 2) If the OAuth client is enabled and USE_DYNAMIC_OAUTH is defined, dlopen()
+ *    the libpq-oauth plugin and use its implementation.
+ * 3) Otherwise, use flow callbacks that are statically linked into the
+ *    executable.
+ */
+
+#if !defined(USE_LIBCURL)
+
+/*
+ * This configuration doesn't support the builtin flow.
+ */
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#elif defined(USE_DYNAMIC_OAUTH)
+
+/*
+ * Use the builtin flow in the libpq-oauth plugin, which is loaded at runtime.
+ */
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
+
+/*
+ * This shim is injected into libpq-oauth so that it doesn't depend on the
+ * offset of conn->errorMessage.
+ *
+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
+ * libpq, instead.
+ */
+static PQExpBuffer
+conn_errorMessage(PGconn *conn)
+{
+	return &conn->errorMessage;
+}
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	static bool initialized = false;
+	static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
+	int			lockerr;
+
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	/*
+	 * On macOS only, load the module using its absolute install path; the
+	 * standard search behavior is not very helpful for this use case. Unlike
+	 * on other platforms, DYLD_LIBRARY_PATH is used as a fallback even with
+	 * absolute paths (modulo SIP effects), so tests can continue to work.
+	 *
+	 * On the other platforms, load the module using only the basename, to
+	 * rely on the runtime linker's standard search behavior.
+	 */
+	const char *const module_name =
+#if defined(__darwin__)
+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
+#else
+		"libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
+#endif
+
+	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition, it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Past this point, we do not unload the module. It stays in the process
+	 * permanently.
+	 */
+
+	/*
+	 * We need to inject necessary function pointers into the module. This
+	 * only needs to be done once -- even if the pointers are constant,
+	 * assigning them while another thread is executing the flows feels like
+	 * tempting fate.
+	 */
+	if ((lockerr = pthread_mutex_lock(&init_mutex)) != 0)
+	{
+		/* Should not happen... but don't continue if it does. */
+		Assert(false);
+
+		libpq_append_conn_error(conn, "failed to lock mutex (%d)", lockerr);
+		return false;
+	}
+
+	if (!initialized)
+	{
+		init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+			 libpq_gettext,
+#else
+			 NULL,
+#endif
+			 conn_errorMessage);
+
+		initialized = true;
+	}
+
+	pthread_mutex_unlock(&init_mutex);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else
+
+/*
+ * Use the builtin flow in libpq-oauth.a (see libpq-oauth/oauth-curl.h).
+ */
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
+
+#endif							/* USE_LIBCURL */
+
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +977,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
-		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		libpq_append_conn_error(conn, "no OAuth flows are available (try installing the libpq-oauth package)");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..687e664475f 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -33,12 +33,13 @@ typedef struct
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..a74e885b169 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
@@ -50,6 +46,9 @@ export_file = custom_target('libpq.exports',
 libpq_inc = include_directories('.', '../../port')
 libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 
+# The OAuth implementation differs depending on the type of library being built.
+libpq_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
 # Not using both_libraries() here as
 # 1) resource files should only be in the shared library
 # 2) we want the .pc file to include a dependency to {pgport,common}_static for
@@ -70,7 +69,7 @@ libpq_st = static_library('libpq',
 libpq_so = shared_library('libpq',
   libpq_sources + libpq_so_sources,
   include_directories: [libpq_inc, postgres_inc],
-  c_args: libpq_c_args,
+  c_args: libpq_c_args + libpq_so_c_args,
   c_pch: pch_postgres_fe_h,
   version: '5.' + pg_version_major.to_string(),
   soversion: host_system != 'windows' ? '5' : '',
@@ -86,12 +85,26 @@ libpq = declare_dependency(
   include_directories: [include_directories('.')]
 )
 
+private_deps = [
+  frontend_stlib_code,
+  libpq_deps,
+]
+
+if oauth_flow_supported
+  # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+  # libpq-oauth needs libcurl. Put both into *.private.
+  private_deps += [
+    libpq_oauth_deps,
+    '-lpq-oauth',
+  ]
+endif
+
 pkgconfig.generate(
   name: 'libpq',
   description: 'PostgreSQL libpq library',
   url: pg_url,
   libraries: libpq,
-  libraries_private: [frontend_stlib_code, libpq_deps],
+  libraries_private: private_deps,
 )
 
 install_headers(
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index e3adb5d8dc4..48d01a54dc6 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -204,6 +204,8 @@ pgxs_empty = [
   'LIBNUMA_CFLAGS', 'LIBNUMA_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 36d1b26369f..e190f9cf15a 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -78,7 +78,7 @@ tests += {
     ],
     'env': {
       'PYTHON': python.path(),
-      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_libcurl': oauth_flow_supported ? 'yes' : 'no',
       'with_python': 'yes',
     },
   },
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 8dd502f41e1..21d4acc1926 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -110,7 +110,7 @@ if ($ENV{with_libcurl} ne 'yes')
 		"fails without custom hook installed",
 		flags => ["--no-hook"],
 		expected_stderr =>
-		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+		  qr/no OAuth flows are available \(try installing the libpq-oauth package\)/
 	);
 }
 
-- 
2.34.1

#364 Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#360)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

- Per ABI comment upthread, we are back to major-minor versioning for
the shared library (e.g. libpq-oauth-18-0.so). 0001 adds the macros
and makefile variables to make this easy, and 0002 is the bulk of the
change now.

This will cause problems when programs are running while packages are
updated on disk. That program then tries to dlopen 18-0.so when there
is already 18-1.so installed. Relevant when the first oauth connection
is made way after startup.
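
The failure mode can be sketched like this (hypothetical module filenames; on a real system the dlopen() simply fails because the exact "-18-0" file no longer exists after the package upgrade installed "-18-1"):

```c
#include <assert.h>
#include <dlfcn.h>
#include <stdio.h>

/*
 * Hypothetical reproduction: the running client was built to load the
 * "-18-0" module, but a package upgrade already replaced the file on disk
 * with "libpq-oauth-18-1.so". The runtime linker searches for the exact
 * filename and comes up empty.
 */
int
try_load(const char *module_name)
{
	void	   *handle = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);

	if (handle == NULL)
	{
		/* e.g. "cannot open shared object file: No such file or directory" */
		fprintf(stderr, "dlopen failed: %s\n", dlerror());
		return 0;
	}

	dlclose(handle);
	return 1;
}
```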

This is trading one problem for another, but within-a-major ABI
changes should be much rarer than normal minor updates with
applications restarting only later.

Alternatively, there could be a dedicated SONAME for the plugin that
only changes when necessary, but perhaps the simple "18.so" solution
is good enough.

Christoph

#365 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#364)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 23, 2025 at 8:39 AM Christoph Berg <myon@debian.org> wrote:

This will cause problems when programs are running while packages are
updated on disk. That program then tries to dlopen 18-0.so when there
is already 18-1.so installed. Relevant when the first oauth connection
is made way after startup.

Ugh, good point. This hazard applies to the previous suggestion of
pkglibdir, too, but in that case it would have been silent...

This is trading one problem for another, but within-a-major ABI
changes should be much rarer than normal minor updates with
applications restarting only later.

But the consequences are much worse for a silent ABI mismatch. Imagine
if libpq-oauth examines the wrong pointer inside PGconn for a
security-critical check.
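
A minimal sketch of that hazard, using hypothetical struct layouts (not the real PGconn): a field inserted in a backport shifts the members behind it, so a module compiled against the old layout dereferences the new layout at a stale offset and silently reads the wrong pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical layout the module was compiled against. */
struct conn_old
{
	int			status;
	const char *authn_id;		/* used for a security-critical check */
};

/* Hypothetical layout after a backport inserts a new pointer member. */
struct conn_new
{
	int			status;
	char	   *require_auth;	/* newly inserted member */
	const char *authn_id;
};

/*
 * A module compiled against conn_old reads authn_id at the old offset;
 * applied to a conn_new object, that offset now lands on require_auth
 * instead, with no error of any kind at load time.
 */
int
stale_offset_is_wrong(void)
{
	return offsetof(struct conn_old, authn_id) !=
		offsetof(struct conn_new, authn_id);
}
```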

Alternatively, there could be a dedicated SONAME for the plugin that
only changes when necessary, but perhaps the simple "18.so" solution
is good enough.

I don't think SONAME helps us, does it? We're not using it in dlopen().

We could all agree to bump the second number in the filename whenever
there's an internal ABI change. That works from a technical
perspective, but it's hard to test and enforce and... just not forget.
Or, I may still be able to thread the needle with a fuller lookup
table, and remove the dependency on libpq-int.h; it's just not going
to be incredibly pretty. Thinking...

Thanks so much for your continued review!

--Jacob

#366 Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#365)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

But the consequences are much worse for a silent ABI mismatch. Imagine
if libpq-oauth examines the wrong pointer inside PGconn for a
security-critical check.

True.

Alternatively, there could be a dedicated SONAME for the plugin that
only changes when necessary, but perhaps the simple "18.so" solution
is good enough.

I don't think SONAME helps us, does it? We're not using it in dlopen().

That was paraphrasing, with SONAME I meant "library file name that
changes when the ABI changes".

We could all agree to bump the second number in the filename whenever
there's an internal ABI change. That works from a technical
perspective, but it's hard to test and enforce and... just not forget.

It's hopefully not harder than checking ABI compatibility of any other
libpq change, just a different number. If that number is in the
meson.build in the same directory, people should be able to connect
the dots.

Btw, if we have that number, we might as well drop the MAJOR part as
well... apt.pg.o is always shipping the latest libpq5, so major libpq
upgrades while apps are running are going to happen. (But this is just
once a year and much less problematic than minor upgrades and I'm not
going to complain if MAJOR is kept.)

Or, I may still be able to thread the needle with a fuller lookup
table, and remove the dependency on libpq-int.h; it's just not going
to be incredibly pretty. Thinking...

Don't overdesign it...

Christoph

#367 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#366)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 23, 2025 at 9:38 AM Christoph Berg <myon@debian.org> wrote:

We could all agree to bump the second number in the filename whenever
there's an internal ABI change. That works from a technical
perspective, but it's hard to test and enforce and... just not forget.

It's hopefully not harder than checking ABI compatibility of any other
libpq change, just a different number. If that number is in the
meson.build in the same directory, people should be able to connect
the dots.

I think it is harder, simply because no one has to do it today, and
that change would sign them up to do it, forever, adding to the
backport checklist. It's one thing if there's a bunch of committers
who pile into the thread right now saying "yes, that's okay", but I
don't really feel comfortable making that decision for them right this
instant.

If we had robust ABI compatibility checks as part of the farm [1], I
think we could do that. Doesn't feel like an 18 thing, though.

Btw, if we have that number, we might as well drop the MAJOR part as
well... apt.pg.o is always shipping the latest libpq5, so major libpq
upgrades while apps are running are going to happen. (But this is just
once a year and much less problematic than minor upgrades and I'm not
going to complain if MAJOR is kept.)

I don't want to introduce another testing matrix dimension if I can
avoid it. ("I have this bug where libpq.so.5.18 is using
libpq-oauth.so from PG20 and I had no idea it was doing that and the
problem went away when I restarted and...")

And the intent is for this to be temporary until we have a user-facing
API. If this is the solution we go with, I think it'd wise to prepare
for a -19 version of libpq-oauth, but I'm going to try my best to get
custom modules in ASAP. People are going to be annoyed that v1 of the
feature doesn't let them swap the flow for our utilities. Ideally they
only have to deal with that for a single major release.

Also: since the libpq-oauth-18 and libpq-oauth-19 packages can be
installed side-by-side safely, isn't the upgrade hazard significantly
diminished? (If a user uninstalls the previous libpq-oauth version
while they're still running that version of libpq in memory, _and_
they've somehow never used OAuth until right that instant... it's easy
enough for them to undo their mistake while the application is still
running.)

Or, I may still be able to thread the needle with a fuller lookup
table, and remove the dependency on libpq-int.h; it's just not going
to be incredibly pretty. Thinking...

Don't overdesign it...

Oh, I agree... but my "minimal" ABI designs have all had corner cases
so far. I may need to just bite the bullet.

Are there any readers who feel like an internal ABI version for
`struct pg_conn`, bumped during breaking backports, would be
acceptable? (More definitively: are there any readers who would veto
that?) You're still signing up for delayed errors in the long-lived
client case, so it's not a magic bullet, but the breakage is easy to
see and it's not a crash. The client application "just" has to restart
after a libpq upgrade.

Thanks,
--Jacob

[1]: /messages/by-id/B142EE8A-5D38-48B9-A4BB-82D69A854B55@justatheory.com

#368 Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#367)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

Also: since the libpq-oauth-18 and libpq-oauth-19 packages can be
installed side-by-side safely, isn't the upgrade hazard significantly
diminished? (If a user uninstalls the previous libpq-oauth version
while they're still running that version of libpq in memory, _and_
they've somehow never used OAuth until right that instant... it's easy
enough for them to undo their mistake while the application is still
running.)

Uhm, so far the plan was to have one "libpq-oauth" package, not several.
Since shipping a single libpq5.deb package for all PG majors has worked well
for the past decades, I wouldn't want to complicate that now.

Christoph

#369 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#368)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 23, 2025 at 1:13 PM Christoph Berg <myon@debian.org> wrote:

Uhm, so far the plan was to have one "libpq-oauth" package, not several.

I think the system is overconstrained at that point. If you want to
support clients that delay-load the ABI they're compiled against,
_and_ have them continue to work seamlessly after the system has
upgraded the ABI underneath them, without restarting the client... is
there any option other than side-by-side installation?

Since shipping a single libpq5.deb package for all PG majors has worked well
for the past decades, I wouldn't want to complicate that now.

I'm not sure if it's possible to ship a client-side module system
without something getting more complicated, though... I'm trying hard
not to overcomplicate it for you, but I also don't think the
complexity is going to remain the same.

--Jacob

#370 Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#369)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

I think the system is overconstrained at that point. If you want to
support clients that delay-load the ABI they're compiled against,
_and_ have them continue to work seamlessly after the system has
upgraded the ABI underneath them, without restarting the client... is
there any option other than side-by-side installation?

My point is that we should be trying to change the ABI-as-coded-in-the-
filename as rarely as possible. Then side-by-side should not be required.

Christoph

#371 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#370)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Apr 25, 2025 at 2:03 AM Christoph Berg <myon@debian.org> wrote:

My point is that we should be trying to change the ABI-as-coded-in-the-
filename as rarely as possible.

I agree, but I'm also trying to say I can't unilaterally declare
pieces of our internal structs to be covered by an ABI guarantee.
Maybe the rest of the ABI will never change because it'll be perfect,
but I point to the immediately preceding thread as evidence against
the likelihood of perfection on the first try. I'm trying to build in
air bags so we don't have to regret a mistake.

Then side-by-side should not be required.

It's still required _during_ an ABI bump, though, if you don't want
things to break. Right?

--Jacob

#372 Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#367)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 23, 2025 at 10:46 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Are there any readers who feel like an internal ABI version for
`struct pg_conn`, bumped during breaking backports, would be
acceptable? (More definitively: are there any readers who would veto
that?)

To keep things moving: I assume this is unacceptable. So v10 redirects
every access to a PGconn struct member through a shim, similarly to
how conn->errorMessage was translated in v9. This adds plenty of new
boilerplate, but not a whole lot of complexity. To try to keep us
honest, libpq-int.h has been removed from the libpq-oauth includes.

This will now handle in-place minor version upgrades that swap pg_conn
internals around, so I've gone back to -MAJOR versioning alone.
fe_oauth_state is still exported; it now has an ABI warning above it.
(I figure that's easier to draw a line around during backports,
compared to everything in PGconn. We can still break things there
during major version upgrades.)
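
The injected-accessor pattern can be sketched like this (hypothetical names and a toy struct standing in for the real PGconn and callbacks): libpq, which knows the true layout, hands the module function pointers at init time, so the module never compiles in any struct offsets.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Opaque from the module's point of view; no libpq-int.h needed. */
typedef struct PGconn PGconn;

/* Accessor/mutator types injected by libpq (hypothetical names). */
typedef const char *(*conn_oauth_issuer_id_func) (PGconn *conn);
typedef void (*set_conn_oauth_token_func) (PGconn *conn, char *token);

static conn_oauth_issuer_id_func conn_oauth_issuer_id;
static set_conn_oauth_token_func set_conn_oauth_token;

/* Called once by libpq, which knows the real struct layout. */
void
module_init(conn_oauth_issuer_id_func issuer_impl,
			set_conn_oauth_token_func settoken_impl)
{
	conn_oauth_issuer_id = issuer_impl;
	set_conn_oauth_token = settoken_impl;
}

/* ---- toy scaffolding standing in for libpq itself ---- */
struct PGconn
{
	const char *oauth_issuer_id;
	char	   *oauth_token;
};

static const char *
real_issuer_id(PGconn *conn)
{
	return conn->oauth_issuer_id;
}

static void
real_set_token(PGconn *conn, char *token)
{
	conn->oauth_token = token;
}

/* Module-side code touches the conn only through the injected callbacks. */
const char *
demo(void)
{
	struct PGconn conn = {.oauth_issuer_id = "https://issuer.example"};

	module_init(real_issuer_id, real_set_token);
	set_conn_oauth_token(&conn, "opaque-token");
	return conn_oauth_issuer_id(&conn);
}
```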

Thanks,
--Jacob

Attachments:

since-v9.diff.txt
1:  5f87f11b18e < -:  ----------- Add minor-version counterpart to (PG_)MAJORVERSION
2:  9e37fd7c217 ! 1:  e86e93f7ac8 oauth: Move the builtin flow into a separate module
    @@ Commit message
     
         The default flow relies on some libpq internals. Some of these can be
         safely duplicated (such as the SIGPIPE handlers), but others need to be
    -    shared between libpq and libpq-oauth for thread-safety. To avoid exporting
    -    these internals to all libpq clients forever, these dependencies are
    -    instead injected from the libpq side via an initialization function.
    -    This also lets libpq communicate the offset of conn->errorMessage to
    -    libpq-oauth, so that we can function without crashing if the module on
    -    the search path came from a different build of Postgres.
    +    shared between libpq and libpq-oauth for thread-safety. To avoid
    +    exporting these internals to all libpq clients forever, these
    +    dependencies are instead injected from the libpq side via an
    +    initialization function. This also lets libpq communicate the offsets of
    +    PGconn struct members to libpq-oauth, so that we can function without
    +    crashing if the module on the search path came from a different build of
    +    Postgres. (A minor-version upgrade could swap the libpq-oauth module out
    +    from under a long-running libpq client before it does its first load of
    +    the OAuth flow.)
     
         This ABI is considered "private". The module has no SONAME or version
    -    symlinks, and it's named libpq-oauth-<major>-<minor>.so to avoid mixing
    -    and matching across Postgres versions, in case internal struct order
    -    needs to change. (Future improvements may promote this "OAuth flow
    -    plugin" to a first-class concept, at which point we would need a public
    -    API to replace this anyway.)
    +    symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
    +    matching across Postgres versions. (Future improvements may promote this
    +    "OAuth flow plugin" to a first-class concept, at which point we would
    +    need a public API to replace this anyway.)
     
         Additionally, NLS support for error messages in b3f0be788a was
         incomplete, because the new error macros weren't being scanned by
    @@ src/interfaces/libpq-oauth/Makefile (new)
     +
     +# This is an internal module; we don't want an SONAME and therefore do not set
     +# SO_MAJOR_VERSION.
    -+NAME = pq-oauth-$(MAJORVERSION)-$(MINORVERSION)
    ++NAME = pq-oauth-$(MAJORVERSION)
     +
     +# Force the name "libpq-oauth" for both the static and shared libraries. The
     +# staticlib doesn't need version information in its name.
    @@ src/interfaces/libpq-oauth/README (new)
     += Load-Time ABI =
     +
     +This module ABI is an internal implementation detail, so it's subject to change
    -+across releases; the name of the module (libpq-oauth-MAJOR-MINOR) reflects this.
    ++across major releases; the name of the module (libpq-oauth-MAJOR) reflects this.
     +The module exports the following symbols:
     +
     +- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
    @@ src/interfaces/libpq-oauth/README (new)
     +
     +- void libpq_oauth_init(pgthreadlock_t threadlock,
     +						libpq_gettext_func gettext_impl,
    -+						conn_errorMessage_func errmsg_impl);
    ++						conn_errorMessage_func errmsg_impl,
    ++						conn_oauth_client_id_func clientid_impl,
    ++						conn_oauth_client_secret_func clientsecret_impl,
    ++						conn_oauth_discovery_uri_func discoveryuri_impl,
    ++						conn_oauth_issuer_id_func issuerid_impl,
    ++						conn_oauth_scope_func scope_impl,
    ++						conn_sasl_state_func saslstate_impl,
    ++						set_conn_altsock_func setaltsock_impl,
    ++						set_conn_oauth_token_func settoken_impl);
     +
     +At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
     +libpq_gettext(), which must be injected by libpq using this initialization
     +function before the flow is run.
     +
    -+It also relies on libpq to expose conn->errorMessage, via the errmsg_impl. This
    -+is done to decouple the module ABI from the offset of errorMessage, which can
    -+change positions depending on configure-time options. This way we can safely
    -+search the standard dlopen() paths (e.g. RPATH, LD_LIBRARY_PATH, the SO cache)
    -+for an implementation module to use, even if that module wasn't compiled at the
    -+same time as libpq.
    ++It also relies on access to several members of the PGconn struct. Not only can
    ++these change positions across minor versions, but the offsets aren't necessarily
    ++stable within a single minor release (conn->errorMessage, for instance, can
    ++change offsets depending on configure-time options). Therefore the necessary
    ++accessors (named conn_*) and mutators (set_conn_*) are injected here. With this
    ++approach, we can safely search the standard dlopen() paths (e.g. RPATH,
    ++LD_LIBRARY_PATH, the SO cache) for an implementation module to use, even if that
    ++module wasn't compiled at the same time as libpq -- which becomes especially
    ++important during "live upgrade" situations where a running libpq application has
    ++the libpq-oauth module updated out from under it before it's first loaded from
    ++disk.
     +
     += Static Build =
     +
     +The static library libpq.a does not perform any dynamic loading. If the builtin
     +flow is enabled, the application is expected to link against libpq-oauth.a
    -+directly to provide the necessary symbols.
    ++directly to provide the necessary symbols. (libpq.a and libpq-oauth.a must be
    ++part of the same build. Unlike the dynamic module, there are no translation
    ++shims provided.)
     
      ## src/interfaces/libpq-oauth/exports.txt (new) ##
     @@
    @@ src/interfaces/libpq-oauth/meson.build (new)
     +
     +# This is an internal module; we don't want an SONAME and therefore do not set
     +# SO_MAJOR_VERSION.
    -+libpq_oauth_name = 'libpq-oauth-@0@-@1@'.format(pg_version_major, pg_version_minor)
    ++libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
     +
     +libpq_oauth_so = shared_module(libpq_oauth_name,
     +  libpq_oauth_sources + libpq_oauth_so_sources,
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c => src/interfaces/libpq-oauth/oauth-cu
     -#include <unistd.h>
      
      #include "common/jsonapi.h"
    - #include "fe-auth.h"
    +-#include "fe-auth.h"
      #include "fe-auth-oauth.h"
     -#include "libpq-int.h"
      #include "mb/pg_wchar.h"
     +#include "oauth-curl.h"
    ++
     +#ifdef USE_DYNAMIC_OAUTH
    ++
    ++/*
    ++ * The module build is decoupled from libpq-int.h, to try to avoid inadvertent
    ++ * ABI breaks during minor version bumps. Replacements for the missing internals
    ++ * are provided by oauth-utils.
    ++ */
     +#include "oauth-utils.h"
    ++
    ++#else							/* !USE_DYNAMIC_OAUTH */
    ++
    ++/*
    ++ * Static builds may rely on PGconn offsets directly. Keep these aligned with
    ++ * the bank of callbacks in oauth-utils.h.
    ++ */
    ++#include "libpq-int.h"
    ++
    ++#define conn_errorMessage(CONN) (&CONN->errorMessage)
    ++#define conn_oauth_client_id(CONN) (CONN->oauth_client_id)
    ++#define conn_oauth_client_secret(CONN) (CONN->oauth_client_secret)
    ++#define conn_oauth_discovery_uri(CONN) (CONN->oauth_discovery_uri)
    ++#define conn_oauth_issuer_id(CONN) (CONN->oauth_issuer_id)
    ++#define conn_oauth_scope(CONN) (CONN->oauth_scope)
    ++#define conn_sasl_state(CONN) (CONN->sasl_state)
    ++
    ++#define set_conn_altsock(CONN, VAL) do { CONN->altsock = VAL; } while (0)
    ++#define set_conn_oauth_token(CONN, VAL) do { CONN->oauth_token = VAL; } while (0)
    ++
    ++#endif							/* !USE_DYNAMIC_OAUTH */
    ++
    ++/* One final guardrail against accidental inclusion... */
    ++#if defined(USE_DYNAMIC_OAUTH) && defined(LIBPQ_INT_H)
    ++#error do not rely on libpq-int.h in libpq-oauth.so
     +#endif
      
      /*
       * It's generally prudent to set a maximum response size to buffer in memory,
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: free_async_ctx(PGconn *conn, struct async_ctx *actx)
    + void
    + pg_fe_cleanup_oauth_flow(PGconn *conn)
    + {
    +-	fe_oauth_state *state = conn->sasl_state;
    ++	fe_oauth_state *state = conn_sasl_state(conn);
    + 
    + 	if (state->async_ctx)
    + 	{
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_cleanup_oauth_flow(PGconn *conn)
    + 		state->async_ctx = NULL;
    + 	}
    + 
    +-	conn->altsock = PGINVALID_SOCKET;
    ++	set_conn_altsock(conn, PGINVALID_SOCKET);
    + }
    + 
    + /*
     @@ src/interfaces/libpq-oauth/oauth-curl.c: parse_access_token(struct async_ctx *actx, struct token *tok)
      static bool
      setup_multiplexer(struct async_ctx *actx)
    @@ src/interfaces/libpq-oauth/oauth-curl.c: timer_expired(struct async_ctx *actx)
      }
      
      /*
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: static bool
    + check_issuer(struct async_ctx *actx, PGconn *conn)
    + {
    + 	const struct provider *provider = &actx->provider;
    ++	const char *oauth_issuer_id = conn_oauth_issuer_id(conn);
    + 
    +-	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
    ++	Assert(oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
    + 	Assert(provider->issuer);	/* ensured by parse_provider() */
    + 
    + 	/*---
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: check_issuer(struct async_ctx *actx, PGconn *conn)
    + 	 *    sent to. This comparison MUST use simple string comparison as defined
    + 	 *    in Section 6.2.1 of [RFC3986].
    + 	 */
    +-	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
    ++	if (strcmp(oauth_issuer_id, provider->issuer) != 0)
    + 	{
    + 		actx_error(actx,
    + 				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
    +-				   provider->issuer, conn->oauth_issuer_id);
    ++				   provider->issuer, oauth_issuer_id);
    + 		return false;
    + 	}
    + 
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: check_for_device_flow(struct async_ctx *actx)
    + static bool
    + add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
    + {
    ++	const char *oauth_client_id = conn_oauth_client_id(conn);
    ++	const char *oauth_client_secret = conn_oauth_client_secret(conn);
    ++
    + 	bool		success = false;
    + 	char	   *username = NULL;
    + 	char	   *password = NULL;
    + 
    +-	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
    ++	if (oauth_client_secret)	/* Zero-length secrets are permitted! */
    + 	{
    + 		/*----
    + 		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *c
    + 		 * would it be redundant, but some providers in the wild (e.g. Okta)
    + 		 * refuse to accept it.
    + 		 */
    +-		username = urlencode(conn->oauth_client_id);
    +-		password = urlencode(conn->oauth_client_secret);
    ++		username = urlencode(oauth_client_id);
    ++		password = urlencode(oauth_client_secret);
    + 
    + 		if (!username || !password)
    + 		{
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *c
    + 		 * If we're not otherwise authenticating, client_id is REQUIRED in the
    + 		 * request body.
    + 		 */
    +-		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
    ++		build_urlencoded(reqbody, "client_id", oauth_client_id);
    + 
    + 		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
    + 		actx->used_basic_auth = false;
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: cleanup:
    + static bool
    + start_device_authz(struct async_ctx *actx, PGconn *conn)
    + {
    ++	const char *oauth_scope = conn_oauth_scope(conn);
    + 	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
    + 	PQExpBuffer work_buffer = &actx->work_data;
    + 
    +-	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
    ++	Assert(conn_oauth_client_id(conn)); /* ensured by setup_oauth_parameters() */
    + 	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
    + 
    + 	/* Construct our request body. */
    + 	resetPQExpBuffer(work_buffer);
    +-	if (conn->oauth_scope && conn->oauth_scope[0])
    +-		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
    ++	if (oauth_scope && oauth_scope[0])
    ++		build_urlencoded(work_buffer, "scope", oauth_scope);
    + 
    + 	if (!add_client_identification(actx, work_buffer, conn))
    + 		return false;
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: start_token_request(struct async_ctx *actx, PGconn *conn)
    + 	const char *device_code = actx->authz.device_code;
    + 	PQExpBuffer work_buffer = &actx->work_data;
    + 
    +-	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
    ++	Assert(conn_oauth_client_id(conn)); /* ensured by setup_oauth_parameters() */
    + 	Assert(token_uri);			/* ensured by parse_provider() */
    + 	Assert(device_code);		/* ensured by parse_device_authz() */
    + 
     @@ src/interfaces/libpq-oauth/oauth-curl.c: prompt_user(struct async_ctx *actx, PGconn *conn)
      		.verification_uri_complete = actx->authz.verification_uri_complete,
      		.expires_in = actx->authz.expires_in,
    @@ src/interfaces/libpq-oauth/oauth-curl.c: prompt_user(struct async_ctx *actx, PGc
      
      	if (!res)
      	{
    -@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: done:
    + static PostgresPollingStatusType
    + pg_fe_run_oauth_flow_impl(PGconn *conn)
      {
    - 	fe_oauth_state *state = conn->sasl_state;
    +-	fe_oauth_state *state = conn->sasl_state;
    ++	fe_oauth_state *state = conn_sasl_state(conn);
      	struct async_ctx *actx;
    ++	char	   *oauth_token = NULL;
     +	PQExpBuffer errbuf;
      
      	if (!initialize_curl(conn))
      		return PGRES_POLLING_FAILED;
     @@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 	do
    + 	{
    + 		/* By default, the multiplexer is the altsock. Reassign as desired. */
    +-		conn->altsock = actx->mux;
    ++		set_conn_altsock(conn, actx->mux);
    + 
    + 		switch (actx->step)
    + 		{
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 				 */
    + 				if (!timer_expired(actx))
    + 				{
    +-					conn->altsock = actx->timerfd;
    ++					set_conn_altsock(conn, actx->timerfd);
    + 					return PGRES_POLLING_READING;
    + 				}
      
    - error_return:
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 		{
    + 			case OAUTH_STEP_INIT:
    + 				actx->errctx = "failed to fetch OpenID discovery document";
    +-				if (!start_discovery(actx, conn->oauth_discovery_uri))
    ++				if (!start_discovery(actx, conn_oauth_discovery_uri(conn)))
    + 					goto error_return;
      
    -+	/*
    -+	 * For the dynamic module build, we can't safely rely on the offset of
    -+	 * conn->errorMessage, since it depends on build options like USE_SSL et
    -+	 * al. libpq gives us a translator function instead.
    -+	 */
    -+#ifdef USE_DYNAMIC_OAUTH
    + 				actx->step = OAUTH_STEP_DISCOVERY;
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 				break;
    + 
    + 			case OAUTH_STEP_TOKEN_REQUEST:
    +-				if (!handle_token_response(actx, &conn->oauth_token))
    ++				if (!handle_token_response(actx, &oauth_token))
    + 					goto error_return;
    + 
    ++				/*
    ++				 * Hook any oauth_token into the PGconn immediately so that
    ++				 * the allocation isn't lost in case of an error.
    ++				 */
    ++				set_conn_oauth_token(conn, oauth_token);
    ++
    + 				if (!actx->user_prompted)
    + 				{
    + 					/*
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 					actx->user_prompted = true;
    + 				}
    + 
    +-				if (conn->oauth_token)
    ++				if (oauth_token)
    + 					break;		/* done! */
    + 
    + 				/*
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 				 * the client wait directly on the timerfd rather than the
    + 				 * multiplexer.
    + 				 */
    +-				conn->altsock = actx->timerfd;
    ++				set_conn_altsock(conn, actx->timerfd);
    + 
    + 				actx->step = OAUTH_STEP_WAIT_INTERVAL;
    + 				actx->running = 1;
    +@@ src/interfaces/libpq-oauth/oauth-curl.c: pg_fe_run_oauth_flow_impl(PGconn *conn)
    + 		 * point, actx->running will be set. But there are some corner cases
    + 		 * where we can immediately loop back around; see start_request().
    + 		 */
    +-	} while (!conn->oauth_token && !actx->running);
    ++	} while (!oauth_token && !actx->running);
    + 
    + 	/* If we've stored a token, we're done. Otherwise come back later. */
    +-	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
    ++	return oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
    + 
    + error_return:
     +	errbuf = conn_errorMessage(conn);
    -+#else
    -+	errbuf = &conn->errorMessage;
    -+#endif
    -+
    + 
      	/*
      	 * Assemble the three parts of our error: context, body, and detail. See
      	 * also the documentation for struct async_ctx.
    @@ src/interfaces/libpq-oauth/oauth-utils.c (new)
     +
     +#include <signal.h>
     +
    -+#include "libpq-int.h"
     +#include "oauth-utils.h"
     +
     +#ifndef USE_DYNAMIC_OAUTH
     +#error oauth-utils.c is not supported in static builds
     +#endif
     +
    -+static libpq_gettext_func libpq_gettext_impl;
    ++#ifdef LIBPQ_INT_H
    ++#error do not rely on libpq-int.h in libpq-oauth
    ++#endif
    ++
    ++/*
    ++ * Function pointers set by libpq_oauth_init().
    ++ */
     +
     +pgthreadlock_t pg_g_threadlock;
    ++static libpq_gettext_func libpq_gettext_impl;
    ++
     +conn_errorMessage_func conn_errorMessage;
    ++conn_oauth_client_id_func conn_oauth_client_id;
    ++conn_oauth_client_secret_func conn_oauth_client_secret;
    ++conn_oauth_discovery_uri_func conn_oauth_discovery_uri;
    ++conn_oauth_issuer_id_func conn_oauth_issuer_id;
    ++conn_oauth_scope_func conn_oauth_scope;
    ++conn_sasl_state_func conn_sasl_state;
    ++
    ++set_conn_altsock_func set_conn_altsock;
    ++set_conn_oauth_token_func set_conn_oauth_token;
     +
     +/*-
     + * Initializes libpq-oauth by setting necessary callbacks.
    @@ src/interfaces/libpq-oauth/oauth-utils.c (new)
     + *
     + * - libpq_gettext: translates error messages using libpq's message domain
     + *
    -+ * - conn->errorMessage: holds translated errors for the connection. This is
    -+ *   handled through a translation shim, which avoids either depending on the
    -+ *   offset of the errorMessage in PGconn, or needing to export the variadic
    -+ *   libpq_append_conn_error().
    ++ * The implementation also needs access to several members of the PGconn struct,
    ++ * which are not guaranteed to stay in place across minor versions. Accessors
    ++ * (named conn_*) and mutators (named set_conn_*) are injected here.
     + */
     +void
     +libpq_oauth_init(pgthreadlock_t threadlock_impl,
     +				 libpq_gettext_func gettext_impl,
    -+				 conn_errorMessage_func errmsg_impl)
    ++				 conn_errorMessage_func errmsg_impl,
    ++				 conn_oauth_client_id_func clientid_impl,
    ++				 conn_oauth_client_secret_func clientsecret_impl,
    ++				 conn_oauth_discovery_uri_func discoveryuri_impl,
    ++				 conn_oauth_issuer_id_func issuerid_impl,
    ++				 conn_oauth_scope_func scope_impl,
    ++				 conn_sasl_state_func saslstate_impl,
    ++				 set_conn_altsock_func setaltsock_impl,
    ++				 set_conn_oauth_token_func settoken_impl)
     +{
     +	pg_g_threadlock = threadlock_impl;
     +	libpq_gettext_impl = gettext_impl;
     +	conn_errorMessage = errmsg_impl;
    ++	conn_oauth_client_id = clientid_impl;
    ++	conn_oauth_client_secret = clientsecret_impl;
    ++	conn_oauth_discovery_uri = discoveryuri_impl;
    ++	conn_oauth_issuer_id = issuerid_impl;
    ++	conn_oauth_scope = scope_impl;
    ++	conn_sasl_state = saslstate_impl;
    ++	set_conn_altsock = setaltsock_impl;
    ++	set_conn_oauth_token = settoken_impl;
     +}
     +
     +/*
    @@ src/interfaces/libpq-oauth/oauth-utils.h (new)
     +#ifndef OAUTH_UTILS_H
     +#define OAUTH_UTILS_H
     +
    ++#include "fe-auth-oauth.h"
     +#include "libpq-fe.h"
     +#include "pqexpbuffer.h"
     +
    ++/*
    ++ * A bank of callbacks to safely access members of PGconn, which are all passed
    ++ * to libpq_oauth_init() by libpq.
    ++ *
    ++ * Keep these aligned with the definitions in fe-auth-oauth.c as well as the
    ++ * static declarations in oauth-curl.c.
    ++ */
    ++#define DECLARE_GETTER(TYPE, MEMBER) \
    ++	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
    ++	extern conn_ ## MEMBER ## _func conn_ ## MEMBER;
    ++
    ++#define DECLARE_SETTER(TYPE, MEMBER) \
    ++	typedef void (*set_conn_ ## MEMBER ## _func) (PGconn *conn, TYPE val); \
    ++	extern set_conn_ ## MEMBER ## _func set_conn_ ## MEMBER;
    ++
    ++DECLARE_GETTER(PQExpBuffer, errorMessage);
    ++DECLARE_GETTER(char *, oauth_client_id);
    ++DECLARE_GETTER(char *, oauth_client_secret);
    ++DECLARE_GETTER(char *, oauth_discovery_uri);
    ++DECLARE_GETTER(char *, oauth_issuer_id);
    ++DECLARE_GETTER(char *, oauth_scope);
    ++DECLARE_GETTER(fe_oauth_state *, sasl_state);
    ++
    ++DECLARE_SETTER(pgsocket, altsock);
    ++DECLARE_SETTER(char *, oauth_token);
    ++
    ++#undef DECLARE_GETTER
    ++#undef DECLARE_SETTER
    ++
     +typedef char *(*libpq_gettext_func) (const char *msgid);
    -+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
     +
     +/* Initializes libpq-oauth. */
     +extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
     +										 libpq_gettext_func gettext_impl,
    -+										 conn_errorMessage_func errmsg_impl);
    ++										 conn_errorMessage_func errmsg_impl,
    ++										 conn_oauth_client_id_func clientid_impl,
    ++										 conn_oauth_client_secret_func clientsecret_impl,
    ++										 conn_oauth_discovery_uri_func discoveryuri_impl,
    ++										 conn_oauth_issuer_id_func issuerid_impl,
    ++										 conn_oauth_scope_func scope_impl,
    ++										 conn_sasl_state_func saslstate_impl,
    ++										 set_conn_altsock_func setaltsock_impl,
    ++										 set_conn_oauth_token_func settoken_impl);
     +
    -+/* Callback to safely obtain conn->errorMessage from a PGconn. */
    -+extern conn_errorMessage_func conn_errorMessage;
    ++/*
    ++ * Duplicated APIs, copied from libpq (primarily libpq-int.h, which we cannot
    ++ * depend on here).
    ++ */
    ++
    ++typedef enum
    ++{
    ++	PG_BOOL_UNKNOWN = 0,		/* Currently unknown */
    ++	PG_BOOL_YES,				/* Yes (true) */
    ++	PG_BOOL_NO					/* No (false) */
    ++} PGTernaryBool;
     +
    -+/* Duplicated APIs, copied from libpq. */
     +extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
     +extern bool oauth_unsafe_debugging_enabled(void);
     +extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
     +extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
     +
    ++#ifdef ENABLE_NLS
    ++extern char *libpq_gettext(const char *msgid) pg_attribute_format_arg(1);
    ++#else
    ++#define libpq_gettext(x) (x)
    ++#endif
    ++
    ++extern pgthreadlock_t pg_g_threadlock;
    ++
    ++#define pglock_thread()		pg_g_threadlock(true)
    ++#define pgunlock_thread()	pg_g_threadlock(false)
    ++
     +#endif							/* OAUTH_UTILS_H */
     
      ## src/interfaces/libpq/Makefile ##
    @@ src/interfaces/libpq/fe-auth-oauth.c: cleanup_user_oauth_flow(PGconn *conn)
     + */
     +
     +typedef char *(*libpq_gettext_func) (const char *msgid);
    -+typedef PQExpBuffer (*conn_errorMessage_func) (PGconn *conn);
     +
     +/*
    -+ * This shim is injected into libpq-oauth so that it doesn't depend on the
    -+ * offset of conn->errorMessage.
    -+ *
    -+ * TODO: look into exporting libpq_append_conn_error or a comparable API from
    -+ * libpq, instead.
    ++ * Define accessor/mutator shims to inject into libpq-oauth, so that it doesn't
    ++ * depend on the offsets within PGconn. (These have changed during minor version
    ++ * updates in the past.)
     + */
    -+static PQExpBuffer
    -+conn_errorMessage(PGconn *conn)
    -+{
    -+	return &conn->errorMessage;
    -+}
    ++
    ++#define DEFINE_GETTER(TYPE, MEMBER) \
    ++	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
    ++	static TYPE conn_ ## MEMBER(PGconn *conn) { return conn->MEMBER; }
    ++
    ++/* Like DEFINE_GETTER, but returns a pointer to the member. */
    ++#define DEFINE_GETTER_P(TYPE, MEMBER) \
    ++	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
    ++	static TYPE conn_ ## MEMBER(PGconn *conn) { return &conn->MEMBER; }
    ++
    ++#define DEFINE_SETTER(TYPE, MEMBER) \
    ++	typedef void (*set_conn_ ## MEMBER ## _func) (PGconn *conn, TYPE val); \
    ++	static void set_conn_ ## MEMBER(PGconn *conn, TYPE val) { conn->MEMBER = val; }
    ++
    ++DEFINE_GETTER_P(PQExpBuffer, errorMessage);
    ++DEFINE_GETTER(char *, oauth_client_id);
    ++DEFINE_GETTER(char *, oauth_client_secret);
    ++DEFINE_GETTER(char *, oauth_discovery_uri);
    ++DEFINE_GETTER(char *, oauth_issuer_id);
    ++DEFINE_GETTER(char *, oauth_scope);
    ++DEFINE_GETTER(fe_oauth_state *, sasl_state);
    ++
    ++DEFINE_SETTER(pgsocket, altsock);
    ++DEFINE_SETTER(char *, oauth_token);
     +
     +/*
     + * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
    @@ src/interfaces/libpq/fe-auth-oauth.c: cleanup_user_oauth_flow(PGconn *conn)
     +
     +	void		(*init) (pgthreadlock_t threadlock,
     +						 libpq_gettext_func gettext_impl,
    -+						 conn_errorMessage_func errmsg_impl);
    ++						 conn_errorMessage_func errmsg_impl,
    ++						 conn_oauth_client_id_func clientid_impl,
    ++						 conn_oauth_client_secret_func clientsecret_impl,
    ++						 conn_oauth_discovery_uri_func discoveryuri_impl,
    ++						 conn_oauth_issuer_id_func issuerid_impl,
    ++						 conn_oauth_scope_func scope_impl,
    ++						 conn_sasl_state_func saslstate_impl,
    ++						 set_conn_altsock_func setaltsock_impl,
    ++						 set_conn_oauth_token_func settoken_impl);
     +	PostgresPollingStatusType (*flow) (PGconn *conn);
     +	void		(*cleanup) (PGconn *conn);
     +
    @@ src/interfaces/libpq/fe-auth-oauth.c: cleanup_user_oauth_flow(PGconn *conn)
     +	 */
     +	const char *const module_name =
     +#if defined(__darwin__)
    -+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
    ++		LIBDIR "/libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
     +#else
    -+		"libpq-oauth-" PG_MAJORVERSION "-" PG_MINORVERSION DLSUFFIX;
    ++		"libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
     +#endif
     +
     +	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
    @@ src/interfaces/libpq/fe-auth-oauth.c: cleanup_user_oauth_flow(PGconn *conn)
     +#else
     +			 NULL,
     +#endif
    -+			 conn_errorMessage);
    ++			 conn_errorMessage,
    ++			 conn_oauth_client_id,
    ++			 conn_oauth_client_secret,
    ++			 conn_oauth_discovery_uri,
    ++			 conn_oauth_issuer_id,
    ++			 conn_oauth_scope,
    ++			 conn_sasl_state,
    ++			 set_conn_altsock,
    ++			 set_conn_oauth_token);
     +
     +		initialized = true;
     +	}
    @@ src/interfaces/libpq/fe-auth-oauth.c: setup_token_request(PGconn *conn, fe_oauth
      	return true;
     
      ## src/interfaces/libpq/fe-auth-oauth.h ##
    -@@ src/interfaces/libpq/fe-auth-oauth.h: typedef struct
    +@@
    + #ifndef FE_AUTH_OAUTH_H
    + #define FE_AUTH_OAUTH_H
    + 
    ++#include "fe-auth-sasl.h"
    + #include "libpq-fe.h"
    +-#include "libpq-int.h"
    + 
    + 
    + enum fe_oauth_step
    +@@ src/interfaces/libpq/fe-auth-oauth.h: enum fe_oauth_step
    + 	FE_OAUTH_SERVER_ERROR,
    + };
    + 
    ++/*
    ++ * This struct is exported to the libpq-oauth module. If changes are needed
    ++ * during backports to stable branches, please keep ABI compatibility (no
    ++ * changes to existing members, add new members at the end, etc.).
    ++ */
    + typedef struct
    + {
    + 	enum fe_oauth_step step;
      
      	PGconn	   *conn;
      	void	   *async_ctx;
Attachment: v10-0001-oauth-Move-the-builtin-flow-into-a-separate-modu.patch (application/x-patch)
From e86e93f7ac8e0ee746b95d804b8367a6ea4c9d30 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v10] oauth: Move the builtin flow into a separate module

The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.

When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.

For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.

The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid
exporting these internals to all libpq clients forever, these
dependencies are instead injected from the libpq side via an
initialization function. This also lets libpq communicate the offsets of
PGconn struct members to libpq-oauth, so that we can function without
crashing if the module on the search path came from a different build of
Postgres. (A minor-version upgrade could swap the libpq-oauth module out
from under a long-running libpq client before it does its first load of
the OAuth flow.)

This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across Postgres versions. (Future improvements may promote this
"OAuth flow plugin" to a first-class concept, at which point we would
need a public API to replace this anyway.)

Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.

Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many many people reviewed the
design and implementation; thank you!

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 +++-
 configure.ac                                  |  26 +-
 doc/src/sgml/installation.sgml                |   8 +
 doc/src/sgml/libpq.sgml                       |  30 ++-
 meson.build                                   |  32 ++-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 +
 src/interfaces/libpq-oauth/Makefile           |  83 +++++++
 src/interfaces/libpq-oauth/README             |  58 +++++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  45 ++++
 .../oauth-curl.c}                             | 180 ++++++++------
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 ++
 src/interfaces/libpq-oauth/oauth-utils.c      | 233 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  94 +++++++
 src/interfaces/libpq/Makefile                 |  36 ++-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 229 ++++++++++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |  12 +-
 src/interfaces/libpq/meson.build              |  25 +-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 src/test/modules/oauth_validator/meson.build  |   2 +-
 .../modules/oauth_validator/t/002_client.pl   |   2 +-
 25 files changed, 1080 insertions(+), 140 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (94%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0936010718d..a4c4bcb40ea 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -711,6 +712,8 @@ with_libxml
 LIBNUMA_LIBS
 LIBNUMA_CFLAGS
 with_libnuma
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9053,19 +9056,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12704,9 +12715,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12754,17 +12762,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12868,6 +12885,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14516,6 +14537,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index 2a78cddd825..c0471030e90 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1033,19 +1033,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1354,9 +1362,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1654,6 +1659,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client-side OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 077bcc20759..d928b103d22 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -313,6 +313,14 @@
      </para>
     </listitem>
 
+    <listitem>
+     <para>
+      You need <productname>Curl</productname> to build an optional module
+      which implements the <link linkend="libpq-oauth">OAuth Device
+      Authorization flow</link> for client applications.
+     </para>
+    </listitem>
+
     <listitem>
      <para>
       You need <productname>LZ4</productname>, if you want to support
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 8cdd2997d43..695fe958c3e 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10226,15 +10226,20 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   libpq implements support for the OAuth v2 Device Authorization client flow,
+   <application>libpq</application> implements support for the OAuth v2 Device Authorization client flow,
    documented in
    <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
-   which it will attempt to use by default if the server
+   as an optional module. See the <link linkend="configure-option-with-libcurl">
+   installation documentation</link> for information on how to enable support
+   for Device Authorization as a builtin flow.
+  </para>
+  <para>
+   When support is enabled and the optional module installed, <application>libpq</application>
+   will use the builtin flow by default if the server
    <link linkend="auth-oauth">requests a bearer token</link> during
    authentication. This flow can be utilized even if the system running the
    client application does not have a usable web browser, for example when
-   running a client via <application>SSH</application>. Client applications may implement their own flows
-   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+   running a client via <acronym>SSH</acronym>.
   </para>
   <para>
    The builtin flow will, by default, print a URL to visit and a user code to
@@ -10251,6 +10256,11 @@ Visit https://example.com/device and enter the code: ABCD-EFGH
    they match expectations, before continuing. Permissions should not be given
    to untrusted third parties.
   </para>
+  <para>
+   Client applications may implement their own flows to customize interaction
+   and integration with applications. See <xref linkend="libpq-oauth-authdata-hooks"/>
+   for more information on how to add a custom flow to <application>libpq</application>.
+  </para>
   <para>
    For an OAuth client flow to be usable, the connection string must at minimum
    contain <xref linkend="libpq-connect-oauth-issuer"/> and
@@ -10366,7 +10376,9 @@ typedef struct _PGpromptOAuthDevice
 </synopsis>
         </para>
         <para>
-         The OAuth Device Authorization flow included in <application>libpq</application>
+         The OAuth Device Authorization flow which
+         <link linkend="configure-option-with-libcurl">can be included</link>
+         in <application>libpq</application>
          requires the end user to visit a URL with a browser, then enter a code
          which permits <application>libpq</application> to connect to the server
          on their behalf. The default prompt simply prints the
@@ -10378,7 +10390,8 @@ typedef struct _PGpromptOAuthDevice
          This callback is only invoked during the builtin device
          authorization flow. If the application installs a
          <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
-         flow</link>, this authdata type will not be used.
+         flow</link>, or <application>libpq</application> was not built with
+         support for the builtin flow, this authdata type will not be used.
         </para>
         <para>
          If a non-NULL <structfield>verification_uri_complete</structfield> is
@@ -10400,8 +10413,9 @@ typedef struct _PGpromptOAuthDevice
        </term>
        <listitem>
         <para>
-         Replaces the entire OAuth flow with a custom implementation. The hook
-         should either directly return a Bearer token for the current
+         Adds a custom implementation of a flow, replacing the builtin flow if
+         it is <link linkend="configure-option-with-libcurl">installed</link>.
+         The hook should either directly return a Bearer token for the current
          user/issuer/scope combination, if one is available without blocking, or
          else set up an asynchronous callback to retrieve one.
         </para>
diff --git a/meson.build b/meson.build
index a1516e54529..29d46c8ad01 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -860,13 +861,13 @@ endif
 ###############################################################
 
 libcurlopt = get_option('libcurl')
+oauth_flow_supported = false
+
 if not libcurlopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
   if libcurl.found()
-    cdata.set('USE_LIBCURL', 1)
-
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
@@ -938,6 +939,22 @@ if not libcurlopt.disabled()
     endif
   endif
 
+  # Check that the current platform supports our builtin flow. This requires
+  # libcurl and either epoll or kqueue.
+  oauth_flow_supported = (
+    libcurl.found()
+    and (cc.check_header('sys/event.h', required: false,
+                         args: test_c_args, include_directories: postgres_inc)
+         or cc.check_header('sys/epoll.h', required: false,
+                            args: test_c_args, include_directories: postgres_inc))
+  )
+
+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client-side OAuth is not supported on this platform')
+  endif
+
 else
   libcurl = not_found_dep
 endif
@@ -3272,17 +3289,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 6722fbdf365..04952b533de 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -347,6 +347,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..3e4b34142e0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,83 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries. The
+# staticlib doesn't need version information in its name.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := libpq-oauth.a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES)
+
+OBJS_STATIC = oauth-curl.o
+
+# The shared library needs additional glue symbols.
+OBJS_SHLIB = \
+	oauth-curl_shlib.o \
+	oauth-utils.o \
+
+oauth-utils.o: override CPPFLAGS += -DUSE_DYNAMIC_OAUTH
+oauth-curl_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) $(OBJS_STATIC) $(OBJS_SHLIB)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..4579b45c0f9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,58 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)
+
+If a connection string allows the use of OAuth, and the server asks for it, and
+a libpq client has not installed its own custom OAuth flow, libpq will attempt
+to delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module's ABI is an internal implementation detail, so it's subject to change
+across major releases; the name of the module (libpq-oauth-MAJOR) reflects this.
+The module exports the following symbols:
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl,
+						conn_oauth_client_id_func clientid_impl,
+						conn_oauth_client_secret_func clientsecret_impl,
+						conn_oauth_discovery_uri_func discoveryuri_impl,
+						conn_oauth_issuer_id_func issuerid_impl,
+						conn_oauth_scope_func scope_impl,
+						conn_sasl_state_func saslstate_impl,
+						set_conn_altsock_func setaltsock_impl,
+						set_conn_oauth_token_func settoken_impl);
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run.
+
+It also relies on access to several members of the PGconn struct. Not only can
+these change positions across minor versions, but the offsets aren't necessarily
+stable within a single minor release (conn->errorMessage, for instance, can
+change offsets depending on configure-time options). Therefore the necessary
+accessors (named conn_*) and mutators (set_conn_*) are injected here. With this
+approach, we can safely search the standard dlopen() paths (e.g. RPATH,
+LD_LIBRARY_PATH, the SO cache) for an implementation module to use, even if that
+module wasn't compiled at the same time as libpq -- which becomes especially
+important during "live upgrade" situations where a running libpq application has
+the libpq-oauth module updated out from under it before it's first loaded from
+disk.
+
+= Static Build =
+
+The static library libpq.a does not perform any dynamic loading. If the builtin
+flow is enabled, the application is expected to link against libpq-oauth.a
+directly to provide the necessary symbols. (libpq.a and libpq-oauth.a must be
+part of the same build. Unlike the dynamic module, there are no translation
+shims provided.)
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..9e7301a7f63
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,45 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not oauth_flow_supported
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+libpq_oauth_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+libpq_oauth_st = static_library('libpq-oauth',
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_args: libpq_oauth_so_c_args,
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 94%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index c195e00cd28..3acdcc52de8 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,56 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
-#include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+
+#ifdef USE_DYNAMIC_OAUTH
+
+/*
+ * The module build is decoupled from libpq-int.h, to try to avoid inadvertent
+ * ABI breaks during minor version bumps. Replacements for the missing internals
+ * are provided by oauth-utils.
+ */
+#include "oauth-utils.h"
+
+#else							/* !USE_DYNAMIC_OAUTH */
+
+/*
+ * Static builds may rely on PGconn offsets directly. Keep these aligned with
+ * the bank of callbacks in oauth-utils.h.
+ */
+#include "libpq-int.h"
+
+#define conn_errorMessage(CONN) (&CONN->errorMessage)
+#define conn_oauth_client_id(CONN) (CONN->oauth_client_id)
+#define conn_oauth_client_secret(CONN) (CONN->oauth_client_secret)
+#define conn_oauth_discovery_uri(CONN) (CONN->oauth_discovery_uri)
+#define conn_oauth_issuer_id(CONN) (CONN->oauth_issuer_id)
+#define conn_oauth_scope(CONN) (CONN->oauth_scope)
+#define conn_sasl_state(CONN) (CONN->sasl_state)
+
+#define set_conn_altsock(CONN, VAL) do { CONN->altsock = VAL; } while (0)
+#define set_conn_oauth_token(CONN, VAL) do { CONN->oauth_token = VAL; } while (0)
+
+#endif							/* !USE_DYNAMIC_OAUTH */
+
+/* One final guardrail against accidental inclusion... */
+#if defined(USE_DYNAMIC_OAUTH) && defined(LIBPQ_INT_H)
+#error do not rely on libpq-int.h in libpq-oauth.so
+#endif
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -303,7 +339,7 @@ free_async_ctx(PGconn *conn, struct async_ctx *actx)
 void
 pg_fe_cleanup_oauth_flow(PGconn *conn)
 {
-	fe_oauth_state *state = conn->sasl_state;
+	fe_oauth_state *state = conn_sasl_state(conn);
 
 	if (state->async_ctx)
 	{
@@ -311,7 +347,7 @@ pg_fe_cleanup_oauth_flow(PGconn *conn)
 		state->async_ctx = NULL;
 	}
 
-	conn->altsock = PGINVALID_SOCKET;
+	set_conn_altsock(conn, PGINVALID_SOCKET);
 }
 
 /*
@@ -1110,7 +1146,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1170,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1193,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1208,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1264,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1345,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1366,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1395,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1450,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1463,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1483,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1495,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2070,8 +2098,9 @@ static bool
 check_issuer(struct async_ctx *actx, PGconn *conn)
 {
 	const struct provider *provider = &actx->provider;
+	const char *oauth_issuer_id = conn_oauth_issuer_id(conn);
 
-	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
 	Assert(provider->issuer);	/* ensured by parse_provider() */
 
 	/*---
@@ -2091,11 +2120,11 @@ check_issuer(struct async_ctx *actx, PGconn *conn)
 	 *    sent to. This comparison MUST use simple string comparison as defined
 	 *    in Section 6.2.1 of [RFC3986].
 	 */
-	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	if (strcmp(oauth_issuer_id, provider->issuer) != 0)
 	{
 		actx_error(actx,
 				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
-				   provider->issuer, conn->oauth_issuer_id);
+				   provider->issuer, oauth_issuer_id);
 		return false;
 	}
 
@@ -2172,11 +2201,14 @@ check_for_device_flow(struct async_ctx *actx)
 static bool
 add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
 {
+	const char *oauth_client_id = conn_oauth_client_id(conn);
+	const char *oauth_client_secret = conn_oauth_client_secret(conn);
+
 	bool		success = false;
 	char	   *username = NULL;
 	char	   *password = NULL;
 
-	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	if (oauth_client_secret)	/* Zero-length secrets are permitted! */
 	{
 		/*----
 		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
@@ -2204,8 +2236,8 @@ add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *c
 		 * would it be redundant, but some providers in the wild (e.g. Okta)
 		 * refuse to accept it.
 		 */
-		username = urlencode(conn->oauth_client_id);
-		password = urlencode(conn->oauth_client_secret);
+		username = urlencode(oauth_client_id);
+		password = urlencode(oauth_client_secret);
 
 		if (!username || !password)
 		{
@@ -2225,7 +2257,7 @@ add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *c
 		 * If we're not otherwise authenticating, client_id is REQUIRED in the
 		 * request body.
 		 */
-		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+		build_urlencoded(reqbody, "client_id", oauth_client_id);
 
 		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
 		actx->used_basic_auth = false;
@@ -2253,16 +2285,17 @@ cleanup:
 static bool
 start_device_authz(struct async_ctx *actx, PGconn *conn)
 {
+	const char *oauth_scope = conn_oauth_scope(conn);
 	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
 	PQExpBuffer work_buffer = &actx->work_data;
 
-	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(conn_oauth_client_id(conn)); /* ensured by setup_oauth_parameters() */
 	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
 
 	/* Construct our request body. */
 	resetPQExpBuffer(work_buffer);
-	if (conn->oauth_scope && conn->oauth_scope[0])
-		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+	if (oauth_scope && oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", oauth_scope);
 
 	if (!add_client_identification(actx, work_buffer, conn))
 		return false;
@@ -2344,7 +2377,7 @@ start_token_request(struct async_ctx *actx, PGconn *conn)
 	const char *device_code = actx->authz.device_code;
 	PQExpBuffer work_buffer = &actx->work_data;
 
-	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(conn_oauth_client_id(conn)); /* ensured by setup_oauth_parameters() */
 	Assert(token_uri);			/* ensured by parse_provider() */
 	Assert(device_code);		/* ensured by parse_device_authz() */
 
@@ -2487,8 +2520,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
@@ -2633,8 +2667,10 @@ done:
 static PostgresPollingStatusType
 pg_fe_run_oauth_flow_impl(PGconn *conn)
 {
-	fe_oauth_state *state = conn->sasl_state;
+	fe_oauth_state *state = conn_sasl_state(conn);
 	struct async_ctx *actx;
+	char	   *oauth_token = NULL;
+	PQExpBuffer errbuf;
 
 	if (!initialize_curl(conn))
 		return PGRES_POLLING_FAILED;
@@ -2676,7 +2712,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 	do
 	{
 		/* By default, the multiplexer is the altsock. Reassign as desired. */
-		conn->altsock = actx->mux;
+		set_conn_altsock(conn, actx->mux);
 
 		switch (actx->step)
 		{
@@ -2712,7 +2748,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				 */
 				if (!timer_expired(actx))
 				{
-					conn->altsock = actx->timerfd;
+					set_conn_altsock(conn, actx->timerfd);
 					return PGRES_POLLING_READING;
 				}
 
@@ -2732,7 +2768,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		{
 			case OAUTH_STEP_INIT:
 				actx->errctx = "failed to fetch OpenID discovery document";
-				if (!start_discovery(actx, conn->oauth_discovery_uri))
+				if (!start_discovery(actx, conn_oauth_discovery_uri(conn)))
 					goto error_return;
 
 				actx->step = OAUTH_STEP_DISCOVERY;
@@ -2768,9 +2804,15 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				break;
 
 			case OAUTH_STEP_TOKEN_REQUEST:
-				if (!handle_token_response(actx, &conn->oauth_token))
+				if (!handle_token_response(actx, &oauth_token))
 					goto error_return;
 
+				/*
+				 * Hook any oauth_token into the PGconn immediately so that
+				 * the allocation isn't lost in case of an error.
+				 */
+				set_conn_oauth_token(conn, oauth_token);
+
 				if (!actx->user_prompted)
 				{
 					/*
@@ -2783,7 +2825,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 					actx->user_prompted = true;
 				}
 
-				if (conn->oauth_token)
+				if (oauth_token)
 					break;		/* done! */
 
 				/*
@@ -2798,7 +2840,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				 * the client wait directly on the timerfd rather than the
 				 * multiplexer.
 				 */
-				conn->altsock = actx->timerfd;
+				set_conn_altsock(conn, actx->timerfd);
 
 				actx->step = OAUTH_STEP_WAIT_INTERVAL;
 				actx->running = 1;
@@ -2818,48 +2860,40 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		 * point, actx->running will be set. But there are some corner cases
 		 * where we can immediately loop back around; see start_request().
 		 */
-	} while (!conn->oauth_token && !actx->running);
+	} while (!oauth_token && !actx->running);
 
 	/* If we've stored a token, we're done. Otherwise come back later. */
-	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+	return oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
 
 error_return:
+	errbuf = conn_errorMessage(conn);
 
 	/*
 	 * Assemble the three parts of our error: context, body, and detail. See
 	 * also the documentation for struct async_ctx.
 	 */
 	if (actx->errctx)
-	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext(actx->errctx));
-		appendPQExpBufferStr(&conn->errorMessage, ": ");
-	}
+		appendPQExpBuffer(errbuf, "%s: ", libpq_gettext(actx->errctx));
 
 	if (PQExpBufferDataBroken(actx->errbuf))
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext("out of memory"));
+		appendPQExpBufferStr(errbuf, libpq_gettext("out of memory"));
 	else
-		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+		appendPQExpBufferStr(errbuf, actx->errbuf.data);
 
 	if (actx->curl_err[0])
 	{
-		size_t		len;
-
-		appendPQExpBuffer(&conn->errorMessage,
-						  " (libcurl: %s)", actx->curl_err);
+		appendPQExpBuffer(errbuf, " (libcurl: %s)", actx->curl_err);
 
 		/* Sometimes libcurl adds a newline to the error buffer. :( */
-		len = conn->errorMessage.len;
-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		if (errbuf->len >= 2 && errbuf->data[errbuf->len - 2] == '\n')
 		{
-			conn->errorMessage.data[len - 2] = ')';
-			conn->errorMessage.data[len - 1] = '\0';
-			conn->errorMessage.len--;
+			errbuf->data[errbuf->len - 2] = ')';
+			errbuf->data[errbuf->len - 1] = '\0';
+			errbuf->len--;
 		}
 	}
 
-	appendPQExpBufferChar(&conn->errorMessage, '\n');
+	appendPQExpBufferChar(errbuf, '\n');
 
 	return PGRES_POLLING_FAILED;
 }
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..57d543ac06f
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,233 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "oauth-utils.h"
+
+#ifndef USE_DYNAMIC_OAUTH
+#error oauth-utils.c is not supported in static builds
+#endif
+
+#ifdef LIBPQ_INT_H
+#error do not rely on libpq-int.h in libpq-oauth
+#endif
+
+/*
+ * Function pointers set by libpq_oauth_init().
+ */
+
+pgthreadlock_t pg_g_threadlock;
+static libpq_gettext_func libpq_gettext_impl;
+
+conn_errorMessage_func conn_errorMessage;
+conn_oauth_client_id_func conn_oauth_client_id;
+conn_oauth_client_secret_func conn_oauth_client_secret;
+conn_oauth_discovery_uri_func conn_oauth_discovery_uri;
+conn_oauth_issuer_id_func conn_oauth_issuer_id;
+conn_oauth_scope_func conn_oauth_scope;
+conn_sasl_state_func conn_sasl_state;
+
+set_conn_altsock_func set_conn_altsock;
+set_conn_oauth_token_func set_conn_oauth_token;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * The implementation also needs access to several members of the PGconn struct,
+ * which are not guaranteed to stay in place across minor versions. Accessors
+ * (named conn_*) and mutators (named set_conn_*) are injected here.
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl,
+				 conn_oauth_client_id_func clientid_impl,
+				 conn_oauth_client_secret_func clientsecret_impl,
+				 conn_oauth_discovery_uri_func discoveryuri_impl,
+				 conn_oauth_issuer_id_func issuerid_impl,
+				 conn_oauth_scope_func scope_impl,
+				 conn_sasl_state_func saslstate_impl,
+				 set_conn_altsock_func setaltsock_impl,
+				 set_conn_oauth_token_func settoken_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+	conn_oauth_client_id = clientid_impl;
+	conn_oauth_client_secret = clientsecret_impl;
+	conn_oauth_discovery_uri = discoveryuri_impl;
+	conn_oauth_issuer_id = issuerid_impl;
+	conn_oauth_scope = scope_impl;
+	conn_sasl_state = saslstate_impl;
+	set_conn_altsock = setaltsock_impl;
+	set_conn_oauth_token = settoken_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build didn't enable NLS but the libpq-oauth
+		 * build did. That's an odd mismatch, but we can handle it.
+		 *
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..f4ffefef208
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,94 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "fe-auth-oauth.h"
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+/*
+ * A bank of callbacks to safely access members of PGconn, which are all passed
+ * to libpq_oauth_init() by libpq.
+ *
+ * Keep these aligned with the definitions in fe-auth-oauth.c as well as the
+ * static declarations in oauth-curl.c.
+ */
+#define DECLARE_GETTER(TYPE, MEMBER) \
+	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
+	extern conn_ ## MEMBER ## _func conn_ ## MEMBER;
+
+#define DECLARE_SETTER(TYPE, MEMBER) \
+	typedef void (*set_conn_ ## MEMBER ## _func) (PGconn *conn, TYPE val); \
+	extern set_conn_ ## MEMBER ## _func set_conn_ ## MEMBER;
+
+DECLARE_GETTER(PQExpBuffer, errorMessage);
+DECLARE_GETTER(char *, oauth_client_id);
+DECLARE_GETTER(char *, oauth_client_secret);
+DECLARE_GETTER(char *, oauth_discovery_uri);
+DECLARE_GETTER(char *, oauth_issuer_id);
+DECLARE_GETTER(char *, oauth_scope);
+DECLARE_GETTER(fe_oauth_state *, sasl_state);
+
+DECLARE_SETTER(pgsocket, altsock);
+DECLARE_SETTER(char *, oauth_token);
+
+#undef DECLARE_GETTER
+#undef DECLARE_SETTER
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl,
+										 conn_oauth_client_id_func clientid_impl,
+										 conn_oauth_client_secret_func clientsecret_impl,
+										 conn_oauth_discovery_uri_func discoveryuri_impl,
+										 conn_oauth_issuer_id_func issuerid_impl,
+										 conn_oauth_scope_func scope_impl,
+										 conn_sasl_state_func saslstate_impl,
+										 set_conn_altsock_func setaltsock_impl,
+										 set_conn_oauth_token_func settoken_impl);
+
+/*
+ * Duplicated APIs, copied from libpq (primarily libpq-int.h, which we cannot
+ * depend on here).
+ */
+
+typedef enum
+{
+	PG_BOOL_UNKNOWN = 0,		/* Currently unknown */
+	PG_BOOL_YES,				/* Yes (true) */
+	PG_BOOL_NO					/* No (false) */
+} PGTernaryBool;
+
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#ifdef ENABLE_NLS
+extern char *libpq_gettext(const char *msgid) pg_attribute_format_arg(1);
+#else
+#define libpq_gettext(x) (x)
+#endif
+
+extern pgthreadlock_t pg_g_threadlock;
+
+#define pglock_thread()		pg_g_threadlock(true)
+#define pgunlock_thread()	pg_g_threadlock(false)
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..c6fe5fec7f6 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,7 +31,6 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
-	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -64,9 +63,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
+# The OAuth implementation differs depending on the type of library being built.
+OBJS_STATIC = fe-auth-oauth.o
+
+fe-auth-oauth_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+OBJS_SHLIB = fe-auth-oauth_shlib.o
 
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
@@ -86,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -101,12 +102,26 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
+ifeq ($(with_libcurl),yes)
+# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+# libpq-oauth needs libcurl. Put both into *.private.
+PKG_CONFIG_REQUIRES_PRIVATE += libcurl
+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth
+endif
+
 all: all-lib libpq-refs-stamp
 
 # Shared library stuff
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +130,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +137,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -138,6 +151,11 @@ fe-misc.o: fe-misc.c $(top_builddir)/src/port/pg_config_paths.h
 $(top_builddir)/src/port/pg_config_paths.h:
 	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
 
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
 install: all installdirs install-lib
 	$(INSTALL_DATA) $(srcdir)/libpq-fe.h '$(DESTDIR)$(includedir)'
 	$(INSTALL_DATA) $(srcdir)/libpq-events.h '$(DESTDIR)$(includedir)'
@@ -171,6 +189,6 @@ uninstall: uninstall-lib
 clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
-	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f $(OBJS) $(OBJS_SHLIB) $(OBJS_STATIC) pthread.h libpq-refs-stamp
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index ab6a45e2aba..9fbff89a21d 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifdef USE_DYNAMIC_OAUTH
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,218 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+/*-------------
+ * Builtin Flow
+ *
+ * There are three potential implementations of use_builtin_flow:
+ *
+ * 1) If the OAuth client is disabled at configuration time, return false.
+ *    Dependent clients must provide their own flow.
+ * 2) If the OAuth client is enabled and USE_DYNAMIC_OAUTH is defined, dlopen()
+ *    the libpq-oauth plugin and use its implementation.
+ * 3) Otherwise, use flow callbacks that are statically linked into the
+ *    executable.
+ */
+
+#if !defined(USE_LIBCURL)
+
+/*
+ * This configuration doesn't support the builtin flow.
+ */
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#elif defined(USE_DYNAMIC_OAUTH)
+
+/*
+ * Use the builtin flow in the libpq-oauth plugin, which is loaded at runtime.
+ */
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+
+/*
+ * Define accessor/mutator shims to inject into libpq-oauth, so that it doesn't
+ * depend on the offsets within PGconn. (These have changed during minor version
+ * updates in the past.)
+ */
+
+#define DEFINE_GETTER(TYPE, MEMBER) \
+	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
+	static TYPE conn_ ## MEMBER(PGconn *conn) { return conn->MEMBER; }
+
+/* Like DEFINE_GETTER, but returns a pointer to the member. */
+#define DEFINE_GETTER_P(TYPE, MEMBER) \
+	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
+	static TYPE conn_ ## MEMBER(PGconn *conn) { return &conn->MEMBER; }
+
+#define DEFINE_SETTER(TYPE, MEMBER) \
+	typedef void (*set_conn_ ## MEMBER ## _func) (PGconn *conn, TYPE val); \
+	static void set_conn_ ## MEMBER(PGconn *conn, TYPE val) { conn->MEMBER = val; }
+
+DEFINE_GETTER_P(PQExpBuffer, errorMessage);
+DEFINE_GETTER(char *, oauth_client_id);
+DEFINE_GETTER(char *, oauth_client_secret);
+DEFINE_GETTER(char *, oauth_discovery_uri);
+DEFINE_GETTER(char *, oauth_issuer_id);
+DEFINE_GETTER(char *, oauth_scope);
+DEFINE_GETTER(fe_oauth_state *, sasl_state);
+
+DEFINE_SETTER(pgsocket, altsock);
+DEFINE_SETTER(char *, oauth_token);
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	static bool initialized = false;
+	static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
+	int			lockerr;
+
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl,
+						 conn_oauth_client_id_func clientid_impl,
+						 conn_oauth_client_secret_func clientsecret_impl,
+						 conn_oauth_discovery_uri_func discoveryuri_impl,
+						 conn_oauth_issuer_id_func issuerid_impl,
+						 conn_oauth_scope_func scope_impl,
+						 conn_sasl_state_func saslstate_impl,
+						 set_conn_altsock_func setaltsock_impl,
+						 set_conn_oauth_token_func settoken_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	/*
+	 * On macOS only, load the module using its absolute install path; the
+	 * standard search behavior is not very helpful for this use case. Unlike
+	 * on other platforms, DYLD_LIBRARY_PATH is used as a fallback even with
+	 * absolute paths (modulo SIP effects), so tests can continue to work.
+	 *
+	 * On the other platforms, load the module using only the basename, to
+	 * rely on the runtime linker's standard search behavior.
+	 */
+	const char *const module_name =
+#if defined(__darwin__)
+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
+#else
+		"libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
+#endif
+
+	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition, it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Past this point, we do not unload the module. It stays in the process
+	 * permanently.
+	 */
+
+	/*
+	 * We need to inject necessary function pointers into the module. This
+	 * only needs to be done once -- even if the pointers are constant,
+	 * assigning them while another thread is executing the flows feels like
+	 * tempting fate.
+	 */
+	if ((lockerr = pthread_mutex_lock(&init_mutex)) != 0)
+	{
+		/* Should not happen... but don't continue if it does. */
+		Assert(false);
+
+		libpq_append_conn_error(conn, "failed to lock mutex (%d)", lockerr);
+		return false;
+	}
+
+	if (!initialized)
+	{
+		init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+			 libpq_gettext,
+#else
+			 NULL,
+#endif
+			 conn_errorMessage,
+			 conn_oauth_client_id,
+			 conn_oauth_client_secret,
+			 conn_oauth_discovery_uri,
+			 conn_oauth_issuer_id,
+			 conn_oauth_scope,
+			 conn_sasl_state,
+			 set_conn_altsock,
+			 set_conn_oauth_token);
+
+		initialized = true;
+	}
+
+	pthread_mutex_unlock(&init_mutex);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else
+
+/*
+ * Use the builtin flow in libpq-oauth.a (see libpq-oauth/oauth-curl.h).
+ */
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
+
+#endif							/* USE_LIBCURL */
+
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +1009,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
-		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		libpq_append_conn_error(conn, "no OAuth flows are available (try installing the libpq-oauth package)");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..0d59e91605b 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -15,8 +15,8 @@
 #ifndef FE_AUTH_OAUTH_H
 #define FE_AUTH_OAUTH_H
 
+#include "fe-auth-sasl.h"
 #include "libpq-fe.h"
-#include "libpq-int.h"
 
 
 enum fe_oauth_step
@@ -27,18 +27,24 @@ enum fe_oauth_step
 	FE_OAUTH_SERVER_ERROR,
 };
 
+/*
+ * This struct is exported to the libpq-oauth module. If changes are needed
+ * during backports to stable branches, please keep ABI compatibility (no
+ * changes to existing members, add new members at the end, etc.).
+ */
 typedef struct
 {
 	enum fe_oauth_step step;
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..a74e885b169 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
@@ -50,6 +46,9 @@ export_file = custom_target('libpq.exports',
 libpq_inc = include_directories('.', '../../port')
 libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 
+# The OAuth implementation differs depending on the type of library being built.
+libpq_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
 # Not using both_libraries() here as
 # 1) resource files should only be in the shared library
 # 2) we want the .pc file to include a dependency to {pgport,common}_static for
@@ -70,7 +69,7 @@ libpq_st = static_library('libpq',
 libpq_so = shared_library('libpq',
   libpq_sources + libpq_so_sources,
   include_directories: [libpq_inc, postgres_inc],
-  c_args: libpq_c_args,
+  c_args: libpq_c_args + libpq_so_c_args,
   c_pch: pch_postgres_fe_h,
   version: '5.' + pg_version_major.to_string(),
   soversion: host_system != 'windows' ? '5' : '',
@@ -86,12 +85,26 @@ libpq = declare_dependency(
   include_directories: [include_directories('.')]
 )
 
+private_deps = [
+  frontend_stlib_code,
+  libpq_deps,
+]
+
+if oauth_flow_supported
+  # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+  # libpq-oauth needs libcurl. Put both into *.private.
+  private_deps += [
+    libpq_oauth_deps,
+    '-lpq-oauth',
+  ]
+endif
+
 pkgconfig.generate(
   name: 'libpq',
   description: 'PostgreSQL libpq library',
   url: pg_url,
   libraries: libpq,
-  libraries_private: [frontend_stlib_code, libpq_deps],
+  libraries_private: private_deps,
 )
 
 install_headers(
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 55da678ec27..91a8de1ee9b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -203,6 +203,8 @@ pgxs_empty = [
   'LIBNUMA_CFLAGS', 'LIBNUMA_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 36d1b26369f..e190f9cf15a 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -78,7 +78,7 @@ tests += {
     ],
     'env': {
       'PYTHON': python.path(),
-      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_libcurl': oauth_flow_supported ? 'yes' : 'no',
       'with_python': 'yes',
     },
   },
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 8dd502f41e1..21d4acc1926 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -110,7 +110,7 @@ if ($ENV{with_libcurl} ne 'yes')
 		"fails without custom hook installed",
 		flags => ["--no-hook"],
 		expected_stderr =>
-		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+		  qr/no OAuth flows are available \(try installing the libpq-oauth package\)/
 	);
 }
 
-- 
2.34.1

#373Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#372)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 29 Apr 2025, at 02:10, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Wed, Apr 23, 2025 at 10:46 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Are there any readers who feel like an internal ABI version for
`struct pg_conn`, bumped during breaking backports, would be
acceptable? (More definitively: are there any readers who would veto
that?)

To keep things moving: I assume this is unacceptable. So v10 redirects
every access to a PGconn struct member through a shim, similarly to
how conn->errorMessage was translated in v9. This adds plenty of new
boilerplate, but not a whole lot of complexity. To try to keep us
honest, libpq-int.h has been removed from the libpq-oauth includes.

That admittedly seems like a win regardless.

This will now handle in-place minor version upgrades that swap pg_conn
internals around, so I've gone back to -MAJOR versioning alone.
fe_oauth_state is still exported; it now has an ABI warning above it.
(I figure that's easier to draw a line around during backports,
compared to everything in PGconn. We can still break things there
during major version upgrades.)

While I'm far from an expert on this subject (luckily there are several in this
thread), I was unable to find any sharp edges while reading and testing this
version of the patch. A few small comments:

+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
+later split out as its own shared library in order to isolate its dependency on
+libcurl. (End users who don't want the Curl dependency can simply choose not to
+install this module.)

We should either clarify that it was never shipped as part of libpq core, or
remove this altogether. I would vote for the latter since we typically don't
document changes that happen during the devcycle. How about something like:

+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It is maintained as its own shared library in order to
+isolate its dependency on libcurl. (End users who don't want the Curl dependency
+can simply choose not to install this module.)
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+             <snip>
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run.

I think this explanatory paragraph should come before the function prototype.
The following paragraph on the setters/getters make sense where it is though.

+#if defined(USE_DYNAMIC_OAUTH) && defined(LIBPQ_INT_H)
+#error do not rely on libpq-int.h in libpq-oauth.so
+#endif

Nitpick, but it won't be .so everywhere. Would this be clearer if spelled out
with something like "do not rely on libpq-int.h when building libpq-oauth as a
dynamic shared lib"?

--
Daniel Gustafsson

#374Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#373)
2 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 30, 2025 at 5:55 AM Daniel Gustafsson <daniel@yesql.se> wrote:

To keep things moving: I assume this is unacceptable. So v10 redirects
every access to a PGconn struct member through a shim, similarly to
how conn->errorMessage was translated in v9. This adds plenty of new
boilerplate, but not a whole lot of complexity. To try to keep us
honest, libpq-int.h has been removed from the libpq-oauth includes.

That admittedly seems like a win regardless.

Yeah, it moves us much closer to the long-term goal.

We should either clarify that it was never shipped as part of libpq core, or
remove this altogether.

Done in v11, with your suggested wording.

I think this explanatory paragraph should come before the function prototype.

Done.

Nitpick, but it won't be .so everywhere. Would this be clearer if spelled out
with something like "do not rely on libpq-int.h when building libpq-oauth as a
dynamic shared lib"?

I went with "do not rely on libpq-int.h in dynamic builds of
libpq-oauth", since devs are hopefully going to be the only people who
see it. I've also fixed up an errant #endif label right above it.

I'd ideally like to get a working split in for beta. Barring
objections, I plan to get this pushed tomorrow so that the buildfarm
has time to highlight any corner cases well before the Saturday
freeze. I still see the choice of naming (with its forced-ABI break
every major version) as needing more scrutiny, and probably worth a
Revisit entry.

The CI still looks happy, and I will spend today with VMs and more
testing on the Autoconf side. I'll try to peer at Alpine and musl
libc, too; dogfish and basilisk are the Curl-enabled animals that
caught my attention most.

Thanks!
--Jacob

Attachments:

since-v10.diff.txt (text/plain)
1:  e86e93f7ac8 ! 1:  5a1d1345919 oauth: Move the builtin flow into a separate module
    @@ Commit message
     
         Per request from Tom Lane and Bruce Momjian. Based on an initial patch
         by Daniel Gustafsson, who also contributed docs changes. The "bare"
    -    dlopen() concept came from Thomas Munro. Many many people reviewed the
    -    design and implementation; thank you!
    +    dlopen() concept came from Thomas Munro. Many people reviewed the design
    +    and implementation; thank you!
     
         Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
         Reviewed-by: Andres Freund <andres@anarazel.de>
         Reviewed-by: Christoph Berg <myon@debian.org>
    +    Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
         Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
         Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
         Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
    @@ src/interfaces/libpq-oauth/Makefile (new)
      ## src/interfaces/libpq-oauth/README (new) ##
     @@
     +libpq-oauth is an optional module implementing the Device Authorization flow for
    -+OAuth clients (RFC 8628). It was originally developed as part of libpq core and
    -+later split out as its own shared library in order to isolate its dependency on
    -+libcurl. (End users who don't want the Curl dependency can simply choose not to
    -+install this module.)
    ++OAuth clients (RFC 8628). It is maintained as its own shared library in order to
    ++isolate its dependency on libcurl. (End users who don't want the Curl dependency
    ++can simply choose not to install this module.)
     +
     +If a connection string allows the use of OAuth, and the server asks for it, and
     +a libpq client has not installed its own custom OAuth flow, libpq will attempt
    @@ src/interfaces/libpq-oauth/README (new)
     +pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
     +conn->async_auth and conn->cleanup_async_auth, respectively.
     +
    ++At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
    ++libpq_gettext(), which must be injected by libpq using this initialization
    ++function before the flow is run:
    ++
     +- void libpq_oauth_init(pgthreadlock_t threadlock,
     +						libpq_gettext_func gettext_impl,
     +						conn_errorMessage_func errmsg_impl,
    @@ src/interfaces/libpq-oauth/README (new)
     +						set_conn_altsock_func setaltsock_impl,
     +						set_conn_oauth_token_func settoken_impl);
     +
    -+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
    -+libpq_gettext(), which must be injected by libpq using this initialization
    -+function before the flow is run.
    -+
     +It also relies on access to several members of the PGconn struct. Not only can
     +these change positions across minor versions, but the offsets aren't necessarily
     +stable within a single minor release (conn->errorMessage, for instance, can
    @@ src/interfaces/libpq/fe-auth-oauth-curl.c => src/interfaces/libpq-oauth/oauth-cu
     +#define set_conn_altsock(CONN, VAL) do { CONN->altsock = VAL; } while (0)
     +#define set_conn_oauth_token(CONN, VAL) do { CONN->oauth_token = VAL; } while (0)
     +
    -+#endif							/* !USE_DYNAMIC_OAUTH */
    ++#endif							/* USE_DYNAMIC_OAUTH */
     +
     +/* One final guardrail against accidental inclusion... */
     +#if defined(USE_DYNAMIC_OAUTH) && defined(LIBPQ_INT_H)
    -+#error do not rely on libpq-int.h in libpq-oauth.so
    ++#error do not rely on libpq-int.h in dynamic builds of libpq-oauth
     +#endif
      
      /*
    @@ src/interfaces/libpq-oauth/oauth-utils.c (new)
     +#endif
     +
     +#ifdef LIBPQ_INT_H
    -+#error do not rely on libpq-int.h in libpq-oauth
    ++#error do not rely on libpq-int.h in dynamic builds of libpq-oauth
     +#endif
     +
     +/*
v11-0001-oauth-Move-the-builtin-flow-into-a-separate-modu.patch (application/octet-stream)
From 5a1d134591976ddb235aa4c077b2ea623f368f0a Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 26 Mar 2025 10:55:28 -0700
Subject: [PATCH v11] oauth: Move the builtin flow into a separate module

The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.

When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.

For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.

The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid
exporting these internals to all libpq clients forever, these
dependencies are instead injected from the libpq side via an
initialization function. This also lets libpq communicate the offsets of
PGconn struct members to libpq-oauth, so that we can function without
crashing if the module on the search path came from a different build of
Postgres. (A minor-version upgrade could swap the libpq-oauth module out
from under a long-running libpq client before it does its first load of
the OAuth flow.)

This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across Postgres versions. (Future improvements may promote this
"OAuth flow plugin" to a first-class concept, at which point we would
need a public API to replace this anyway.)

Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.

Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many people reviewed the design
and implementation; thank you!

Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
---
 config/programs.m4                            |  17 +-
 configure                                     |  50 +++-
 configure.ac                                  |  26 +-
 doc/src/sgml/installation.sgml                |   8 +
 doc/src/sgml/libpq.sgml                       |  30 ++-
 meson.build                                   |  32 ++-
 src/Makefile.global.in                        |   3 +
 src/interfaces/Makefile                       |  12 +
 src/interfaces/libpq-oauth/Makefile           |  83 +++++++
 src/interfaces/libpq-oauth/README             |  57 +++++
 src/interfaces/libpq-oauth/exports.txt        |   4 +
 src/interfaces/libpq-oauth/meson.build        |  45 ++++
 .../oauth-curl.c}                             | 180 ++++++++------
 src/interfaces/libpq-oauth/oauth-curl.h       |  24 ++
 src/interfaces/libpq-oauth/oauth-utils.c      | 233 ++++++++++++++++++
 src/interfaces/libpq-oauth/oauth-utils.h      |  94 +++++++
 src/interfaces/libpq/Makefile                 |  36 ++-
 src/interfaces/libpq/exports.txt              |   1 +
 src/interfaces/libpq/fe-auth-oauth.c          | 229 ++++++++++++++++-
 src/interfaces/libpq/fe-auth-oauth.h          |  12 +-
 src/interfaces/libpq/meson.build              |  25 +-
 src/interfaces/libpq/nls.mk                   |  12 +-
 src/makefiles/meson.build                     |   2 +
 src/test/modules/oauth_validator/meson.build  |   2 +-
 .../modules/oauth_validator/t/002_client.pl   |   2 +-
 25 files changed, 1079 insertions(+), 140 deletions(-)
 create mode 100644 src/interfaces/libpq-oauth/Makefile
 create mode 100644 src/interfaces/libpq-oauth/README
 create mode 100644 src/interfaces/libpq-oauth/exports.txt
 create mode 100644 src/interfaces/libpq-oauth/meson.build
 rename src/interfaces/{libpq/fe-auth-oauth-curl.c => libpq-oauth/oauth-curl.c} (94%)
 create mode 100644 src/interfaces/libpq-oauth/oauth-curl.h
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.c
 create mode 100644 src/interfaces/libpq-oauth/oauth-utils.h

diff --git a/config/programs.m4 b/config/programs.m4
index 0a07feb37cc..0ad1e58b48d 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -286,9 +286,20 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
-  AC_CHECK_LIB(curl, curl_multi_init, [],
+  AC_CHECK_LIB(curl, curl_multi_init, [
+				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
+				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
+			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   AC_CACHE_CHECK([for curl_global_init thread safety], [pgac_cv__libcurl_threadsafe_init],
@@ -338,4 +349,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
   fi
+
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 0936010718d..a4c4bcb40ea 100755
--- a/configure
+++ b/configure
@@ -655,6 +655,7 @@ UUID_LIBS
 LDAP_LIBS_BE
 LDAP_LIBS_FE
 with_ssl
+LIBCURL_LDLIBS
 PTHREAD_CFLAGS
 PTHREAD_LIBS
 PTHREAD_CC
@@ -711,6 +712,8 @@ with_libxml
 LIBNUMA_LIBS
 LIBNUMA_CFLAGS
 with_libnuma
+LIBCURL_LDFLAGS
+LIBCURL_CPPFLAGS
 LIBCURL_LIBS
 LIBCURL_CFLAGS
 with_libcurl
@@ -9053,19 +9056,27 @@ $as_echo "yes" >&6; }
 
 fi
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+
+
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: *** OAuth support tests require --with-python to run" >&5
@@ -12704,9 +12715,6 @@ fi
 
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
 
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
@@ -12754,17 +12762,26 @@ fi
 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_curl_curl_multi_init" >&5
 $as_echo "$ac_cv_lib_curl_curl_multi_init" >&6; }
 if test "x$ac_cv_lib_curl_curl_multi_init" = xyes; then :
-  cat >>confdefs.h <<_ACEOF
-#define HAVE_LIBCURL 1
-_ACEOF
 
-  LIBS="-lcurl $LIBS"
+
+$as_echo "#define HAVE_LIBCURL 1" >>confdefs.h
+
+				 LIBCURL_LDLIBS=-lcurl
+
 
 else
   as_fn_error $? "library 'curl' does not provide curl_multi_init" "$LINENO" 5
 fi
 
 
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+  LIBS="$LIBCURL_LDLIBS $LIBS"
+
   # Check to see whether the current platform supports threadsafe Curl
   # initialization.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_global_init thread safety" >&5
@@ -12868,6 +12885,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** to use it with libpq." "$LINENO" 5
   fi
 
+  CPPFLAGS=$pgac_save_CPPFLAGS
+  LDFLAGS=$pgac_save_LDFLAGS
+  LIBS=$pgac_save_LIBS
+
 fi
 
 if test "$with_gssapi" = yes ; then
@@ -14516,6 +14537,13 @@ done
 
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    as_fn_error $? "client OAuth is not supported on this platform" "$LINENO" 5
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/configure.ac b/configure.ac
index 2a78cddd825..c0471030e90 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1033,19 +1033,27 @@ if test "$with_libcurl" = yes ; then
   # to explicitly set TLS 1.3 ciphersuites).
   PKG_CHECK_MODULES(LIBCURL, [libcurl >= 7.61.0])
 
-  # We only care about -I, -D, and -L switches;
-  # note that -lcurl will be added by PGAC_CHECK_LIBCURL below.
+  # Curl's flags are kept separate from the standard CPPFLAGS/LDFLAGS. We use
+  # them only for libpq-oauth.
+  LIBCURL_CPPFLAGS=
+  LIBCURL_LDFLAGS=
+
+  # We only care about -I, -D, and -L switches. Note that -lcurl will be added
+  # to LIBCURL_LDLIBS by PGAC_CHECK_LIBCURL, below.
   for pgac_option in $LIBCURL_CFLAGS; do
     case $pgac_option in
-      -I*|-D*) CPPFLAGS="$CPPFLAGS $pgac_option";;
+      -I*|-D*) LIBCURL_CPPFLAGS="$LIBCURL_CPPFLAGS $pgac_option";;
     esac
   done
   for pgac_option in $LIBCURL_LIBS; do
     case $pgac_option in
-      -L*) LDFLAGS="$LDFLAGS $pgac_option";;
+      -L*) LIBCURL_LDFLAGS="$LIBCURL_LDFLAGS $pgac_option";;
     esac
   done
 
+  AC_SUBST(LIBCURL_CPPFLAGS)
+  AC_SUBST(LIBCURL_LDFLAGS)
+
   # OAuth requires python for testing
   if test "$with_python" != yes; then
     AC_MSG_WARN([*** OAuth support tests require --with-python to run])
@@ -1354,9 +1362,6 @@ failure.  It is possible the compiler isn't looking in the proper directory.
 Use --without-zlib to disable zlib support.])])
 fi
 
-# XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-# during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-# dependency on that platform?
 if test "$with_libcurl" = yes ; then
   PGAC_CHECK_LIBCURL
 fi
@@ -1654,6 +1659,13 @@ if test "$PORTNAME" = "win32" ; then
    AC_CHECK_HEADERS(crtdefs.h)
 fi
 
+if test "$with_libcurl" = yes ; then
+  # Error out early if this platform can't support libpq-oauth.
+  if test "$ac_cv_header_sys_event_h" != yes -a "$ac_cv_header_sys_epoll_h" != yes; then
+    AC_MSG_ERROR([client-side OAuth is not supported on this platform])
+  fi
+fi
+
 ##
 ## Types, structures, compiler characteristics
 ##
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index e7ffb942bbd..60419312113 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -313,6 +313,14 @@
      </para>
     </listitem>
 
+    <listitem>
+     <para>
+      You need <productname>Curl</productname> to build an optional module
+      which implements the <link linkend="libpq-oauth">OAuth Device
+      Authorization flow</link> for client applications.
+     </para>
+    </listitem>
+
     <listitem>
      <para>
       You need <productname>LZ4</productname>, if you want to support
diff --git a/doc/src/sgml/libpq.sgml b/doc/src/sgml/libpq.sgml
index 8cdd2997d43..695fe958c3e 100644
--- a/doc/src/sgml/libpq.sgml
+++ b/doc/src/sgml/libpq.sgml
@@ -10226,15 +10226,20 @@ void PQinitSSL(int do_ssl);
   <title>OAuth Support</title>
 
   <para>
-   libpq implements support for the OAuth v2 Device Authorization client flow,
+   <application>libpq</application> implements support for the OAuth v2 Device Authorization client flow,
    documented in
    <ulink url="https://datatracker.ietf.org/doc/html/rfc8628">RFC 8628</ulink>,
-   which it will attempt to use by default if the server
+   as an optional module. See the <link linkend="configure-option-with-libcurl">
+   installation documentation</link> for information on how to enable support
+   for Device Authorization as a builtin flow.
+  </para>
+  <para>
+   When support is enabled and the optional module installed, <application>libpq</application>
+   will use the builtin flow by default if the server
    <link linkend="auth-oauth">requests a bearer token</link> during
    authentication. This flow can be utilized even if the system running the
    client application does not have a usable web browser, for example when
-   running a client via <application>SSH</application>. Client applications may implement their own flows
-   instead; see <xref linkend="libpq-oauth-authdata-hooks"/>.
+   running a client via <acronym>SSH</acronym>.
   </para>
   <para>
    The builtin flow will, by default, print a URL to visit and a user code to
@@ -10251,6 +10256,11 @@ Visit https://example.com/device and enter the code: ABCD-EFGH
    they match expectations, before continuing. Permissions should not be given
    to untrusted third parties.
   </para>
+  <para>
+   Client applications may implement their own flows to customize interaction
+   and integration with applications. See <xref linkend="libpq-oauth-authdata-hooks"/>
+   for more information on how to add a custom flow to <application>libpq</application>.
+  </para>
   <para>
    For an OAuth client flow to be usable, the connection string must at minimum
    contain <xref linkend="libpq-connect-oauth-issuer"/> and
@@ -10366,7 +10376,9 @@ typedef struct _PGpromptOAuthDevice
 </synopsis>
         </para>
         <para>
-         The OAuth Device Authorization flow included in <application>libpq</application>
+         The OAuth Device Authorization flow which
+         <link linkend="configure-option-with-libcurl">can be included</link>
+         in <application>libpq</application>
          requires the end user to visit a URL with a browser, then enter a code
          which permits <application>libpq</application> to connect to the server
          on their behalf. The default prompt simply prints the
@@ -10378,7 +10390,8 @@ typedef struct _PGpromptOAuthDevice
          This callback is only invoked during the builtin device
          authorization flow. If the application installs a
          <link linkend="libpq-oauth-authdata-oauth-bearer-token">custom OAuth
-         flow</link>, this authdata type will not be used.
+         flow</link>, or <application>libpq</application> was not built with
+         support for the builtin flow, this authdata type will not be used.
         </para>
         <para>
          If a non-NULL <structfield>verification_uri_complete</structfield> is
@@ -10400,8 +10413,9 @@ typedef struct _PGpromptOAuthDevice
        </term>
        <listitem>
         <para>
-         Replaces the entire OAuth flow with a custom implementation. The hook
-         should either directly return a Bearer token for the current
+         Adds a custom implementation of a flow, replacing the builtin flow if
+         it is <link linkend="configure-option-with-libcurl">installed</link>.
+         The hook should either directly return a Bearer token for the current
          user/issuer/scope combination, if one is available without blocking, or
          else set up an asynchronous callback to retrieve one.
         </para>
diff --git a/meson.build b/meson.build
index a1516e54529..29d46c8ad01 100644
--- a/meson.build
+++ b/meson.build
@@ -107,6 +107,7 @@ os_deps = []
 backend_both_deps = []
 backend_deps = []
 libpq_deps = []
+libpq_oauth_deps = []
 
 pg_sysroot = ''
 
@@ -860,13 +861,13 @@ endif
 ###############################################################
 
 libcurlopt = get_option('libcurl')
+oauth_flow_supported = false
+
 if not libcurlopt.disabled()
   # Check for libcurl 7.61.0 or higher (corresponding to RHEL8 and the ability
   # to explicitly set TLS 1.3 ciphersuites).
   libcurl = dependency('libcurl', version: '>= 7.61.0', required: libcurlopt)
   if libcurl.found()
-    cdata.set('USE_LIBCURL', 1)
-
     # Check to see whether the current platform supports thread-safe Curl
     # initialization.
     libcurl_threadsafe_init = false
@@ -938,6 +939,22 @@ if not libcurlopt.disabled()
     endif
   endif
 
+  # Check that the current platform supports our builtin flow. This requires
+  # libcurl and one of either epoll or kqueue.
+  oauth_flow_supported = (
+    libcurl.found()
+    and (cc.check_header('sys/event.h', required: false,
+                         args: test_c_args, include_directories: postgres_inc)
+         or cc.check_header('sys/epoll.h', required: false,
+                            args: test_c_args, include_directories: postgres_inc))
+  )
+
+  if oauth_flow_supported
+    cdata.set('USE_LIBCURL', 1)
+  elif libcurlopt.enabled()
+    error('client-side OAuth is not supported on this platform')
+  endif
+
 else
   libcurl = not_found_dep
 endif
@@ -3272,17 +3289,18 @@ libpq_deps += [
 
   gssapi,
   ldap_r,
-  # XXX libcurl must link after libgssapi_krb5 on FreeBSD to avoid segfaults
-  # during gss_acquire_cred(). This is possibly related to Curl's Heimdal
-  # dependency on that platform?
-  libcurl,
   libintl,
   ssl,
 ]
 
+libpq_oauth_deps += [
+  libcurl,
+]
+
 subdir('src/interfaces/libpq')
-# fe_utils depends on libpq
+# fe_utils and libpq-oauth depend on libpq
 subdir('src/fe_utils')
+subdir('src/interfaces/libpq-oauth')
 
 # for frontend binaries
 frontend_code = declare_dependency(
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index 6722fbdf365..04952b533de 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -347,6 +347,9 @@ perl_embed_ldflags	= @perl_embed_ldflags@
 
 AWK	= @AWK@
 LN_S	= @LN_S@
+LIBCURL_CPPFLAGS = @LIBCURL_CPPFLAGS@
+LIBCURL_LDFLAGS = @LIBCURL_LDFLAGS@
+LIBCURL_LDLIBS = @LIBCURL_LDLIBS@
 MSGFMT  = @MSGFMT@
 MSGFMT_FLAGS = @MSGFMT_FLAGS@
 MSGMERGE = @MSGMERGE@
diff --git a/src/interfaces/Makefile b/src/interfaces/Makefile
index 7d56b29d28f..e6822caa206 100644
--- a/src/interfaces/Makefile
+++ b/src/interfaces/Makefile
@@ -14,7 +14,19 @@ include $(top_builddir)/src/Makefile.global
 
 SUBDIRS = libpq ecpg
 
+ifeq ($(with_libcurl), yes)
+SUBDIRS += libpq-oauth
+else
+ALWAYS_SUBDIRS += libpq-oauth
+endif
+
 $(recurse)
+$(recurse_always)
 
 all-ecpg-recurse: all-libpq-recurse
 install-ecpg-recurse: install-libpq-recurse
+
+ifeq ($(with_libcurl), yes)
+all-libpq-oauth-recurse: all-libpq-recurse
+install-libpq-oauth-recurse: install-libpq-recurse
+endif
diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
new file mode 100644
index 00000000000..3e4b34142e0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -0,0 +1,83 @@
+#-------------------------------------------------------------------------
+#
+# Makefile for libpq-oauth
+#
+# Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+# Portions Copyright (c) 1994, Regents of the University of California
+#
+# src/interfaces/libpq-oauth/Makefile
+#
+#-------------------------------------------------------------------------
+
+subdir = src/interfaces/libpq-oauth
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+
+PGFILEDESC = "libpq-oauth - device authorization OAuth support"
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+NAME = pq-oauth-$(MAJORVERSION)
+
+# Force the name "libpq-oauth" for both the static and shared libraries. The
+# staticlib doesn't need version information in its name.
+override shlib := lib$(NAME)$(DLSUFFIX)
+override stlib := libpq-oauth.a
+
+override CPPFLAGS := -I$(libpq_srcdir) -I$(top_builddir)/src/port $(LIBCURL_CPPFLAGS) $(CPPFLAGS)
+
+OBJS = \
+	$(WIN32RES)
+
+OBJS_STATIC = oauth-curl.o
+
+# The shared library needs additional glue symbols.
+OBJS_SHLIB = \
+	oauth-curl_shlib.o \
+	oauth-utils.o \
+
+oauth-utils.o: override CPPFLAGS += -DUSE_DYNAMIC_OAUTH
+oauth-curl_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
+SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_PREREQS = submake-libpq
+SHLIB_EXPORTS = exports.txt
+
+# Disable -bundle_loader on macOS.
+BE_DLLLIBS =
+
+# By default, a library without an SONAME doesn't get a static library, so we
+# add it to the build explicitly.
+all: all-lib all-static-lib
+
+# Shared library stuff
+include $(top_srcdir)/src/Makefile.shlib
+
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
+# Ignore the standard rules for SONAME-less installation; we want both the
+# static and shared libraries to go into libdir.
+install: all installdirs $(stlib) $(shlib)
+	$(INSTALL_SHLIB) $(shlib) '$(DESTDIR)$(libdir)/$(shlib)'
+	$(INSTALL_STLIB) $(stlib) '$(DESTDIR)$(libdir)/$(stlib)'
+
+installdirs:
+	$(MKDIR_P) '$(DESTDIR)$(libdir)'
+
+uninstall:
+	rm -f '$(DESTDIR)$(libdir)/$(stlib)'
+	rm -f '$(DESTDIR)$(libdir)/$(shlib)'
+
+clean distclean: clean-lib
+	rm -f $(OBJS) $(OBJS_STATIC) $(OBJS_SHLIB)
diff --git a/src/interfaces/libpq-oauth/README b/src/interfaces/libpq-oauth/README
new file mode 100644
index 00000000000..553962d644e
--- /dev/null
+++ b/src/interfaces/libpq-oauth/README
@@ -0,0 +1,57 @@
+libpq-oauth is an optional module implementing the Device Authorization flow for
+OAuth clients (RFC 8628). It is maintained as its own shared library in order to
+isolate its dependency on libcurl. (End users who don't want the Curl dependency
+can simply choose not to install this module.)
+
+If a connection string allows the use of OAuth, and the server asks for it, and
+a libpq client has not installed its own custom OAuth flow, libpq will attempt
+to delay-load this module using dlopen() and the following ABI. Failure to load
+results in a failed connection.
+
+= Load-Time ABI =
+
+This module ABI is an internal implementation detail, so it's subject to change
+across major releases; the name of the module (libpq-oauth-MAJOR) reflects this.
+The module exports the following symbols:
+
+- PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+- void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+pg_fe_run_oauth_flow and pg_fe_cleanup_oauth_flow are implementations of
+conn->async_auth and conn->cleanup_async_auth, respectively.
+
+At the moment, pg_fe_run_oauth_flow() relies on libpq's pg_g_threadlock and
+libpq_gettext(), which must be injected by libpq using this initialization
+function before the flow is run:
+
+- void libpq_oauth_init(pgthreadlock_t threadlock,
+						libpq_gettext_func gettext_impl,
+						conn_errorMessage_func errmsg_impl,
+						conn_oauth_client_id_func clientid_impl,
+						conn_oauth_client_secret_func clientsecret_impl,
+						conn_oauth_discovery_uri_func discoveryuri_impl,
+						conn_oauth_issuer_id_func issuerid_impl,
+						conn_oauth_scope_func scope_impl,
+						conn_sasl_state_func saslstate_impl,
+						set_conn_altsock_func setaltsock_impl,
+						set_conn_oauth_token_func settoken_impl);
+
+It also relies on access to several members of the PGconn struct. Not only can
+these change positions across minor versions, but the offsets aren't necessarily
+stable within a single minor release (conn->errorMessage, for instance, can
+change offsets depending on configure-time options). Therefore the necessary
+accessors (named conn_*) and mutators (set_conn_*) are injected here. With this
+approach, we can safely search the standard dlopen() paths (e.g. RPATH,
+LD_LIBRARY_PATH, the SO cache) for an implementation module to use, even if that
+module wasn't compiled at the same time as libpq -- which becomes especially
+important during "live upgrade" situations where a running libpq application has
+the libpq-oauth module updated out from under it before it's first loaded from
+disk.
+
+= Static Build =
+
+The static library libpq.a does not perform any dynamic loading. If the builtin
+flow is enabled, the application is expected to link against libpq-oauth.a
+directly to provide the necessary symbols. (libpq.a and libpq-oauth.a must be
+part of the same build. Unlike the dynamic module, there are no translation
+shims provided.)
diff --git a/src/interfaces/libpq-oauth/exports.txt b/src/interfaces/libpq-oauth/exports.txt
new file mode 100644
index 00000000000..6891a83dbf9
--- /dev/null
+++ b/src/interfaces/libpq-oauth/exports.txt
@@ -0,0 +1,4 @@
+# src/interfaces/libpq-oauth/exports.txt
+libpq_oauth_init          1
+pg_fe_run_oauth_flow      2
+pg_fe_cleanup_oauth_flow  3
diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
new file mode 100644
index 00000000000..9e7301a7f63
--- /dev/null
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -0,0 +1,45 @@
+# Copyright (c) 2022-2025, PostgreSQL Global Development Group
+
+if not oauth_flow_supported
+  subdir_done()
+endif
+
+libpq_oauth_sources = files(
+  'oauth-curl.c',
+)
+
+# The shared library needs additional glue symbols.
+libpq_oauth_so_sources = files(
+  'oauth-utils.c',
+)
+libpq_oauth_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
+export_file = custom_target('libpq-oauth.exports',
+  kwargs: gen_export_kwargs,
+)
+
+# port needs to be in include path due to pthread-win32.h
+libpq_oauth_inc = include_directories('.', '../libpq', '../../port')
+
+libpq_oauth_st = static_library('libpq-oauth',
+  libpq_oauth_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  kwargs: default_lib_args,
+)
+
+# This is an internal module; we don't want an SONAME and therefore do not set
+# SO_MAJOR_VERSION.
+libpq_oauth_name = 'libpq-oauth-@0@'.format(pg_version_major)
+
+libpq_oauth_so = shared_module(libpq_oauth_name,
+  libpq_oauth_sources + libpq_oauth_so_sources,
+  include_directories: [libpq_oauth_inc, postgres_inc],
  c_args: libpq_oauth_so_c_args,
+  c_pch: pch_postgres_fe_h,
+  dependencies: [frontend_shlib_code, libpq, libpq_oauth_deps],
+  link_depends: export_file,
+  link_args: export_fmt.format(export_file.full_path()),
+  kwargs: default_lib_args,
+)
diff --git a/src/interfaces/libpq/fe-auth-oauth-curl.c b/src/interfaces/libpq-oauth/oauth-curl.c
similarity index 94%
rename from src/interfaces/libpq/fe-auth-oauth-curl.c
rename to src/interfaces/libpq-oauth/oauth-curl.c
index c195e00cd28..d13b9cbabb4 100644
--- a/src/interfaces/libpq/fe-auth-oauth-curl.c
+++ b/src/interfaces/libpq-oauth/oauth-curl.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * fe-auth-oauth-curl.c
+ * oauth-curl.c
  *	   The libcurl implementation of OAuth/OIDC authentication, using the
  *	   OAuth Device Authorization Grant (RFC 8628).
  *
@@ -8,7 +8,7 @@
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * IDENTIFICATION
- *	  src/interfaces/libpq/fe-auth-oauth-curl.c
+ *	  src/interfaces/libpq-oauth/oauth-curl.c
  *
  *-------------------------------------------------------------------------
  */
@@ -17,20 +17,56 @@
 
 #include <curl/curl.h>
 #include <math.h>
-#ifdef HAVE_SYS_EPOLL_H
+#include <unistd.h>
+
+#if defined(HAVE_SYS_EPOLL_H)
 #include <sys/epoll.h>
 #include <sys/timerfd.h>
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 #include <sys/event.h>
+#else
+#error libpq-oauth is not supported on this platform
 #endif
-#include <unistd.h>
 
 #include "common/jsonapi.h"
-#include "fe-auth.h"
 #include "fe-auth-oauth.h"
-#include "libpq-int.h"
 #include "mb/pg_wchar.h"
+#include "oauth-curl.h"
+
+#ifdef USE_DYNAMIC_OAUTH
+
+/*
+ * The module build is decoupled from libpq-int.h, to try to avoid inadvertent
+ * ABI breaks during minor version bumps. Replacements for the missing internals
+ * are provided by oauth-utils.
+ */
+#include "oauth-utils.h"
+
+#else							/* !USE_DYNAMIC_OAUTH */
+
+/*
+ * Static builds may rely on PGconn offsets directly. Keep these aligned with
+ * the bank of callbacks in oauth-utils.h.
+ */
+#include "libpq-int.h"
+
+#define conn_errorMessage(CONN) (&CONN->errorMessage)
+#define conn_oauth_client_id(CONN) (CONN->oauth_client_id)
+#define conn_oauth_client_secret(CONN) (CONN->oauth_client_secret)
+#define conn_oauth_discovery_uri(CONN) (CONN->oauth_discovery_uri)
+#define conn_oauth_issuer_id(CONN) (CONN->oauth_issuer_id)
+#define conn_oauth_scope(CONN) (CONN->oauth_scope)
+#define conn_sasl_state(CONN) (CONN->sasl_state)
+
+#define set_conn_altsock(CONN, VAL) do { CONN->altsock = VAL; } while (0)
+#define set_conn_oauth_token(CONN, VAL) do { CONN->oauth_token = VAL; } while (0)
+
+#endif							/* USE_DYNAMIC_OAUTH */
+
+/* One final guardrail against accidental inclusion... */
+#if defined(USE_DYNAMIC_OAUTH) && defined(LIBPQ_INT_H)
+#error do not rely on libpq-int.h in dynamic builds of libpq-oauth
+#endif
 
 /*
  * It's generally prudent to set a maximum response size to buffer in memory,
@@ -303,7 +339,7 @@ free_async_ctx(PGconn *conn, struct async_ctx *actx)
 void
 pg_fe_cleanup_oauth_flow(PGconn *conn)
 {
-	fe_oauth_state *state = conn->sasl_state;
+	fe_oauth_state *state = conn_sasl_state(conn);
 
 	if (state->async_ctx)
 	{
@@ -311,7 +347,7 @@ pg_fe_cleanup_oauth_flow(PGconn *conn)
 		state->async_ctx = NULL;
 	}
 
-	conn->altsock = PGINVALID_SOCKET;
+	set_conn_altsock(conn, PGINVALID_SOCKET);
 }
 
 /*
@@ -1110,7 +1146,7 @@ parse_access_token(struct async_ctx *actx, struct token *tok)
 static bool
 setup_multiplexer(struct async_ctx *actx)
 {
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {.events = EPOLLIN};
 
 	actx->mux = epoll_create1(EPOLL_CLOEXEC);
@@ -1134,8 +1170,7 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	actx->mux = kqueue();
 	if (actx->mux < 0)
 	{
@@ -1158,10 +1193,9 @@ setup_multiplexer(struct async_ctx *actx)
 	}
 
 	return true;
+#else
+#error setup_multiplexer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support the Device Authorization flow on this platform");
-	return false;
 }
 
 /*
@@ -1174,7 +1208,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 {
 	struct async_ctx *actx = ctx;
 
-#ifdef HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct epoll_event ev = {0};
 	int			res;
 	int			op = EPOLL_CTL_ADD;
@@ -1230,8 +1264,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev[2] = {0};
 	struct kevent ev_out[2];
 	struct timespec timeout = {0};
@@ -1312,10 +1345,9 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 	}
 
 	return 0;
+#else
+#error register_socket is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support multiplexer sockets on this platform");
-	return -1;
 }
 
 /*
@@ -1334,7 +1366,7 @@ register_socket(CURL *curl, curl_socket_t socket, int what, void *ctx,
 static bool
 set_timer(struct async_ctx *actx, long timeout)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timeout < 0)
@@ -1363,8 +1395,7 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	struct kevent ev;
 
 #ifdef __NetBSD__
@@ -1419,10 +1450,9 @@ set_timer(struct async_ctx *actx, long timeout)
 	}
 
 	return true;
+#else
+#error set_timer is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return false;
 }
 
 /*
@@ -1433,7 +1463,7 @@ set_timer(struct async_ctx *actx, long timeout)
 static int
 timer_expired(struct async_ctx *actx)
 {
-#if HAVE_SYS_EPOLL_H
+#if defined(HAVE_SYS_EPOLL_H)
 	struct itimerspec spec = {0};
 
 	if (timerfd_gettime(actx->timerfd, &spec) < 0)
@@ -1453,8 +1483,7 @@ timer_expired(struct async_ctx *actx)
 	/* If the remaining time to expiration is zero, we're done. */
 	return (spec.it_value.tv_sec == 0
 			&& spec.it_value.tv_nsec == 0);
-#endif
-#ifdef HAVE_SYS_EVENT_H
+#elif defined(HAVE_SYS_EVENT_H)
 	int			res;
 
 	/* Is the timer queue ready? */
@@ -1466,10 +1495,9 @@ timer_expired(struct async_ctx *actx)
 	}
 
 	return (res > 0);
+#else
+#error timer_expired is not implemented on this platform
 #endif
-
-	actx_error(actx, "libpq does not support timers on this platform");
-	return -1;
 }
 
 /*
@@ -2070,8 +2098,9 @@ static bool
 check_issuer(struct async_ctx *actx, PGconn *conn)
 {
 	const struct provider *provider = &actx->provider;
+	const char *oauth_issuer_id = conn_oauth_issuer_id(conn);
 
-	Assert(conn->oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
+	Assert(oauth_issuer_id);	/* ensured by setup_oauth_parameters() */
 	Assert(provider->issuer);	/* ensured by parse_provider() */
 
 	/*---
@@ -2091,11 +2120,11 @@ check_issuer(struct async_ctx *actx, PGconn *conn)
 	 *    sent to. This comparison MUST use simple string comparison as defined
 	 *    in Section 6.2.1 of [RFC3986].
 	 */
-	if (strcmp(conn->oauth_issuer_id, provider->issuer) != 0)
+	if (strcmp(oauth_issuer_id, provider->issuer) != 0)
 	{
 		actx_error(actx,
 				   "the issuer identifier (%s) does not match oauth_issuer (%s)",
-				   provider->issuer, conn->oauth_issuer_id);
+				   provider->issuer, oauth_issuer_id);
 		return false;
 	}
 
@@ -2172,11 +2201,14 @@ check_for_device_flow(struct async_ctx *actx)
 static bool
 add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *conn)
 {
+	const char *oauth_client_id = conn_oauth_client_id(conn);
+	const char *oauth_client_secret = conn_oauth_client_secret(conn);
+
 	bool		success = false;
 	char	   *username = NULL;
 	char	   *password = NULL;
 
-	if (conn->oauth_client_secret)	/* Zero-length secrets are permitted! */
+	if (oauth_client_secret)	/* Zero-length secrets are permitted! */
 	{
 		/*----
 		 * Use HTTP Basic auth to send the client_id and secret. Per RFC 6749,
@@ -2204,8 +2236,8 @@ add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *c
 		 * would it be redundant, but some providers in the wild (e.g. Okta)
 		 * refuse to accept it.
 		 */
-		username = urlencode(conn->oauth_client_id);
-		password = urlencode(conn->oauth_client_secret);
+		username = urlencode(oauth_client_id);
+		password = urlencode(oauth_client_secret);
 
 		if (!username || !password)
 		{
@@ -2225,7 +2257,7 @@ add_client_identification(struct async_ctx *actx, PQExpBuffer reqbody, PGconn *c
 		 * If we're not otherwise authenticating, client_id is REQUIRED in the
 		 * request body.
 		 */
-		build_urlencoded(reqbody, "client_id", conn->oauth_client_id);
+		build_urlencoded(reqbody, "client_id", oauth_client_id);
 
 		CHECK_SETOPT(actx, CURLOPT_HTTPAUTH, CURLAUTH_NONE, goto cleanup);
 		actx->used_basic_auth = false;
@@ -2253,16 +2285,17 @@ cleanup:
 static bool
 start_device_authz(struct async_ctx *actx, PGconn *conn)
 {
+	const char *oauth_scope = conn_oauth_scope(conn);
 	const char *device_authz_uri = actx->provider.device_authorization_endpoint;
 	PQExpBuffer work_buffer = &actx->work_data;
 
-	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(conn_oauth_client_id(conn)); /* ensured by setup_oauth_parameters() */
 	Assert(device_authz_uri);	/* ensured by check_for_device_flow() */
 
 	/* Construct our request body. */
 	resetPQExpBuffer(work_buffer);
-	if (conn->oauth_scope && conn->oauth_scope[0])
-		build_urlencoded(work_buffer, "scope", conn->oauth_scope);
+	if (oauth_scope && oauth_scope[0])
+		build_urlencoded(work_buffer, "scope", oauth_scope);
 
 	if (!add_client_identification(actx, work_buffer, conn))
 		return false;
@@ -2344,7 +2377,7 @@ start_token_request(struct async_ctx *actx, PGconn *conn)
 	const char *device_code = actx->authz.device_code;
 	PQExpBuffer work_buffer = &actx->work_data;
 
-	Assert(conn->oauth_client_id);	/* ensured by setup_oauth_parameters() */
+	Assert(conn_oauth_client_id(conn)); /* ensured by setup_oauth_parameters() */
 	Assert(token_uri);			/* ensured by parse_provider() */
 	Assert(device_code);		/* ensured by parse_device_authz() */
 
@@ -2487,8 +2520,9 @@ prompt_user(struct async_ctx *actx, PGconn *conn)
 		.verification_uri_complete = actx->authz.verification_uri_complete,
 		.expires_in = actx->authz.expires_in,
 	};
+	PQauthDataHook_type hook = PQgetAuthDataHook();
 
-	res = PQauthDataHook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
+	res = hook(PQAUTHDATA_PROMPT_OAUTH_DEVICE, conn, &prompt);
 
 	if (!res)
 	{
@@ -2633,8 +2667,10 @@ done:
 static PostgresPollingStatusType
 pg_fe_run_oauth_flow_impl(PGconn *conn)
 {
-	fe_oauth_state *state = conn->sasl_state;
+	fe_oauth_state *state = conn_sasl_state(conn);
 	struct async_ctx *actx;
+	char	   *oauth_token = NULL;
+	PQExpBuffer errbuf;
 
 	if (!initialize_curl(conn))
 		return PGRES_POLLING_FAILED;
@@ -2676,7 +2712,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 	do
 	{
 		/* By default, the multiplexer is the altsock. Reassign as desired. */
-		conn->altsock = actx->mux;
+		set_conn_altsock(conn, actx->mux);
 
 		switch (actx->step)
 		{
@@ -2712,7 +2748,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				 */
 				if (!timer_expired(actx))
 				{
-					conn->altsock = actx->timerfd;
+					set_conn_altsock(conn, actx->timerfd);
 					return PGRES_POLLING_READING;
 				}
 
@@ -2732,7 +2768,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		{
 			case OAUTH_STEP_INIT:
 				actx->errctx = "failed to fetch OpenID discovery document";
-				if (!start_discovery(actx, conn->oauth_discovery_uri))
+				if (!start_discovery(actx, conn_oauth_discovery_uri(conn)))
 					goto error_return;
 
 				actx->step = OAUTH_STEP_DISCOVERY;
@@ -2768,9 +2804,15 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				break;
 
 			case OAUTH_STEP_TOKEN_REQUEST:
-				if (!handle_token_response(actx, &conn->oauth_token))
+				if (!handle_token_response(actx, &oauth_token))
 					goto error_return;
 
+				/*
+				 * Hook any oauth_token into the PGconn immediately so that
+				 * the allocation isn't lost in case of an error.
+				 */
+				set_conn_oauth_token(conn, oauth_token);
+
 				if (!actx->user_prompted)
 				{
 					/*
@@ -2783,7 +2825,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 					actx->user_prompted = true;
 				}
 
-				if (conn->oauth_token)
+				if (oauth_token)
 					break;		/* done! */
 
 				/*
@@ -2798,7 +2840,7 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 				 * the client wait directly on the timerfd rather than the
 				 * multiplexer.
 				 */
-				conn->altsock = actx->timerfd;
+				set_conn_altsock(conn, actx->timerfd);
 
 				actx->step = OAUTH_STEP_WAIT_INTERVAL;
 				actx->running = 1;
@@ -2818,48 +2860,40 @@ pg_fe_run_oauth_flow_impl(PGconn *conn)
 		 * point, actx->running will be set. But there are some corner cases
 		 * where we can immediately loop back around; see start_request().
 		 */
-	} while (!conn->oauth_token && !actx->running);
+	} while (!oauth_token && !actx->running);
 
 	/* If we've stored a token, we're done. Otherwise come back later. */
-	return conn->oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
+	return oauth_token ? PGRES_POLLING_OK : PGRES_POLLING_READING;
 
 error_return:
+	errbuf = conn_errorMessage(conn);
 
 	/*
 	 * Assemble the three parts of our error: context, body, and detail. See
 	 * also the documentation for struct async_ctx.
 	 */
 	if (actx->errctx)
-	{
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext(actx->errctx));
-		appendPQExpBufferStr(&conn->errorMessage, ": ");
-	}
+		appendPQExpBuffer(errbuf, "%s: ", libpq_gettext(actx->errctx));
 
 	if (PQExpBufferDataBroken(actx->errbuf))
-		appendPQExpBufferStr(&conn->errorMessage,
-							 libpq_gettext("out of memory"));
+		appendPQExpBufferStr(errbuf, libpq_gettext("out of memory"));
 	else
-		appendPQExpBufferStr(&conn->errorMessage, actx->errbuf.data);
+		appendPQExpBufferStr(errbuf, actx->errbuf.data);
 
 	if (actx->curl_err[0])
 	{
-		size_t		len;
-
-		appendPQExpBuffer(&conn->errorMessage,
-						  " (libcurl: %s)", actx->curl_err);
+		appendPQExpBuffer(errbuf, " (libcurl: %s)", actx->curl_err);
 
 		/* Sometimes libcurl adds a newline to the error buffer. :( */
-		len = conn->errorMessage.len;
-		if (len >= 2 && conn->errorMessage.data[len - 2] == '\n')
+		if (errbuf->len >= 2 && errbuf->data[errbuf->len - 2] == '\n')
 		{
-			conn->errorMessage.data[len - 2] = ')';
-			conn->errorMessage.data[len - 1] = '\0';
-			conn->errorMessage.len--;
+			errbuf->data[errbuf->len - 2] = ')';
+			errbuf->data[errbuf->len - 1] = '\0';
+			errbuf->len--;
 		}
 	}
 
-	appendPQExpBufferChar(&conn->errorMessage, '\n');
+	appendPQExpBufferChar(errbuf, '\n');
 
 	return PGRES_POLLING_FAILED;
 }
diff --git a/src/interfaces/libpq-oauth/oauth-curl.h b/src/interfaces/libpq-oauth/oauth-curl.h
new file mode 100644
index 00000000000..248d0424ad0
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-curl.h
@@ -0,0 +1,24 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-curl.h
+ *
+ *	  Definitions for the OAuth Device Authorization module
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-curl.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_CURL_H
+#define OAUTH_CURL_H
+
+#include "libpq-fe.h"
+
+/* Exported async-auth callbacks. */
+extern PGDLLEXPORT PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern PGDLLEXPORT void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+#endif							/* OAUTH_CURL_H */
diff --git a/src/interfaces/libpq-oauth/oauth-utils.c b/src/interfaces/libpq-oauth/oauth-utils.c
new file mode 100644
index 00000000000..45fdc7579f2
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.c
@@ -0,0 +1,233 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.c
+ *
+ *	  "Glue" helpers providing a copy of some internal APIs from libpq. At
+ *	  some point in the future, we might be able to deduplicate.
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	  src/interfaces/libpq-oauth/oauth-utils.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres_fe.h"
+
+#include <signal.h>
+
+#include "oauth-utils.h"
+
+#ifndef USE_DYNAMIC_OAUTH
+#error oauth-utils.c is not supported in static builds
+#endif
+
+#ifdef LIBPQ_INT_H
+#error do not rely on libpq-int.h in dynamic builds of libpq-oauth
+#endif
+
+/*
+ * Function pointers set by libpq_oauth_init().
+ */
+
+pgthreadlock_t pg_g_threadlock;
+static libpq_gettext_func libpq_gettext_impl;
+
+conn_errorMessage_func conn_errorMessage;
+conn_oauth_client_id_func conn_oauth_client_id;
+conn_oauth_client_secret_func conn_oauth_client_secret;
+conn_oauth_discovery_uri_func conn_oauth_discovery_uri;
+conn_oauth_issuer_id_func conn_oauth_issuer_id;
+conn_oauth_scope_func conn_oauth_scope;
+conn_sasl_state_func conn_sasl_state;
+
+set_conn_altsock_func set_conn_altsock;
+set_conn_oauth_token_func set_conn_oauth_token;
+
+/*-
+ * Initializes libpq-oauth by setting necessary callbacks.
+ *
+ * The current implementation relies on the following private implementation
+ * details of libpq:
+ *
+ * - pg_g_threadlock: protects libcurl initialization if the underlying Curl
+ *   installation is not threadsafe
+ *
+ * - libpq_gettext: translates error messages using libpq's message domain
+ *
+ * The implementation also needs access to several members of the PGconn struct,
+ * which are not guaranteed to stay in place across minor versions. Accessors
+ * (named conn_*) and mutators (named set_conn_*) are injected here.
+ */
+void
+libpq_oauth_init(pgthreadlock_t threadlock_impl,
+				 libpq_gettext_func gettext_impl,
+				 conn_errorMessage_func errmsg_impl,
+				 conn_oauth_client_id_func clientid_impl,
+				 conn_oauth_client_secret_func clientsecret_impl,
+				 conn_oauth_discovery_uri_func discoveryuri_impl,
+				 conn_oauth_issuer_id_func issuerid_impl,
+				 conn_oauth_scope_func scope_impl,
+				 conn_sasl_state_func saslstate_impl,
+				 set_conn_altsock_func setaltsock_impl,
+				 set_conn_oauth_token_func settoken_impl)
+{
+	pg_g_threadlock = threadlock_impl;
+	libpq_gettext_impl = gettext_impl;
+	conn_errorMessage = errmsg_impl;
+	conn_oauth_client_id = clientid_impl;
+	conn_oauth_client_secret = clientsecret_impl;
+	conn_oauth_discovery_uri = discoveryuri_impl;
+	conn_oauth_issuer_id = issuerid_impl;
+	conn_oauth_scope = scope_impl;
+	conn_sasl_state = saslstate_impl;
+	set_conn_altsock = setaltsock_impl;
+	set_conn_oauth_token = settoken_impl;
+}
+
+/*
+ * Append a formatted string to the error message buffer of the given
+ * connection, after translating it.  This is a copy of libpq's internal API.
+ */
+void
+libpq_append_conn_error(PGconn *conn, const char *fmt,...)
+{
+	int			save_errno = errno;
+	bool		done;
+	va_list		args;
+	PQExpBuffer errorMessage = conn_errorMessage(conn);
+
+	Assert(fmt[strlen(fmt) - 1] != '\n');
+
+	if (PQExpBufferBroken(errorMessage))
+		return;					/* already failed */
+
+	/* Loop in case we have to retry after enlarging the buffer. */
+	do
+	{
+		errno = save_errno;
+		va_start(args, fmt);
+		done = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);
+		va_end(args);
+	} while (!done);
+
+	appendPQExpBufferChar(errorMessage, '\n');
+}
+
+#ifdef ENABLE_NLS
+
+/*
+ * A shim that defers to the actual libpq_gettext().
+ */
+char *
+libpq_gettext(const char *msgid)
+{
+	if (!libpq_gettext_impl)
+	{
+		/*
+		 * Possible if the libpq build didn't enable NLS but the libpq-oauth
+		 * build did. That's an odd mismatch, but we can handle it.
+		 *
+		 * Note that callers of libpq_gettext() have to treat the return value
+		 * as if it were const, because builds without NLS simply pass through
+		 * their argument.
+		 */
+		return unconstify(char *, msgid);
+	}
+
+	return libpq_gettext_impl(msgid);
+}
+
+#endif							/* ENABLE_NLS */
+
+/*
+ * Returns true if the PGOAUTHDEBUG=UNSAFE flag is set in the environment.
+ */
+bool
+oauth_unsafe_debugging_enabled(void)
+{
+	const char *env = getenv("PGOAUTHDEBUG");
+
+	return (env && strcmp(env, "UNSAFE") == 0);
+}
+
+/*
+ * Duplicate SOCK_ERRNO* definitions from libpq-int.h, for use by
+ * pq_block/reset_sigpipe().
+ */
+#ifdef WIN32
+#define SOCK_ERRNO (WSAGetLastError())
+#define SOCK_ERRNO_SET(e) WSASetLastError(e)
+#else
+#define SOCK_ERRNO errno
+#define SOCK_ERRNO_SET(e) (errno = (e))
+#endif
+
+/*
+ *	Block SIGPIPE for this thread. This is a copy of libpq's internal API.
+ */
+int
+pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending)
+{
+	sigset_t	sigpipe_sigset;
+	sigset_t	sigset;
+
+	sigemptyset(&sigpipe_sigset);
+	sigaddset(&sigpipe_sigset, SIGPIPE);
+
+	/* Block SIGPIPE and save previous mask for later reset */
+	SOCK_ERRNO_SET(pthread_sigmask(SIG_BLOCK, &sigpipe_sigset, osigset));
+	if (SOCK_ERRNO)
+		return -1;
+
+	/* We can have a pending SIGPIPE only if it was blocked before */
+	if (sigismember(osigset, SIGPIPE))
+	{
+		/* Is there a pending SIGPIPE? */
+		if (sigpending(&sigset) != 0)
+			return -1;
+
+		if (sigismember(&sigset, SIGPIPE))
+			*sigpipe_pending = true;
+		else
+			*sigpipe_pending = false;
+	}
+	else
+		*sigpipe_pending = false;
+
+	return 0;
+}
+
+/*
+ *	Discard any pending SIGPIPE and reset the signal mask. This is a copy of
+ *	libpq's internal API.
+ */
+void
+pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe)
+{
+	int			save_errno = SOCK_ERRNO;
+	int			signo;
+	sigset_t	sigset;
+
+	/* Clear SIGPIPE only if none was pending */
+	if (got_epipe && !sigpipe_pending)
+	{
+		if (sigpending(&sigset) == 0 &&
+			sigismember(&sigset, SIGPIPE))
+		{
+			sigset_t	sigpipe_sigset;
+
+			sigemptyset(&sigpipe_sigset);
+			sigaddset(&sigpipe_sigset, SIGPIPE);
+
+			sigwait(&sigpipe_sigset, &signo);
+		}
+	}
+
+	/* Restore saved block mask */
+	pthread_sigmask(SIG_SETMASK, osigset, NULL);
+
+	SOCK_ERRNO_SET(save_errno);
+}
diff --git a/src/interfaces/libpq-oauth/oauth-utils.h b/src/interfaces/libpq-oauth/oauth-utils.h
new file mode 100644
index 00000000000..f4ffefef208
--- /dev/null
+++ b/src/interfaces/libpq-oauth/oauth-utils.h
@@ -0,0 +1,94 @@
+/*-------------------------------------------------------------------------
+ *
+ * oauth-utils.h
+ *
+ *	  Definitions providing missing libpq internal APIs
+ *
+ * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * src/interfaces/libpq-oauth/oauth-utils.h
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#ifndef OAUTH_UTILS_H
+#define OAUTH_UTILS_H
+
+#include "fe-auth-oauth.h"
+#include "libpq-fe.h"
+#include "pqexpbuffer.h"
+
+/*
+ * A bank of callbacks to safely access members of PGconn, which are all passed
+ * to libpq_oauth_init() by libpq.
+ *
+ * Keep these aligned with the definitions in fe-auth-oauth.c as well as the
+ * static declarations in oauth-curl.c.
+ */
+#define DECLARE_GETTER(TYPE, MEMBER) \
+	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
+	extern conn_ ## MEMBER ## _func conn_ ## MEMBER;
+
+#define DECLARE_SETTER(TYPE, MEMBER) \
+	typedef void (*set_conn_ ## MEMBER ## _func) (PGconn *conn, TYPE val); \
+	extern set_conn_ ## MEMBER ## _func set_conn_ ## MEMBER;
+
+DECLARE_GETTER(PQExpBuffer, errorMessage);
+DECLARE_GETTER(char *, oauth_client_id);
+DECLARE_GETTER(char *, oauth_client_secret);
+DECLARE_GETTER(char *, oauth_discovery_uri);
+DECLARE_GETTER(char *, oauth_issuer_id);
+DECLARE_GETTER(char *, oauth_scope);
+DECLARE_GETTER(fe_oauth_state *, sasl_state);
+
+DECLARE_SETTER(pgsocket, altsock);
+DECLARE_SETTER(char *, oauth_token);
+
+#undef DECLARE_GETTER
+#undef DECLARE_SETTER
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+
+/* Initializes libpq-oauth. */
+extern PGDLLEXPORT void libpq_oauth_init(pgthreadlock_t threadlock,
+										 libpq_gettext_func gettext_impl,
+										 conn_errorMessage_func errmsg_impl,
+										 conn_oauth_client_id_func clientid_impl,
+										 conn_oauth_client_secret_func clientsecret_impl,
+										 conn_oauth_discovery_uri_func discoveryuri_impl,
+										 conn_oauth_issuer_id_func issuerid_impl,
+										 conn_oauth_scope_func scope_impl,
+										 conn_sasl_state_func saslstate_impl,
+										 set_conn_altsock_func setaltsock_impl,
+										 set_conn_oauth_token_func settoken_impl);
+
+/*
+ * Duplicated APIs, copied from libpq (primarily libpq-int.h, which we cannot
+ * depend on here).
+ */
+
+typedef enum
+{
+	PG_BOOL_UNKNOWN = 0,		/* Currently unknown */
+	PG_BOOL_YES,				/* Yes (true) */
+	PG_BOOL_NO					/* No (false) */
+} PGTernaryBool;
+
+extern void libpq_append_conn_error(PGconn *conn, const char *fmt,...) pg_attribute_printf(2, 3);
+extern bool oauth_unsafe_debugging_enabled(void);
+extern int	pq_block_sigpipe(sigset_t *osigset, bool *sigpipe_pending);
+extern void pq_reset_sigpipe(sigset_t *osigset, bool sigpipe_pending, bool got_epipe);
+
+#ifdef ENABLE_NLS
+extern char *libpq_gettext(const char *msgid) pg_attribute_format_arg(1);
+#else
+#define libpq_gettext(x) (x)
+#endif
+
+extern pgthreadlock_t pg_g_threadlock;
+
+#define pglock_thread()		pg_g_threadlock(true)
+#define pgunlock_thread()	pg_g_threadlock(false)
+
+#endif							/* OAUTH_UTILS_H */
diff --git a/src/interfaces/libpq/Makefile b/src/interfaces/libpq/Makefile
index 90b0b65db6f..c6fe5fec7f6 100644
--- a/src/interfaces/libpq/Makefile
+++ b/src/interfaces/libpq/Makefile
@@ -31,7 +31,6 @@ endif
 
 OBJS = \
 	$(WIN32RES) \
-	fe-auth-oauth.o \
 	fe-auth-scram.o \
 	fe-cancel.o \
 	fe-connect.o \
@@ -64,9 +63,11 @@ OBJS += \
 	fe-secure-gssapi.o
 endif
 
-ifeq ($(with_libcurl),yes)
-OBJS += fe-auth-oauth-curl.o
-endif
+# The OAuth implementation differs depending on the type of library being built.
+OBJS_STATIC = fe-auth-oauth.o
+
+fe-auth-oauth_shlib.o: override CPPFLAGS_SHLIB += -DUSE_DYNAMIC_OAUTH
+OBJS_SHLIB = fe-auth-oauth_shlib.o
 
 ifeq ($(PORTNAME), cygwin)
 override shlib = cyg$(NAME)$(DLSUFFIX)
@@ -86,7 +87,7 @@ endif
 # that are built correctly for use in a shlib.
 SHLIB_LINK_INTERNAL = -lpgcommon_shlib -lpgport_shlib
 ifneq ($(PORTNAME), win32)
-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lcurl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
 else
 SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi32 -lssl -lsocket -lnsl -lresolv -lintl -lm $(PTHREAD_LIBS), $(LIBS)) $(LDAP_LIBS_FE)
 endif
@@ -101,12 +102,26 @@ ifeq ($(with_ssl),openssl)
 PKG_CONFIG_REQUIRES_PRIVATE = libssl, libcrypto
 endif
 
+ifeq ($(with_libcurl),yes)
+# libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+# libpq-oauth needs libcurl. Put both into *.private.
+PKG_CONFIG_REQUIRES_PRIVATE += libcurl
+%.pc: override SHLIB_LINK_INTERNAL += -lpq-oauth
+endif
+
 all: all-lib libpq-refs-stamp
 
 # Shared library stuff
 include $(top_srcdir)/src/Makefile.shlib
 backend_src = $(top_srcdir)/src/backend
 
+# Add shlib-/stlib-specific objects.
+$(shlib): override OBJS += $(OBJS_SHLIB)
+$(shlib): $(OBJS_SHLIB)
+
+$(stlib): override OBJS += $(OBJS_STATIC)
+$(stlib): $(OBJS_STATIC)
+
 # Check for functions that libpq must not call, currently just exit().
 # (Ideally we'd reject abort() too, but there are various scenarios where
 # build toolchains insert abort() calls, e.g. to implement assert().)
@@ -115,8 +130,6 @@ backend_src = $(top_srcdir)/src/backend
 # which seems to insert references to that even in pure C code. Excluding
 # __tsan_func_exit is necessary when using ThreadSanitizer data race detector
 # which use this function for instrumentation of function exit.
-# libcurl registers an exit handler in the memory debugging code when running
-# with LeakSanitizer.
 # Skip the test when profiling, as gcc may insert exit() calls for that.
 # Also skip the test on platforms where libpq infrastructure may be provided
 # by statically-linked libraries, as we can't expect them to honor this
@@ -124,7 +137,7 @@ backend_src = $(top_srcdir)/src/backend
 libpq-refs-stamp: $(shlib)
 ifneq ($(enable_coverage), yes)
 ifeq (,$(filter solaris,$(PORTNAME)))
-	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit -e _atexit | grep exit; then \
+	@if nm -A -u $< 2>/dev/null | grep -v -e __cxa_atexit -e __tsan_func_exit | grep exit; then \
 		echo 'libpq must not be calling any function which invokes exit'; exit 1; \
 	fi
 endif
@@ -138,6 +151,11 @@ fe-misc.o: fe-misc.c $(top_builddir)/src/port/pg_config_paths.h
 $(top_builddir)/src/port/pg_config_paths.h:
 	$(MAKE) -C $(top_builddir)/src/port pg_config_paths.h
 
+# Use src/common/Makefile's trick for tracking dependencies of shlib-specific
+# objects.
+%_shlib.o: %.c %.o
+	$(CC) $(CFLAGS) $(CFLAGS_SL) $(CPPFLAGS) $(CPPFLAGS_SHLIB) -c $< -o $@
+
 install: all installdirs install-lib
 	$(INSTALL_DATA) $(srcdir)/libpq-fe.h '$(DESTDIR)$(includedir)'
 	$(INSTALL_DATA) $(srcdir)/libpq-events.h '$(DESTDIR)$(includedir)'
@@ -171,6 +189,6 @@ uninstall: uninstall-lib
 clean distclean: clean-lib
 	$(MAKE) -C test $@
 	rm -rf tmp_check
-	rm -f $(OBJS) pthread.h libpq-refs-stamp
+	rm -f $(OBJS) $(OBJS_SHLIB) $(OBJS_STATIC) pthread.h libpq-refs-stamp
 # Might be left over from a Win32 client-only build
 	rm -f pg_config_paths.h
diff --git a/src/interfaces/libpq/exports.txt b/src/interfaces/libpq/exports.txt
index d5143766858..0625cf39e9a 100644
--- a/src/interfaces/libpq/exports.txt
+++ b/src/interfaces/libpq/exports.txt
@@ -210,3 +210,4 @@ PQsetAuthDataHook         207
 PQgetAuthDataHook         208
 PQdefaultAuthDataHook     209
 PQfullProtocolVersion     210
+appendPQExpBufferVA       211
diff --git a/src/interfaces/libpq/fe-auth-oauth.c b/src/interfaces/libpq/fe-auth-oauth.c
index ab6a45e2aba..9fbff89a21d 100644
--- a/src/interfaces/libpq/fe-auth-oauth.c
+++ b/src/interfaces/libpq/fe-auth-oauth.c
@@ -15,6 +15,10 @@
 
 #include "postgres_fe.h"
 
+#ifdef USE_DYNAMIC_OAUTH
+#include <dlfcn.h>
+#endif
+
 #include "common/base64.h"
 #include "common/hmac.h"
 #include "common/jsonapi.h"
@@ -22,6 +26,7 @@
 #include "fe-auth.h"
 #include "fe-auth-oauth.h"
 #include "mb/pg_wchar.h"
+#include "pg_config_paths.h"
 
 /* The exported OAuth callback mechanism. */
 static void *oauth_init(PGconn *conn, const char *password,
@@ -721,6 +726,218 @@ cleanup_user_oauth_flow(PGconn *conn)
 	state->async_ctx = NULL;
 }
 
+/*-------------
+ * Builtin Flow
+ *
+ * There are three potential implementations of use_builtin_flow:
+ *
+ * 1) If the OAuth client is disabled at configuration time, return false.
+ *    Dependent clients must provide their own flow.
+ * 2) If the OAuth client is enabled and USE_DYNAMIC_OAUTH is defined, dlopen()
+ *    the libpq-oauth plugin and use its implementation.
+ * 3) Otherwise, use flow callbacks that are statically linked into the
+ *    executable.
+ */
+
+#if !defined(USE_LIBCURL)
+
+/*
+ * This configuration doesn't support the builtin flow.
+ */
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	return false;
+}
+
+#elif defined(USE_DYNAMIC_OAUTH)
+
+/*
+ * Use the builtin flow in the libpq-oauth plugin, which is loaded at runtime.
+ */
+
+typedef char *(*libpq_gettext_func) (const char *msgid);
+
+/*
+ * Define accessor/mutator shims to inject into libpq-oauth, so that it doesn't
+ * depend on the offsets within PGconn. (These have changed during minor version
+ * updates in the past.)
+ */
+
+#define DEFINE_GETTER(TYPE, MEMBER) \
+	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
+	static TYPE conn_ ## MEMBER(PGconn *conn) { return conn->MEMBER; }
+
+/* Like DEFINE_GETTER, but returns a pointer to the member. */
+#define DEFINE_GETTER_P(TYPE, MEMBER) \
+	typedef TYPE (*conn_ ## MEMBER ## _func) (PGconn *conn); \
+	static TYPE conn_ ## MEMBER(PGconn *conn) { return &conn->MEMBER; }
+
+#define DEFINE_SETTER(TYPE, MEMBER) \
+	typedef void (*set_conn_ ## MEMBER ## _func) (PGconn *conn, TYPE val); \
+	static void set_conn_ ## MEMBER(PGconn *conn, TYPE val) { conn->MEMBER = val; }
+
+DEFINE_GETTER_P(PQExpBuffer, errorMessage);
+DEFINE_GETTER(char *, oauth_client_id);
+DEFINE_GETTER(char *, oauth_client_secret);
+DEFINE_GETTER(char *, oauth_discovery_uri);
+DEFINE_GETTER(char *, oauth_issuer_id);
+DEFINE_GETTER(char *, oauth_scope);
+DEFINE_GETTER(fe_oauth_state *, sasl_state);
+
+DEFINE_SETTER(pgsocket, altsock);
+DEFINE_SETTER(char *, oauth_token);
+
+/*
+ * Loads the libpq-oauth plugin via dlopen(), initializes it, and plugs its
+ * callbacks into the connection's async auth handlers.
+ *
+ * Failure to load here results in a relatively quiet connection error, to
+ * handle the use case where the build supports loading a flow but a user does
+ * not want to install it. Troubleshooting of linker/loader failures can be done
+ * via PGOAUTHDEBUG.
+ */
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	static bool initialized = false;
+	static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
+	int			lockerr;
+
+	void		(*init) (pgthreadlock_t threadlock,
+						 libpq_gettext_func gettext_impl,
+						 conn_errorMessage_func errmsg_impl,
+						 conn_oauth_client_id_func clientid_impl,
+						 conn_oauth_client_secret_func clientsecret_impl,
+						 conn_oauth_discovery_uri_func discoveryuri_impl,
+						 conn_oauth_issuer_id_func issuerid_impl,
+						 conn_oauth_scope_func scope_impl,
+						 conn_sasl_state_func saslstate_impl,
+						 set_conn_altsock_func setaltsock_impl,
+						 set_conn_oauth_token_func settoken_impl);
+	PostgresPollingStatusType (*flow) (PGconn *conn);
+	void		(*cleanup) (PGconn *conn);
+
+	/*
+	 * On macOS only, load the module using its absolute install path; the
+	 * standard search behavior is not very helpful for this use case. Unlike
+	 * on other platforms, DYLD_LIBRARY_PATH is used as a fallback even with
+	 * absolute paths (modulo SIP effects), so tests can continue to work.
+	 *
+	 * On the other platforms, load the module using only the basename, to
+	 * rely on the runtime linker's standard search behavior.
+	 */
+	const char *const module_name =
+#if defined(__darwin__)
+		LIBDIR "/libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
+#else
+		"libpq-oauth-" PG_MAJORVERSION DLSUFFIX;
+#endif
+
+	state->builtin_flow = dlopen(module_name, RTLD_NOW | RTLD_LOCAL);
+	if (!state->builtin_flow)
+	{
+		/*
+		 * For end users, this probably isn't an error condition, it just
+		 * means the flow isn't installed. Developers and package maintainers
+		 * may want to debug this via the PGOAUTHDEBUG envvar, though.
+		 *
+		 * Note that POSIX dlerror() isn't guaranteed to be threadsafe.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlopen for libpq-oauth: %s\n", dlerror());
+
+		return false;
+	}
+
+	if ((init = dlsym(state->builtin_flow, "libpq_oauth_init")) == NULL
+		|| (flow = dlsym(state->builtin_flow, "pg_fe_run_oauth_flow")) == NULL
+		|| (cleanup = dlsym(state->builtin_flow, "pg_fe_cleanup_oauth_flow")) == NULL)
+	{
+		/*
+		 * This is more of an error condition than the one above, but due to
+		 * the dlerror() threadsafety issue, lock it behind PGOAUTHDEBUG too.
+		 */
+		if (oauth_unsafe_debugging_enabled())
+			fprintf(stderr, "failed dlsym for libpq-oauth: %s\n", dlerror());
+
+		dlclose(state->builtin_flow);
+		return false;
+	}
+
+	/*
+	 * Past this point, we do not unload the module. It stays in the process
+	 * permanently.
+	 */
+
+	/*
+	 * We need to inject necessary function pointers into the module. This
+	 * only needs to be done once -- even if the pointers are constant,
+	 * assigning them while another thread is executing the flows feels like
+	 * tempting fate.
+	 */
+	if ((lockerr = pthread_mutex_lock(&init_mutex)) != 0)
+	{
+		/* Should not happen... but don't continue if it does. */
+		Assert(false);
+
+		libpq_append_conn_error(conn, "failed to lock mutex (%d)", lockerr);
+		return false;
+	}
+
+	if (!initialized)
+	{
+		init(pg_g_threadlock,
+#ifdef ENABLE_NLS
+			 libpq_gettext,
+#else
+			 NULL,
+#endif
+			 conn_errorMessage,
+			 conn_oauth_client_id,
+			 conn_oauth_client_secret,
+			 conn_oauth_discovery_uri,
+			 conn_oauth_issuer_id,
+			 conn_oauth_scope,
+			 conn_sasl_state,
+			 set_conn_altsock,
+			 set_conn_oauth_token);
+
+		initialized = true;
+	}
+
+	pthread_mutex_unlock(&init_mutex);
+
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = flow;
+	conn->cleanup_async_auth = cleanup;
+
+	return true;
+}
+
+#else
+
+/*
+ * Use the builtin flow in libpq-oauth.a (see libpq-oauth/oauth-curl.h).
+ */
+
+extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
+extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
+
+bool
+use_builtin_flow(PGconn *conn, fe_oauth_state *state)
+{
+	/* Set our asynchronous callbacks. */
+	conn->async_auth = pg_fe_run_oauth_flow;
+	conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
+
+	return true;
+}
+
+#endif							/* USE_LIBCURL */
+
+
 /*
  * Chooses an OAuth client flow for the connection, which will retrieve a Bearer
  * token for presentation to the server.
@@ -792,18 +1009,10 @@ setup_token_request(PGconn *conn, fe_oauth_state *state)
 		libpq_append_conn_error(conn, "user-defined OAuth flow failed");
 		goto fail;
 	}
-	else
+	else if (!use_builtin_flow(conn, state))
 	{
-#if USE_LIBCURL
-		/* Hand off to our built-in OAuth flow. */
-		conn->async_auth = pg_fe_run_oauth_flow;
-		conn->cleanup_async_auth = pg_fe_cleanup_oauth_flow;
-
-#else
-		libpq_append_conn_error(conn, "no custom OAuth flows are available, and libpq was not built with libcurl support");
+		libpq_append_conn_error(conn, "no OAuth flows are available (try installing the libpq-oauth package)");
 		goto fail;
-
-#endif
 	}
 
 	return true;
diff --git a/src/interfaces/libpq/fe-auth-oauth.h b/src/interfaces/libpq/fe-auth-oauth.h
index 3f1a7503a01..0d59e91605b 100644
--- a/src/interfaces/libpq/fe-auth-oauth.h
+++ b/src/interfaces/libpq/fe-auth-oauth.h
@@ -15,8 +15,8 @@
 #ifndef FE_AUTH_OAUTH_H
 #define FE_AUTH_OAUTH_H
 
+#include "fe-auth-sasl.h"
 #include "libpq-fe.h"
-#include "libpq-int.h"
 
 
 enum fe_oauth_step
@@ -27,18 +27,24 @@ enum fe_oauth_step
 	FE_OAUTH_SERVER_ERROR,
 };
 
+/*
+ * This struct is exported to the libpq-oauth module. If changes are needed
+ * during backports to stable branches, please keep ABI compatibility (no
+ * changes to existing members, add new members at the end, etc.).
+ */
 typedef struct
 {
 	enum fe_oauth_step step;
 
 	PGconn	   *conn;
 	void	   *async_ctx;
+
+	void	   *builtin_flow;
 } fe_oauth_state;
 
-extern PostgresPollingStatusType pg_fe_run_oauth_flow(PGconn *conn);
-extern void pg_fe_cleanup_oauth_flow(PGconn *conn);
 extern void pqClearOAuthToken(PGconn *conn);
 extern bool oauth_unsafe_debugging_enabled(void);
+extern bool use_builtin_flow(PGconn *conn, fe_oauth_state *state);
 
 /* Mechanisms in fe-auth-oauth.c */
 extern const pg_fe_sasl_mech pg_oauth_mech;
diff --git a/src/interfaces/libpq/meson.build b/src/interfaces/libpq/meson.build
index 292fecf3320..a74e885b169 100644
--- a/src/interfaces/libpq/meson.build
+++ b/src/interfaces/libpq/meson.build
@@ -38,10 +38,6 @@ if gssapi.found()
   )
 endif
 
-if libcurl.found()
-  libpq_sources += files('fe-auth-oauth-curl.c')
-endif
-
 export_file = custom_target('libpq.exports',
   kwargs: gen_export_kwargs,
 )
@@ -50,6 +46,9 @@ export_file = custom_target('libpq.exports',
 libpq_inc = include_directories('.', '../../port')
 libpq_c_args = ['-DSO_MAJOR_VERSION=5']
 
+# The OAuth implementation differs depending on the type of library being built.
+libpq_so_c_args = ['-DUSE_DYNAMIC_OAUTH']
+
 # Not using both_libraries() here as
 # 1) resource files should only be in the shared library
 # 2) we want the .pc file to include a dependency to {pgport,common}_static for
@@ -70,7 +69,7 @@ libpq_st = static_library('libpq',
 libpq_so = shared_library('libpq',
   libpq_sources + libpq_so_sources,
   include_directories: [libpq_inc, postgres_inc],
-  c_args: libpq_c_args,
+  c_args: libpq_c_args + libpq_so_c_args,
   c_pch: pch_postgres_fe_h,
   version: '5.' + pg_version_major.to_string(),
   soversion: host_system != 'windows' ? '5' : '',
@@ -86,12 +85,26 @@ libpq = declare_dependency(
   include_directories: [include_directories('.')]
 )
 
+private_deps = [
+  frontend_stlib_code,
+  libpq_deps,
+]
+
+if oauth_flow_supported
+  # libpq.so doesn't link against libcurl, but libpq.a needs libpq-oauth, and
+  # libpq-oauth needs libcurl. Put both into *.private.
+  private_deps += [
+    libpq_oauth_deps,
+    '-lpq-oauth',
+  ]
+endif
+
 pkgconfig.generate(
   name: 'libpq',
   description: 'PostgreSQL libpq library',
   url: pg_url,
   libraries: libpq,
-  libraries_private: [frontend_stlib_code, libpq_deps],
+  libraries_private: private_deps,
 )
 
 install_headers(
diff --git a/src/interfaces/libpq/nls.mk b/src/interfaces/libpq/nls.mk
index ae761265852..b87df277d93 100644
--- a/src/interfaces/libpq/nls.mk
+++ b/src/interfaces/libpq/nls.mk
@@ -13,15 +13,21 @@ GETTEXT_FILES    = fe-auth.c \
                    fe-secure-common.c \
                    fe-secure-gssapi.c \
                    fe-secure-openssl.c \
-                   win32.c
-GETTEXT_TRIGGERS = libpq_append_conn_error:2 \
+                   win32.c \
+                   ../libpq-oauth/oauth-curl.c \
+                   ../libpq-oauth/oauth-utils.c
+GETTEXT_TRIGGERS = actx_error:2 \
+                   libpq_append_conn_error:2 \
                    libpq_append_error:2 \
                    libpq_gettext \
                    libpq_ngettext:1,2 \
+                   oauth_parse_set_error:2 \
                    pqInternalNotice:2
-GETTEXT_FLAGS    = libpq_append_conn_error:2:c-format \
+GETTEXT_FLAGS    = actx_error:2:c-format \
+                   libpq_append_conn_error:2:c-format \
                    libpq_append_error:2:c-format \
                    libpq_gettext:1:pass-c-format \
                    libpq_ngettext:1:pass-c-format \
                    libpq_ngettext:2:pass-c-format \
+                   oauth_parse_set_error:2:c-format \
                    pqInternalNotice:2:c-format
diff --git a/src/makefiles/meson.build b/src/makefiles/meson.build
index 55da678ec27..91a8de1ee9b 100644
--- a/src/makefiles/meson.build
+++ b/src/makefiles/meson.build
@@ -203,6 +203,8 @@ pgxs_empty = [
   'LIBNUMA_CFLAGS', 'LIBNUMA_LIBS',
 
   'LIBURING_CFLAGS', 'LIBURING_LIBS',
+
+  'LIBCURL_CPPFLAGS', 'LIBCURL_LDFLAGS', 'LIBCURL_LDLIBS',
 ]
 
 if host_system == 'windows' and cc.get_argument_syntax() != 'msvc'
diff --git a/src/test/modules/oauth_validator/meson.build b/src/test/modules/oauth_validator/meson.build
index 36d1b26369f..e190f9cf15a 100644
--- a/src/test/modules/oauth_validator/meson.build
+++ b/src/test/modules/oauth_validator/meson.build
@@ -78,7 +78,7 @@ tests += {
     ],
     'env': {
       'PYTHON': python.path(),
-      'with_libcurl': libcurl.found() ? 'yes' : 'no',
+      'with_libcurl': oauth_flow_supported ? 'yes' : 'no',
       'with_python': 'yes',
     },
   },
diff --git a/src/test/modules/oauth_validator/t/002_client.pl b/src/test/modules/oauth_validator/t/002_client.pl
index 8dd502f41e1..21d4acc1926 100644
--- a/src/test/modules/oauth_validator/t/002_client.pl
+++ b/src/test/modules/oauth_validator/t/002_client.pl
@@ -110,7 +110,7 @@ if ($ENV{with_libcurl} ne 'yes')
 		"fails without custom hook installed",
 		flags => ["--no-hook"],
 		expected_stderr =>
-		  qr/no custom OAuth flows are available, and libpq was not built with libcurl support/
+		  qr/no OAuth flows are available \(try installing the libpq-oauth package\)/
 	);
 }
 
-- 
2.34.1

#375Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#374)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 30 Apr 2025, at 19:59, Jacob Champion <jacob.champion@enterprisedb.com> wrote:
On Wed, Apr 30, 2025 at 5:55 AM Daniel Gustafsson <daniel@yesql.se> wrote:

Nitpick, but it won't be .so everywhere. Would this be clearer if spelled out
with something like "do not rely on libpq-int.h when building libpq-oauth as
dynamic shared lib"?

I went with "do not rely on libpq-int.h in dynamic builds of
libpq-oauth", since devs are hopefully going to be the only people who
see it. I've also fixed up an errant #endif label right above it.

That's indeed better than my suggestion.

I'd ideally like to get a working split in for beta.

+Many

Barring
objections, I plan to get this pushed tomorrow so that the buildfarm
has time to highlight any corner cases well before the Saturday
freeze.

I'll try to kick the tyres a bit more as well.

--
Daniel Gustafsson

#376Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#375)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Apr 30, 2025 at 11:09 AM Daniel Gustafsson <daniel@yesql.se> wrote:

I'll try to kick the tyres a bit more as well.

Thanks! Alpine seems to be happy with the dlopen() arrangement. And
I've thrown some more Autoconf testing at Rocky, Mac, and Ubuntu.

So, committed. Thanks everyone for all the excellent feedback!
(Further feedback is still very welcome.)

--Jacob

#377Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#376)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, May 1, 2025 at 10:38 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I've thrown some more Autoconf testing at Rocky, Mac, and Ubuntu.

So, committed.

I forgot --enable-nls in my Mac testing, so indri complains about my
omission of -lintl... I'd incorrectly thought it was no longer needed
after all the gettext motion.

I'm running the attached fixup through CI now.

--Jacob

Attachments:

0001-oauth-Fix-Autoconf-build-on-macOS.patch (application/x-patch)
From 379c3fb40547391c649bb5c53668581ee017489e Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Thu, 1 May 2025 12:03:28 -0700
Subject: [PATCH] oauth: Fix Autoconf build on macOS

Oversight in b0635bfda. -lintl is necessary for gettext on Mac, which
libpq-oauth depends on via pgport/pgcommon. (I'd incorrectly removed
this change from an earlier version of the patch, where it was suggested
by Peter Eisentraut.)

Per buildfarm member indri.
---
 src/interfaces/libpq-oauth/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/interfaces/libpq-oauth/Makefile b/src/interfaces/libpq-oauth/Makefile
index 3e4b34142e0..270fc0cf2d9 100644
--- a/src/interfaces/libpq-oauth/Makefile
+++ b/src/interfaces/libpq-oauth/Makefile
@@ -47,7 +47,7 @@ $(stlib): override OBJS += $(OBJS_STATIC)
 $(stlib): $(OBJS_STATIC)
 
 SHLIB_LINK_INTERNAL = $(libpq_pgport_shlib)
-SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)
+SHLIB_LINK = $(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS) $(filter -lintl, $(LIBS))
 SHLIB_PREREQS = submake-libpq
 SHLIB_EXPORTS = exports.txt
 
-- 
2.34.1
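The fix above leans on GNU make's $(filter) function, which keeps only the words matching the given patterns; here it plucks -lintl out of the larger $(LIBS) list without dragging in the rest. A quick standalone demo (throwaway makefile path is arbitrary):

```shell
# $(filter pattern..., text) returns only the whitespace-separated words
# that match; everything else in LIBS is dropped.
cat > /tmp/filter-demo.mk <<'EOF'
.RECIPEPREFIX = >
LIBS = -lm -lintl -lssl
demo:
>echo $(filter -lintl, $(LIBS))
EOF
make -sf /tmp/filter-demo.mk demo   # prints: -lintl
```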

#378Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#377)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, May 1, 2025 at 12:24 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I'm running the attached fixup through CI now.

(Pushed, and indri is happy again.)

--Jacob

#379Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#358)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Apr 21, 2025 at 9:57 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

So to recap: I'm happy to add a Google compatibility mode, but I'd
like to gather some evidence that their device flow can actually
authorize tokens for third parties safely, before we commit to that.
Thoughts?

Hi Ivan, I know the thread has been deep in discussion around the
module split, but I was wondering if you'd had any thoughts on the
Google safety problem?

--Jacob

#380Nathan Bossart
nathandbossart@gmail.com
In reply to: Jacob Champion (#378)
Re: [PoC] Federated Authn/z with OAUTHBEARER

After commit b0635bf, I'm seeing the following meson build failures on
macOS:

In file included from ../postgresql/src/interfaces/libpq-oauth/oauth-curl.c:51:
../postgresql/src/interfaces/libpq/libpq-int.h:70:10: fatal error: 'openssl/ssl.h' file not found
   70 | #include <openssl/ssl.h>
      |          ^~~~~~~~~~~~~~~
1 error generated.

The following patch seems to resolve it. I'm curious if commit 4ea1254
might apply to meson, too, but FWIW I haven't noticed any related failures
on my machine.

diff --git a/meson.build b/meson.build
index 29d46c8ad01..19ad03042d3 100644
--- a/meson.build
+++ b/meson.build
@@ -3295,6 +3295,7 @@ libpq_deps += [

libpq_oauth_deps += [
libcurl,
+ ssl,
]

subdir('src/interfaces/libpq')

--
nathan

#381Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Nathan Bossart (#380)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 8:11 AM Nathan Bossart <nathandbossart@gmail.com> wrote:

After commit b0635bf, I'm seeing the following meson build failures on
macOS:

Thanks for the report, and sorry for the breakage.

In file included from ../postgresql/src/interfaces/libpq-oauth/oauth-curl.c:51:
../postgresql/src/interfaces/libpq/libpq-int.h:70:10: fatal error: 'openssl/ssl.h' file not found
70 | #include <openssl/ssl.h>
| ^~~~~~~~~~~~~~~
1 error generated.

Hm. My test setup here is Homebrew with -Dextra_include_dirs, which
may explain why it's not failing for me. Looks like Cirrus also has
-Dextra_include_dirs...

The following patch seems to resolve it. I'm curious if commit 4ea1254
might apply to meson, too, but FWIW I haven't noticed any related failures
on my machine.

Yeah, I wonder if libintl is being similarly "cheated" on the Meson side.

diff --git a/meson.build b/meson.build
index 29d46c8ad01..19ad03042d3 100644
--- a/meson.build
+++ b/meson.build
@@ -3295,6 +3295,7 @@ libpq_deps += [

libpq_oauth_deps += [
libcurl,
+ ssl,
]

Thanks! I think the include directory is the only thing needed for the
static library, not the full link dependency. But I'll try to build up
a new Meson setup, with minimal added settings, to see if I can
reproduce here. Can you share your Meson configuration?

--Jacob

#382Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#381)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 8:46 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Yeah, I wonder if libintl is being similarly "cheated" on the Meson side.

libintl is already coming in via frontend_stlib_code, so that's fine.
So now I'm wondering if any other static clients of libpq-int.h (if
there are any) need the ssl dependency too, for correctness, or if
it's just me.

But I'll try to build up
a new Meson setup, with minimal added settings, to see if I can
reproduce here. Can you share your Meson configuration?

(Never mind -- this was pretty easy to hit in a from-scratch configuration.)

--Jacob

#383Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#382)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 8:59 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

libintl is already coming in via frontend_stlib_code, so that's fine.
So now I'm wondering if any other static clients of libpq-int.h (if
there are any) need the ssl dependency too, for correctness, or if
it's just me.

Looks like it's just me. And using partial_dependency for the includes
seems like overkill, so I've kept the full ssl dependency object, but
moved it to the staticlib only, which is enough to solve the breakage
on my machine.

Nathan, if you get a chance, does the attached patch work for you?

Thanks!
--Jacob

Attachments:

0001-oauth-Correct-SSL-dependency-for-libpq-oauth.a.patch (application/octet-stream)
From 39fb3aa3df9633fca393212df9be2efc6c2f9fdc Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Fri, 2 May 2025 09:44:43 -0700
Subject: [PATCH] oauth: Correct SSL dependency for libpq-oauth.a

libpq-oauth.a includes libpq-int.h, which includes OpenSSL headers. The
Autoconf side picks up the necessary include directories via CPPFLAGS,
but Meson needs the dependency to be made explicit.

Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/aBTgjDfrdOZmaPgv%40nathan
---
 src/interfaces/libpq-oauth/meson.build | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/interfaces/libpq-oauth/meson.build b/src/interfaces/libpq-oauth/meson.build
index 9e7301a7f63..df064c59a40 100644
--- a/src/interfaces/libpq-oauth/meson.build
+++ b/src/interfaces/libpq-oauth/meson.build
@@ -25,7 +25,11 @@ libpq_oauth_st = static_library('libpq-oauth',
   libpq_oauth_sources,
   include_directories: [libpq_oauth_inc, postgres_inc],
   c_pch: pch_postgres_fe_h,
-  dependencies: [frontend_stlib_code, libpq_oauth_deps],
+  dependencies: [
+    frontend_stlib_code,
+    libpq_oauth_deps,
+    ssl, # libpq-int.h includes OpenSSL headers
+  ],
   kwargs: default_lib_args,
 )
 
-- 
2.34.1

#384Nathan Bossart
nathandbossart@gmail.com
In reply to: Jacob Champion (#383)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 02, 2025 at 10:05:26AM -0700, Jacob Champion wrote:

Nathan, if you get a chance, does the attached patch work for you?

Yup, thanks!

--
nathan

#385Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#383)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

Looks like it's just me. And using partial_dependency for the includes
seems like overkill, so I've kept the full ssl dependency object, but
moved it to the staticlib only, which is enough to solve the breakage
on my machine.
Nathan, if you get a chance, does the attached patch work for you?

FWIW, on my Mac a meson build from HEAD goes through fine, with or
without this patch. I'm getting openssl and libcurl from MacPorts
not Homebrew, but I don't know why that would make any difference.

regards, tom lane

#386Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#385)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

FWIW, on my Mac a meson build from HEAD goes through fine, with or
without this patch. I'm getting openssl and libcurl from MacPorts
not Homebrew, but I don't know why that would make any difference.

Do your <libintl.h> and <openssl/*.h> live in the same place? Mine do,
so I had to disable NLS to get this to break.

--Jacob

#387Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Nathan Bossart (#384)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 10:31 AM Nathan Bossart <nathandbossart@gmail.com> wrote:

Yup, thanks!

Great, thanks. I'll push it soon.

--Jacob

#388Nathan Bossart
nathandbossart@gmail.com
In reply to: Jacob Champion (#386)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 02, 2025 at 10:42:25AM -0700, Jacob Champion wrote:

On Fri, May 2, 2025 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

FWIW, on my Mac a meson build from HEAD goes through fine, with or
without this patch. I'm getting openssl and libcurl from MacPorts
not Homebrew, but I don't know why that would make any difference.

Do your <libintl.h> and <openssl/*.h> live in the same place? Mine do,
so I had to disable NLS to get this to break.

I enabled NLS and the problem disappeared for me, but that seems to be a
side effect of setting -Dextra_{include,lib}_dirs to point to my Homebrew
directories, which I needed to do to get NLS to work.

--
nathan

#389Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#386)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

On Fri, May 2, 2025 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

FWIW, on my Mac a meson build from HEAD goes through fine, with or
without this patch. I'm getting openssl and libcurl from MacPorts
not Homebrew, but I don't know why that would make any difference.

Do your <libintl.h> and <openssl/*.h> live in the same place? Mine do,
so I had to disable NLS to get this to break.

Yeah, they are both under /opt/local/include in a MacPorts setup.
But disabling NLS doesn't break it for me. I tried

meson setup build --auto-features=disabled -Dlibcurl=enabled

to make sure that /opt/local/include wasn't getting pulled in
some other way, and it still builds.

Apropos of that: our fine manual claims that option is spelled
--auto_features, but that fails for me. Is that a typo in our
manual, or do some meson versions accept the underscore?

regards, tom lane

#390Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#389)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 11:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Yeah, they are both under /opt/local/include in a MacPorts setup.
But disabling NLS doesn't break it for me. I tried

meson setup build --auto-features=disabled -Dlibcurl=enabled

to make sure that /opt/local/include wasn't getting pulled in
some other way, and it still builds.

Hm. If you clear out the build artifacts under
src/interfaces/libpq-oauth, and then build with

$ ninja -v src/interfaces/libpq-oauth/libpq-oauth.a

does that help surface anything interesting?

Apropos of that: our fine manual claims that option is spelled
--auto_features, but that fails for me. Is that a typo in our
manual, or do some meson versions accept the underscore?

--auto_features doesn't work for me either. That looks like an
accidental mashup of --auto-features and -Dauto_features.

--Jacob

#391Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#390)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

Hm. If you clear out the build artifacts under
src/interfaces/libpq-oauth, and then build with
$ ninja -v src/interfaces/libpq-oauth/libpq-oauth.a
does that help surface anything interesting?

$ rm -rf src/interfaces/libpq-oauth
$ ninja -v src/interfaces/libpq-oauth/libpq-oauth.a
[1/2] ccache cc -Isrc/interfaces/libpq-oauth/libpq-oauth.a.p -Isrc/interfaces/libpq-oauth -I../src/interfaces/libpq-oauth -Isrc/interfaces/libpq -I../src/interfaces/libpq -Isrc/port -I../src/port -Isrc/include -I../src/include -I/opt/local/include -I/opt/local/libexec/openssl3/include -fdiagnostics-color=always -Wall -Winvalid-pch -O2 -g -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.4.sdk -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wmissing-prototypes -Wpointer-arith -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -Wdeclaration-after-statement -Wmissing-variable-declarations -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -Wno-format-truncation -Wno-cast-function-type-strict -MD -MQ src/interfaces/libpq-oauth/libpq-oauth.a.p/oauth-curl.c.o -MF src/interfaces/libpq-oauth/libpq-oauth.a.p/oauth-curl.c.o.d -o src/interfaces/libpq-oauth/libpq-oauth.a.p/oauth-curl.c.o -c ../src/interfaces/libpq-oauth/oauth-curl.c
[2/2] rm -f src/interfaces/libpq-oauth/libpq-oauth.a && ar csr src/interfaces/libpq-oauth/libpq-oauth.a src/interfaces/libpq-oauth/libpq-oauth.a.p/oauth-curl.c.o && ranlib -c src/interfaces/libpq-oauth/libpq-oauth.a

So it's getting -I/opt/local/include and also
-I/opt/local/libexec/openssl3/include from somewhere,
which I guess must be libcurl's pkg-config data ... yup:

$ pkg-config --cflags libcurl
-I/opt/local/include -I/opt/local/libexec/openssl3/include -I/opt/local/include

I bet Homebrew's libcurl packaging doesn't do that.

regards, tom lane

#392Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#391)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 11:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

$ pkg-config --cflags libcurl
-I/opt/local/include -I/opt/local/libexec/openssl3/include -I/opt/local/include

I bet Homebrew's libcurl packaging doesn't do that.

Nope, Homebrew breaks them out into smaller pieces:

% PKG_CONFIG_PATH=/opt/homebrew/opt/curl/lib/pkgconfig pkg-config
--cflags libcurl
-I/opt/homebrew/Cellar/curl/8.13.0/include
-I/opt/homebrew/Cellar/brotli/1.1.0/include
-I/opt/homebrew/opt/zstd/include
-I/opt/homebrew/Cellar/libssh2/1.11.1/include
-I/opt/homebrew/Cellar/rtmpdump/2.4-20151223_3/include
-I/opt/homebrew/Cellar/openssl@3/3.5.0/include
-I/opt/homebrew/Cellar/libnghttp2/1.65.0/include

--Jacob

#393Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#392)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, May 2, 2025 at 11:56 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

-I/opt/homebrew/Cellar/openssl@3/3.5.0/include

Except it _is_ right there.

Oh, ha -- I'm not using Homebrew's Curl in this minimal build. Looks
like it's coming from the sysroot.

% ls -l /Library/Developer/CommandLineTools/SDKs/MacOSX15.2.sdk/usr/include/curl
total 208
-rw-r--r-- 1 root wheel 129052 Nov 9 22:54 curl.h
-rw-r--r-- 1 root wheel 3044 Nov 9 22:54 curlver.h
...

Well, that was fun.

--Jacob

#394Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#390)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

On Fri, May 2, 2025 at 11:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Apropos of that: our fine manual claims that option is spelled
--auto_features, but that fails for me. Is that a typo in our
manual, or do some meson versions accept the underscore?

--auto_features doesn't work for me either. That looks like an
accidental mashup of --auto-features and -Dauto_features.

Ah, I see somebody already complained of this [1], but apparently
we did nothing about it. I shall go fix it now.

regards, tom lane

[1]: /messages/by-id/172465652540.862882.17808523044292761256@wrigleys.postgresql.org

#395Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#376)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Re: Jacob Champion

So, committed. Thanks everyone for all the excellent feedback!

The package split between libpq5 and libpq-oauth in Debian has already
been accepted into the experimental branch.

Thanks,
Christoph

#396Wolfgang Walther
walther@technowledgy.de
In reply to: Jacob Champion (#383)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion:

libintl is already coming in via frontend_stlib_code, so that's fine.
So now I'm wondering if any other static clients of libpq-int.h (if
there are any) need the ssl dependency too, for correctness, or if
it's just me.

Looks like it's just me. And using partial_dependency for the includes
seems like overkill, so I've kept the full ssl dependency object, but
moved it to the staticlib only, which is enough to solve the breakage
on my machine.

Nathan, if you get a chance, does the attached patch work for you?

I couldn't reproduce the problem, so did not test the latest patch. But
I tested a lot of scenarios on nixpkgs with latest master (250a718a):

- aarch64 + x86_64 architectures, both Linux and MacOS

- Autoconf and Meson

- Various features enabled / disabled in different configurations (NLS,
OpenSSL, GSSAPI)

- And additionally some cross-compiling from x86_64 Linux to aarch64
Linux and x86_64 FreeBSD

Worked very well.

The only inconsistency I was able to find is the autoconf-generated
libpq.pc file, which has this:

Requires.private: libssl, libcrypto libcurl

Note the missing "," before libcurl.

It does *not* affect functionality, though:

pkg-config --print-requires-private libpq
libssl
libcrypto
libcurl
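pkg-config's tolerance of the missing comma can be demonstrated with a self-contained stub; the module names below are hypothetical stand-ins, not the real libssl/libcrypto/libcurl packages:

```shell
# pkg-config accepts both commas and whitespace as separators in
# Requires.private, which is why the missing comma does not affect
# functionality. All .pc files here are hypothetical stubs.
mkdir -p /tmp/pcdemo
for m in demossl democrypto democurl; do
    printf 'Name: %s\nDescription: stub\nVersion: 1.0\n' "$m" \
        > "/tmp/pcdemo/$m.pc"
done
cat > /tmp/pcdemo/demo.pc <<'EOF'
Name: demo
Description: separator demo
Version: 1.0
Requires.private: demossl, democrypto democurl
EOF
PKG_CONFIG_PATH=/tmp/pcdemo pkg-config --print-requires-private demo
```

All three modules are listed despite the inconsistent separators, matching the behavior reported above.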

The meson-generated libpq.pc looks like this:

Requires.private: openssl, krb5-gssapi, libcurl >= 7.61.0

I was only able to test the latter in an end-to-end fully static build
of a downstream dependency - works great. The final executable has all
the expected oauth strings in it.

Best,

Wolfgang

#397Devrim Gündüz
devrim@gunduz.org
In reply to: Christoph Berg (#395)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On Sat, 2025-05-03 at 16:54 +0200, Christoph Berg wrote:

The package split between libpq5 and libpq-oauth in Debian has already
been accepted into the experimental branch.

RPMs will ship postgresql18-libs and postgresql18-libs-oauth. The latter
depends on the former for sure.

Regards,

--
Devrim Gündüz
Open Source Solution Architect, PostgreSQL Major Contributor
BlueSky: @devrim.gunduz.org , @gunduz.org

#398Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Wolfgang Walther (#396)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Sun, May 4, 2025 at 5:58 AM Wolfgang Walther <walther@technowledgy.de> wrote:

The only inconsistency I was able to find is the autoconf-generated
libpq.pc file, which has this:

Requires.private: libssl, libcrypto libcurl

Oh, I see what I did. Will fix, thanks.

I was only able to test the latter in an end-to-end fully static build
of a downstream dependency - works great. The final executable has all
the expected oauth strings in it.

Thank you so much for all the detailed testing!

--Jacob

#399Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#243)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Mar 6, 2025 at 12:57 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

3) There is a related performance bug on other platforms. If a Curl
timeout happens partway through a request (so libcurl won't clear it),
the timer-expired event will stay set and CPU will be burned to spin
pointlessly on drive_request(). This is much easier to notice after
taking Happy Eyeballs out of the picture. It doesn't cause logical
failures -- Curl basically discards the unnecessary calls -- but it's
definitely unintended.

...

I plan to defer working on Problem 3, which should just be a
performance bug, until the tests are green again. And I would like to
eventually add some stronger unit tests for the timer behavior, to
catch other potential OS-specific problems in the future.

To follow up on this: I had intended to send a patch fixing the timer
bug this week, but after fixing it, the performance problem did not
disappear. Turns out: other file descriptors can get stuck open on
BSD, depending on how complicated Curl wants to make the order of
operations, and the existing tests aren't always enough to expose it.
(It also depends on the Curl version installed.)

I will split this off into its own thread soon, because this
megathread is just too big, but I wanted to make a note here and file
an open item. As part of that, I have a set of more rigorous unit
tests for the libcurl-libpq interaction that I'm working on, since the
external view of "the flow worked/didn't work" is not enough to
indicate internal health.

--Jacob

#400Ivan Kush
ivan.kush@tantorlabs.com
In reply to: Jacob Champion (#399)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hello!

This patch fixes CPPFLAGS, LDFLAGS, and LIBS when checking for AsyncDNS
libcurl support in configure.

Custom parameters and paths to libcurl were mistakenly excluded from
CPPFLAGS, LDFLAGS, and LIBS, even though the AsyncDNS check itself passed.

For example, the command `pkg-config --libs libcurl` gives
`-L/usr/local/lib -lcurl`, but LDFLAGS will not contain `-L/usr/local/lib`.

This patch fixes that behaviour.
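The split Ivan describes can be illustrated with a small sketch: pkg-config's `--libs` output is separated into linker search paths (LDFLAGS-style `-L` options) and libraries (LIBS-style `-l` options). The `$libs` value below is a hypothetical stand-in for the output of `pkg-config --libs libcurl`:

```shell
# Stand-in for: libs=$(pkg-config --libs libcurl)
libs="-L/usr/local/lib -lcurl"
ldflags=
ldlibs=
for flag in $libs; do
    case $flag in
        -L*) ldflags="$ldflags $flag" ;;  # search paths belong in LDFLAGS
        -l*) ldlibs="$ldlibs $flag" ;;    # libraries belong in LIBS
    esac
done
echo "LIBCURL_LDFLAGS:$ldflags"
echo "LIBCURL_LDLIBS:$ldlibs"
```

If the `-L` half of that split never makes it into the flags used by a check, configure probes the system libcurl instead of the custom one.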

Test case:

I've tested a custom Postgres build in an old Debian-based Linux distro.
This distro contains an old libcurl (< 7.61, package libcurl3) that was
compiled with CURL_OPENSSL_3 symbols. So I installed a newer libcurl
(package libcurl4-openssl-dev), which contains CURL_OPENSSL_4 symbols, and
compiled against my libcurl version > 7.61.

After compilation, during testing, some Postgres shared libraries and
binaries that were linked with libcurl showed the error "version
CURL_OPENSSL_3 not found (required by …/libcurl.so.4)"

--
Best wishes,
Ivan Kush
Tantor Labs LLC

Attachments:

0001_oauth_ Fix_CPPFLAGS,_LDFLAGS,_LIBS_when_checking_AsyncDNS_libcurl_support.patchtext/x-patch; charset=UTF-8; name="0001_oauth_ Fix_CPPFLAGS,_LDFLAGS,_LIBS_when_checking_AsyncDNS_libcurl_support.patch"Download
From 8a24c24f85c40e2aa0c40afc8f9cd7a19afa66c3 Mon Sep 17 00:00:00 2001
From: Ivan Kush <ivan.kush@tantorlabs.com>
Date: Fri, 20 Jun 2025 12:16:47 +0300
Subject: [PATCH] oauth: Fix CPPFLAGS, LDFLAGS, LIBS when checking AsyncDNS libcurl support

Custom parameters and paths to libcurl were mistakenly excluded from CPPFLAGS,
LDFLAGS, and LIBS.
For example, the command `pkg-config --libs libcurl` gives
`-L/usr/local/lib -lcurl`. LDFLAGS will not contain `-L/usr/local/lib`

This patch fixes this.

Author: Ivan Kush <ivan.kush@tantorlabs.com>
Author: Lev Nikolaev <lev.nikolaev@tantorlabs.com>
Reviewed-by:
Discussion:
---
 config/programs.m4 | 7 +++----
 configure          | 8 +++-----
 2 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 0ad1e58b48d..2556e469323 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -348,9 +348,8 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 *** The installed version of libcurl does not support asynchronous DNS
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq.])
+    CPPFLAGS=$pgac_save_CPPFLAGS
+    LDFLAGS=$pgac_save_LDFLAGS
+    LIBS=$pgac_save_LIBS
   fi
-
-  CPPFLAGS=$pgac_save_CPPFLAGS
-  LDFLAGS=$pgac_save_LDFLAGS
-  LIBS=$pgac_save_LIBS
 ])# PGAC_CHECK_LIBCURL
diff --git a/configure b/configure
index 4f15347cc95..46a011d1d1b 100755
--- a/configure
+++ b/configure
@@ -12883,12 +12883,10 @@ $as_echo "$pgac_cv__libcurl_async_dns" >&6; }
 *** The installed version of libcurl does not support asynchronous DNS
 *** lookups. Rebuild libcurl with the AsynchDNS feature enabled in order
 *** to use it with libpq." "$LINENO" 5
+    CPPFLAGS=$pgac_save_CPPFLAGS
+    LDFLAGS=$pgac_save_LDFLAGS
+    LIBS=$pgac_save_LIBS
   fi
-
-  CPPFLAGS=$pgac_save_CPPFLAGS
-  LDFLAGS=$pgac_save_LDFLAGS
-  LIBS=$pgac_save_LIBS
-
 fi

 if test "$with_gssapi" = yes ; then
--
2.34.1

#401Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Ivan Kush (#400)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jun 20, 2025 at 3:08 AM Ivan Kush <ivan.kush@tantorlabs.com> wrote:

Hello!

This patch fixes CPPFLAGS, LDFLAGS, LIBS when checking AsyncDNS libcurl
support in configure

Hi Ivan, thanks for the report! Your patch puts new logic directly
after an AC_MSG_ERROR() call, so any effect has to come from the fact
that we're no longer restoring the old compiler and linker flags.
That's not what we want -- Curl needs to be isolated from the rest of
the build.
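The isolation Jacob refers to is the usual Autoconf save/widen/restore idiom; a minimal shell sketch, with hypothetical flag values standing in for what configure computes:

```shell
# Hypothetical values; in configure these come from pkg-config.
CPPFLAGS="-I/usr/include"
LIBCURL_CPPFLAGS="-I/opt/my_libcurl/include"

# Save the global flags, widen them for the libcurl probes only...
pgac_save_CPPFLAGS=$CPPFLAGS
CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
echo "during probes: $CPPFLAGS"

# ...then restore, so the rest of the build stays isolated from Curl.
CPPFLAGS=$pgac_save_CPPFLAGS
echo "after probes: $CPPFLAGS"
```

Skipping the restore on the error path, as the patch above does, means every later test inherits the libcurl flags.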

Let's focus on the error you're seeing:

After compilation, during testing, some Postgres shared libraries and
binaries that were linked with libcurl showed the error "version
CURL_OPENSSL_3 not found (required by …/libcurl.so.4)"

What's your configure line? You need to make sure that your custom
libcurl is used at configure-time, compile-time, and run-time.

And which binaries are complaining? The only thing that should ever be
linked against libcurl is libpq-oauth-18.so.

Thanks,
--Jacob

#402Andres Freund
andres@anarazel.de
In reply to: Jacob Champion (#387)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-05-02 10:42:34 -0700, Jacob Champion wrote:

Great, thanks. I'll push it soon.

I just noticed that I think the dependencies on the meson build aren't quite
sufficient:

andres@awork3:/srv/dev/build/postgres/m-dev-assert$ ninja install-quiet
[2205/2205 1 100%] Generating install-quiet with a custom command
FAILED: install-quiet
/usr/bin/python3 /home/andres/src/meson/meson.py install --quiet --no-rebuild

ERROR: File 'src/interfaces/libpq-oauth/libpq-oauth.a' could not be found
ninja: build stopped: subcommand failed.

Probably just needs to be added to the installed_targets list.

Greetings,

Andres Freund

#403Daniel Gustafsson
daniel@yesql.se
In reply to: Andres Freund (#402)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 30 Jun 2025, at 18:58, Andres Freund <andres@anarazel.de> wrote:

Hi,

On 2025-05-02 10:42:34 -0700, Jacob Champion wrote:

Great, thanks. I'll push it soon.

I just noticed that I think the dependencies on the meson build aren't quite
sufficient:

andres@awork3:/srv/dev/build/postgres/m-dev-assert$ ninja install-quiet
[2205/2205 1 100%] Generating install-quiet with a custom command
FAILED: install-quiet
/usr/bin/python3 /home/andres/src/meson/meson.py install --quiet --no-rebuild

ERROR: File 'src/interfaces/libpq-oauth/libpq-oauth.a' could not be found
ninja: build stopped: subcommand failed.

Probably just needs to be added to the installed_targets list.

Thanks for the report, I'll take a look today to get it fixed.

--
Daniel Gustafsson

#404Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#403)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Mon, Jun 30, 2025 at 10:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 30 Jun 2025, at 18:58, Andres Freund <andres@anarazel.de> wrote:
Probably just needs to be added to the installed_targets list.

Thanks for the report, I'll take a look today to get it fixed.

Thanks both!

Looking at the installed_targets stuff, though... why do we use `meson
install --no-rebuild` in combination with `depends:
installed_targets`? Can't we just use Meson's dependency tracking
during installation, and avoid this hazard?

--Jacob

#405Daniel Gustafsson
daniel@yesql.se
In reply to: Jacob Champion (#404)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 30 Jun 2025, at 20:33, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Jun 30, 2025 at 10:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 30 Jun 2025, at 18:58, Andres Freund <andres@anarazel.de> wrote:
Probably just needs to be added to the installed_targets list.

Thanks for the report, I'll take a look today to get it fixed.

Thanks both!

Looking at the installed_targets stuff, though... why do we use `meson
install --no-rebuild` in combination with `depends:
installed_targets`? Can't we just use Meson's dependency tracking
during installation, and avoid this hazard?

I suspect it is because without --no-rebuild the quiet target isn't entirely
quiet. Still, I was unable to make something that works in all build
combinations while keeping --no-rebuild (which isn't to say it's
impossible). Is --no-rebuild just there to reduce output noise, or is
there another reason that I don't see?

--
Daniel Gustafsson

#406Andres Freund
andres@anarazel.de
In reply to: Daniel Gustafsson (#405)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-07-01 00:52:49 +0200, Daniel Gustafsson wrote:

On 30 Jun 2025, at 20:33, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Jun 30, 2025 at 10:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 30 Jun 2025, at 18:58, Andres Freund <andres@anarazel.de> wrote:
Probably just needs to be added to the installed_targets list.

Thanks for the report, I'll take a look today to get it fixed.

Thanks both!

Looking at the installed_targets stuff, though... why do we use `meson
install --no-rebuild` in combination with `depends:
installed_targets`? Can't we just use Meson's dependency tracking
during installation, and avoid this hazard?

I don't think that's really possible - the dependency tracking is useful to
generate granular *rebuild* information, but doesn't help with the first
build.

If we had dependency generation for the install target it could be helpful to
discover missing dependencies though.

I suspect it is because without --no-rebuild the quiet target isn't entirely
quiet.

No - the issue is that you're not allowed to run ninja while ninja is running,
as that would corrupt its tracking (and build things multiple times). meson
install without --no-rebuild would run ninja to build things...

Still, I was unable to make something that works in all build combinations
while keeping --no-rebuild (which isn't to say it's impossible).

Hm, what problem did you encounter? I don't think there should be any
difficulty?

Greetings,

Andres Freund

#407Ivan Kush
ivan.kush@tantorlabs.com
In reply to: Jacob Champion (#401)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Thanks for the clarification! I thought the linker flags were supposed to
be set globally for all compilation targets.

Another question:

Why don't we set LIBS in configure's "checking for curl_multi_init" test
using LIBCURL_LIBS or LIBCURL_LDFLAGS?
https://github.com/postgres/postgres/blob/master/configure#L12734

Like this:
    LIBS="$(LIBCURL_LDFLAGS) $(LIBCURL_LDLIBS)"

And set LIBS with -lcurl.

As I understand it, we need to check the properties of the libcurl we are
compiling with. It may be some local libcurl from /opt/my_libcurl, so the
LIBCURL_... variables may contain a flag like -L/opt/my_libcurl. Without
these LIBCURL_... variables we will check the system libcurl, not our
local one.

I mean, why don't we set LIBS there? The current *configure* has:

$as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
....
else
  ac_check_lib_save_LIBS=$LIBS
LIBS="-lcurl  $LIBS"
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h.  */

https://github.com/postgres/postgres/blob/master/configure#L12734

For example, I've logged the flags after this code sample, and they don't
contain -L/opt/my_libcurl:

    IVK configure:13648: CFLAGS=-Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -fexcess-precision=standard -pipe -O2
    IVK configure:13649: LDFLAGS=-Wl,-z,relro -Wl,-z,now -flto=auto
-ffat-lto-objects -L/usr/lib/llvm-10/lib -L/usr/local/lib/zstd
    IVK configure:13650: LIBS=-lcurl  -lz -lreadline -lpthread -lrt
-ldl -lm
    IVK configure:13651: LDLIBS=

On 25-06-23 18:32, Jacob Champion wrote:

On Fri, Jun 20, 2025 at 3:08 AM Ivan Kush <ivan.kush@tantorlabs.com> wrote:

Hello!

This patch fixes CPPFLAGS, LDFLAGS, LIBS when checking AsyncDNS libcurl
support in configure

Hi Ivan, thanks for the report! Your patch puts new logic directly
after an AC_MSG_ERROR() call, so any effect has to come from the fact
that we're no longer restoring the old compiler and linker flags.
That's not what we want -- Curl needs to be isolated from the rest of
the build.

Let's focus on the error you're seeing:

After compilation, during testing, some Postgres shared libraries and
binaries that were linked with libcurl showed the error "version
CURL_OPENSSL_3 not found (required by …/libcurl.so.4)"

What's your configure line? You need to make sure that your custom
libcurl is used at configure-time, compile-time, and run-time.

And which binaries are complaining? The only thing that should ever be
linked against libcurl is libpq-oauth-18.so.

Thanks,
--Jacob

--
Best wishes,
Ivan Kush
Tantor Labs LLC

#408Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Ivan Kush (#407)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 2, 2025 at 5:45 AM Ivan Kush <ivan.kush@tantorlabs.com> wrote:

Thanks for the clarification! I thought linker flags should be installed
globally for all compilation targets.

Not for libcurl, since the libpq-oauth module split.

Another question:

Why don't we set LIBS in the configure in "checking for curl_multi_init"
using LIBCURL_LIBS or LIBCURL_LDFLAGS?
[...]
Without these LIBCURL... variables we will check a system libcurl, not
our local.

Ah, that's definitely a bug. I've tested alternate PKG_CONFIG_PATHs,
but I haven't regularly tested on systems that have no system libcurl
at all. So those header and lib checks need to be moved after the use
of LIBCURL_CPPFLAGS and LIBCURL_LDFLAGS to prevent a false failure.
Otherwise they're only useful for the LIBCURL_LDLIBS assignment.

I wonder if I should just get rid of those to better match the Meson
implementation... but the error messages from the checks will likely
be nicer than compilation failures during the later test programs. Hm.

(Thanks for the report!)

--Jacob

#409Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#408)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

On Wed, Jul 2, 2025 at 5:45 AM Ivan Kush <ivan.kush@tantorlabs.com> wrote:

Why don't we set LIBS in the configure in "checking for curl_multi_init"
using LIBCURL_LIBS or LIBCURL_LDFLAGS?
[...]
Without these LIBCURL... variables we will check a system libcurl, not
our local.

Ah, that's definitely a bug.

I just ran into a vaguely-related failure: on RHEL8, building
with --with-libcurl leads to failures during check-world:

../../../../src/interfaces/libpq/libpq.so: undefined reference to `dlopen'
../../../../src/interfaces/libpq/libpq.so: undefined reference to `dlclose'
../../../../src/interfaces/libpq/libpq.so: undefined reference to `dlerror'
../../../../src/interfaces/libpq/libpq.so: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status

Per "man dlopen", you have to link with libdl to use these functions
on this platform. (Curiously, although RHEL9 still says that in the
documentation, it doesn't seem to actually need -ldl.) I was able
to resolve this by adding -ldl in libpq's Makefile:

-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -ldl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
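For context, GNU make's $(filter ...) keeps only the flags matching the whitelist pattern, so a library missing from the pattern (here -ldl) is silently dropped from the link line. A small sketch with a hypothetical makefile and LIBS value:

```shell
# Demo makefile; .RECIPEPREFIX avoids literal-tab recipes (GNU make >= 3.82).
cat > /tmp/filter_demo.mk <<'EOF'
.RECIPEPREFIX = >
LIBS = -lcurl -lz -ldl -lm
old := $(filter -lssl -lm, $(LIBS))
new := $(filter -lssl -ldl -lm, $(LIBS))
all:
> @echo "old filter: $(old)"
> @echo "new filter: $(new)"
EOF
make -s -f /tmp/filter_demo.mk
```

With the old pattern, -ldl never reaches SHLIB_LINK even when $(LIBS) contains it; adding -ldl to the pattern lets it through.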

It doesn't look like the Meson support needs such explicit tracking of
required libraries, but perhaps I'm missing something? I'm not able
to test that directly for lack of a usable ninja version on this
platform.

Apologies for not noticing this sooner. I don't think I'd tried
--with-libcurl since the changes to split out libpq-oauth.

regards, tom lane

#410Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#409)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-07-09 13:36:26 -0400, Tom Lane wrote:

It doesn't look like the Meson support needs such explicit tracking of
required libraries, but perhaps I'm missing something?

It should be fine, -ldl is added to "os_deps" if needed, and os_deps is used
for all code in pg.

Greetings,

Andres Freund

#411Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#409)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 9, 2025 at 10:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Per "man dlopen", you have to link with libdl to use these functions
on this platform. (Curiously, although RHEL9 still says that in the
documentation, it doesn't seem to actually need -ldl.) I was able
to resolve this by adding -ldl in libpq's Makefile:

-SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)
+SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lgssapi_krb5 -lgss -lgssapi -lssl -lsocket -lnsl -lresolv -lintl -ldl -lm, $(LIBS)) $(LDAP_LIBS_FE) $(PTHREAD_LIBS)

Hmm, okay. That analysis and fix look good to me. (It looks like none
of the RHEL animals are testing with Curl yet, and locally I was using
Rocky 9...)

I'll work up a patch to send through the CI. I can't currently test
RHEL8 easily -- Rocky 8 is incompatible with my Macbook? -- which I
will need to rectify eventually, but I can't this week.

Thanks!
--Jacob

#412Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#411)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

I'll work up a patch to send through the CI. I can't currently test
RHEL8 easily -- Rocky 8 is incompatible with my Macbook? -- which I
will need to rectify eventually, but I can't this week.

No need, I already tested locally. Will push shortly.

regards, tom lane

#413Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#412)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 9, 2025 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Jacob Champion <jacob.champion@enterprisedb.com> writes:

I'll work up a patch to send through the CI. I can't currently test
RHEL8 easily -- Rocky 8 is incompatible with my Macbook? -- which I
will need to rectify eventually, but I can't this week.

No need, I already tested locally. Will push shortly.

Thank you very much!

Here is a draft patch for Ivan's reported issue; I still need to put
it through its paces with some more unusual setups, but I want to get
cfbot on it.

--Jacob

Attachments:

WIP-oauth-run-Autoconf-tests-with-correct-compiler-f.patchapplication/octet-stream; name=WIP-oauth-run-Autoconf-tests-with-correct-compiler-f.patchDownload
From 2186e74f79a1dea452f1e25b70e1e7bfdde72d8f Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 9 Jul 2025 11:33:30 -0700
Subject: [PATCH] WIP: oauth: run Autoconf tests with correct compiler flags

Reported-by: Ivan Kush <ivan.kush@tantorlabs.com>
Discussion: https://postgr.es/m/8a611028-51a1-408c-b592-832e2e6e1fc9%40tantorlabs.com
Backpatch-through: 18
---
 config/programs.m4 | 15 +++++++++------
 configure          | 15 +++++++++------
 2 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 0ad1e58b48d..b667aec4458 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -284,6 +284,15 @@ AC_DEFUN([PGAC_CHECK_STRIP],
 
 AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
+  # libcurl compiler/linker flags are kept separate from the global flags, so
+  # they have to be added back temporarily for the following tests.
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
   AC_CHECK_LIB(curl, curl_multi_init, [
@@ -292,12 +301,6 @@ AC_DEFUN([PGAC_CHECK_LIBCURL],
 			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
-  pgac_save_CPPFLAGS=$CPPFLAGS
-  pgac_save_LDFLAGS=$LDFLAGS
-  pgac_save_LIBS=$LIBS
-
-  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
-  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
   LIBS="$LIBCURL_LDLIBS $LIBS"
 
   # Check to see whether the current platform supports threadsafe Curl
diff --git a/configure b/configure
index cfaf3757dd7..cf54332a799 100755
--- a/configure
+++ b/configure
@@ -12717,6 +12717,15 @@ fi
 
 if test "$with_libcurl" = yes ; then
 
+  # libcurl compiler/linker flags are kept separate from the global flags, so
+  # they have to be added back temporarily for the following tests.
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
 if test "x$ac_cv_header_curl_curl_h" = xyes; then :
 
@@ -12774,12 +12783,6 @@ else
 fi
 
 
-  pgac_save_CPPFLAGS=$CPPFLAGS
-  pgac_save_LDFLAGS=$LDFLAGS
-  pgac_save_LIBS=$LIBS
-
-  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
-  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
   LIBS="$LIBCURL_LDLIBS $LIBS"
 
   # Check to see whether the current platform supports threadsafe Curl
-- 
2.34.1

#414Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#413)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

Here is a draft patch for Ivan's reported issue; I still need to put
it through its paces with some more unusual setups, but I want to get
cfbot on it.

I'm confused about why this moves up the temporary changes of
CPPFLAGS and LDFLAGS, but not LIBS? Maybe that's actually correct,
but it looks strange (and perhaps deserves a comment about why).

regards, tom lane

#415Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#414)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 9, 2025 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Jacob Champion <jacob.champion@enterprisedb.com> writes:

Here is a draft patch for Ivan's reported issue; I still need to put
it through its paces with some more unusual setups, but I want to get
cfbot on it.

I'm confused about why this moves up the temporary changes of
CPPFLAGS and LDFLAGS, but not LIBS? Maybe that's actually correct,
but it looks strange (and perhaps deserves a comment about why).

Yeah, that's fair. It's because LIBCURL_LDLIBS isn't set until that
AC_CHECK_LIB test is run, and the test needs LIBCURL_LDFLAGS to be in
force.

(Upthread, I was idly wondering if those AC_CHECKs should just be
removed -- after all, PKG_CHECK_MODULES just told us where Curl was --
but I'm nervous that this might make more niche use cases like
cross-compilation harder to use in practice?)

Does the attached help clarify?

Thanks,
--Jacob

Attachments:

v2-0001-WIP-oauth-run-Autoconf-tests-with-correct-compile.patchapplication/octet-stream; name=v2-0001-WIP-oauth-run-Autoconf-tests-with-correct-compile.patchDownload
From 59d6df6fca487622ddb64b219b721f9990ac5809 Mon Sep 17 00:00:00 2001
From: Jacob Champion <jacob.champion@enterprisedb.com>
Date: Wed, 9 Jul 2025 11:33:30 -0700
Subject: [PATCH v2] WIP: oauth: run Autoconf tests with correct compiler flags

Reported-by: Ivan Kush <ivan.kush@tantorlabs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/8a611028-51a1-408c-b592-832e2e6e1fc9%40tantorlabs.com
Backpatch-through: 18
---
 config/programs.m4 | 18 ++++++++++++------
 configure          | 18 ++++++++++++------
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/config/programs.m4 b/config/programs.m4
index 0ad1e58b48d..c73d9307ea8 100644
--- a/config/programs.m4
+++ b/config/programs.m4
@@ -284,20 +284,26 @@ AC_DEFUN([PGAC_CHECK_STRIP],
 
 AC_DEFUN([PGAC_CHECK_LIBCURL],
 [
+  # libcurl compiler/linker flags are kept separate from the global flags, so
+  # they have to be added back temporarily for the following tests.
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+
   AC_CHECK_HEADER(curl/curl.h, [],
 				  [AC_MSG_ERROR([header file <curl/curl.h> is required for --with-libcurl])])
+
+  # LIBCURL_LDLIBS is determined here. Like the compiler flags, it should not
+  # pollute the global LIBS setting.
   AC_CHECK_LIB(curl, curl_multi_init, [
 				 AC_DEFINE([HAVE_LIBCURL], [1], [Define to 1 if you have the `curl' library (-lcurl).])
 				 AC_SUBST(LIBCURL_LDLIBS, -lcurl)
 			   ],
 			   [AC_MSG_ERROR([library 'curl' does not provide curl_multi_init])])
 
-  pgac_save_CPPFLAGS=$CPPFLAGS
-  pgac_save_LDFLAGS=$LDFLAGS
-  pgac_save_LIBS=$LIBS
-
-  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
-  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
   LIBS="$LIBCURL_LDLIBS $LIBS"
 
   # Check to see whether the current platform supports threadsafe Curl
diff --git a/configure b/configure
index cfaf3757dd7..6d7c22e153f 100755
--- a/configure
+++ b/configure
@@ -12717,6 +12717,15 @@ fi
 
 if test "$with_libcurl" = yes ; then
 
+  # libcurl compiler/linker flags are kept separate from the global flags, so
+  # they have to be added back temporarily for the following tests.
+  pgac_save_CPPFLAGS=$CPPFLAGS
+  pgac_save_LDFLAGS=$LDFLAGS
+  pgac_save_LIBS=$LIBS
+
+  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
+  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
+
   ac_fn_c_check_header_mongrel "$LINENO" "curl/curl.h" "ac_cv_header_curl_curl_h" "$ac_includes_default"
 if test "x$ac_cv_header_curl_curl_h" = xyes; then :
 
@@ -12725,6 +12734,9 @@ else
 fi
 
 
+
+  # LIBCURL_LDLIBS is determined here. Like the compiler flags, it should not
+  # pollute the global LIBS setting.
   { $as_echo "$as_me:${as_lineno-$LINENO}: checking for curl_multi_init in -lcurl" >&5
 $as_echo_n "checking for curl_multi_init in -lcurl... " >&6; }
 if ${ac_cv_lib_curl_curl_multi_init+:} false; then :
@@ -12774,12 +12786,6 @@ else
 fi
 
 
-  pgac_save_CPPFLAGS=$CPPFLAGS
-  pgac_save_LDFLAGS=$LDFLAGS
-  pgac_save_LIBS=$LIBS
-
-  CPPFLAGS="$LIBCURL_CPPFLAGS $CPPFLAGS"
-  LDFLAGS="$LIBCURL_LDFLAGS $LDFLAGS"
   LIBS="$LIBCURL_LDLIBS $LIBS"
 
   # Check to see whether the current platform supports threadsafe Curl
-- 
2.34.1

#416Tom Lane
tgl@sss.pgh.pa.us
In reply to: Jacob Champion (#415)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Jacob Champion <jacob.champion@enterprisedb.com> writes:

On Wed, Jul 9, 2025 at 12:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

I'm confused about why this moves up the temporary changes of
CPPFLAGS and LDFLAGS, but not LIBS? Maybe that's actually correct,
but it looks strange (and perhaps deserves a comment about why).

Does the attached help clarify?

Yes, thanks.

(Upthread, I was idly wondering if those AC_CHECKs should just be
removed -- after all, PKG_CHECK_MODULES just told us where Curl was --
but I'm nervous that this might make more niche use cases like
cross-compilation harder to use in practice?)

Nah, let's keep them. We do document for at least some libraries
how to manually specify the include and link options without
depending on pkg-config. If someone tries that with libcurl,
it'd be good to have sanity checks on the results.

regards, tom lane

#417Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Tom Lane (#416)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Wed, Jul 9, 2025 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Nah, let's keep them. We do document for at least some libraries
how to manually specify the include and link options without
depending on pkg-config. If someone tries that with libcurl,
it'd be good to have sanity checks on the results.

Sounds good, thanks for the review!

On Wed, Jul 9, 2025 at 11:39 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Here is a draft patch for Ivan's reported issue; I still need to put
it through its paces with some more unusual setups, but I want to get
cfbot on it.

On HEAD, Rocky 9 fails to build with a custom Curl PKG_CONFIG_PATH and
no libcurl-devel installed. With this patch, that build now succeeds,
and it still succeeds after libcurl-devel is reinstalled, with the
compiler tests continuing to use the custom libcurl and not the
system's.

So I'll give Ivan a little time in case he'd like to test/review
again, but otherwise I plan to push it this week.

Thanks,
--Jacob

#418Ivan Kush
ivan.kush@tantorlabs.com
In reply to: Jacob Champion (#417)
Re: [PoC] Federated Authn/z with OAUTHBEARER

I agree with the patch. It works on my OSes.

On 7/10/25 2:54 AM, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Wed, Jul 9, 2025 at 12:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:

Nah, let's keep them. We do document for at least some libraries
how to manually specify the include and link options without
depending on pkg-config. If someone tries that with libcurl,
it'd be good to have sanity checks on the results.

Sounds good, thanks for the review!

On Wed, Jul 9, 2025 at 11:39 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Here is a draft patch for Ivan's reported issue; I still need to put
it through its paces with some more unusual setups, but I want to get
cfbot on it.

On HEAD, Rocky 9 fails to build with a custom Curl PKG_CONFIG_PATH and
no libcurl-devel installed. With this patch, that build now succeeds,
and it still succeeds after libcurl-devel is reinstalled, with the
compiler tests continuing to use the custom libcurl and not the
system's.

So I'll give Ivan a little time in case he'd like to test/review
again, but otherwise I plan to push it this week.

Thanks,
--Jacob

#419Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Noname (#418)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Thu, Jul 10, 2025 at 7:41 AM <ivan.kush@tantorlabs.com> wrote:

I agree with the patch. Works in my OSes

Thanks Ivan! Committed.

--Jacob

#420Andres Freund
andres@anarazel.de
In reply to: Andres Freund (#406)
Re: [PoC] Federated Authn/z with OAUTHBEARER

Hi,

On 2025-06-30 19:42:51 -0400, Andres Freund wrote:

On 2025-07-01 00:52:49 +0200, Daniel Gustafsson wrote:

On 30 Jun 2025, at 20:33, Jacob Champion <jacob.champion@enterprisedb.com> wrote:

On Mon, Jun 30, 2025 at 10:02 AM Daniel Gustafsson <daniel@yesql.se> wrote:

On 30 Jun 2025, at 18:58, Andres Freund <andres@anarazel.de> wrote:
Probably just needs to be added to the installed_targets list.

Thanks for the report, I'll take a look today to get it fixed.

Thanks both!

Looking at the installed_targets stuff, though... why do we use `meson
install --no-rebuild` in combination with `depends:
installed_targets`? Can't we just use Meson's dependency tracking
during installation, and avoid this hazard?

I don't think that's really possible - the dependency tracking is useful to
generate granular *rebuild* information, but doesn't help with the first
build.

If we had dependency generation for the install target it could be helpful to
discover missing dependencies though.

I suspect it is because without --no-rebuild the quiet target isn't entirely
quiet.

No - the issue is that you're not allowed to run ninja while ninja is running,
as that would corrupt its tracking (and build things multiple times). meson
install without --no-rebuild would run ninja to build things...

Still, I was unable to make something that works in all build combinations
while keeping --no-rebuild (though that isn't proof it's impossible to
do).

Hm, what problem did you encounter? I don't think there should be any
difficulty?

Ping?

Greetings,

Andres Freund

#421Daniel Gustafsson
daniel@yesql.se
In reply to: Andres Freund (#420)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On 18 Jul 2025, at 19:26, Andres Freund <andres@anarazel.de> wrote:

Hm, what problem did you encounter? I don't think there should be any
difficulty?

Ping?

Ugh, in preparing to go on vacation this fell off the radar. I'll try to
get to looking at it tomorrow during downtime unless beaten to it.

--
Daniel Gustafsson

#422Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Daniel Gustafsson (#421)
1 attachment(s)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jul 18, 2025 at 3:29 PM Daniel Gustafsson <daniel@yesql.se> wrote:

Ugh, In preparing for going on vacation this fell off the radar. I'll try to
get to looking at it tomorrow during downtime unless beaten to it.

Your earlier mail made me worried I'd missed something, but is the
attached diff what Andres was asking for? A `ninja clean; ninja
install-quiet` now works for me with this applied.

--Jacob

Attachments:

fix_installed_targets.diff (application/octet-stream)
diff --git a/meson.build b/meson.build
index 5365aaf95e6..28706bcf3bb 100644
--- a/meson.build
+++ b/meson.build
@@ -3469,6 +3469,8 @@ installed_targets = [
   backend_targets,
   bin_targets,
   libpq_st,
+  libpq_oauth_so,
+  libpq_oauth_st,
   pl_targets,
   contrib_targets,
   nls_mo_targets,
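
For readers following along, here is a hedged sketch of why a target missing
from installed_targets breaks `ninja install-quiet` (the variable and command
names are illustrative, not PostgreSQL's exact build code): install-quiet runs
`meson install --no-rebuild`, so anything it should install must already have
been built via the `depends:` list.

```meson
# Sketch only. Because --no-rebuild skips ninja at install time, every
# installable artifact must appear in installed_targets so that the
# run_target's depends: list builds it first.
installed_targets = [
  backend_targets,
  bin_targets,
  libpq_st,
  libpq_oauth_so,   # the fix: the delay-loaded OAuth module...
  libpq_oauth_st,   # ...and its static-library form must be listed too
]

run_target('install-quiet',
  command: ['meson', 'install', '--quiet', '--no-rebuild'],
  depends: installed_targets,
)
```

A target left out of the list builds fine under plain `ninja install` (which
rebuilds as needed) but fails under install-quiet, which is why the omission
went unnoticed at first.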
#423Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#374)
libpq-oauth: a mid-beta naming check

On Wed, Apr 30, 2025 at 10:59 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

I still see the choice of naming (with its forced-ABI break
every major version) as needing more scrutiny, and probably worth a
Revisit entry.

It is now time to revisit.

= The Status Quo =

The libpq-oauth module is loaded on-demand, during the first use of
OAuth authentication, so users who don't want the behavior don't have
to install it. This module is named "libpq-oauth-18.so" for the PG18
release. So libpq v18 will always load the 18 OAuth behavior, libpq
v19 will load the 19 OAuth behavior, etc. Builds on HEAD have already
switched to -19, which is not yet any different from -18.

The internal API injects some libpq internals into the libpq-oauth
module. The ABI for this is assumed to break during each major version
release, so I don't have to watch the boundary like a hawk, and other
maintainers hopefully won't be saddled with breakage reports if I get
hit by a bus. (This is another advantage to using the -MAJOR naming
scheme.) And pg_conn in particular is given more protections: we can
still change its member offsets in minor versions without any ABI
breakage.

During major-version upgrades, if a packager doesn't provide a
side-by-side installation of the -18 and -19 modules, there is a
hazard: an already-loaded v18 libpq might find that the -18 module no
longer exists on disk, which would require a restart of the affected
application to pick up the v19 libpq. This is not really a consequence
of the -MAJOR naming scheme -- it's a consequence of delay-loaded
libraries that go through an ABI version bump -- but the naming scheme
makes the problem extremely visible.

The annoying part is that, if 19 doesn't change anything in the OAuth
flow compared to 18, I will basically have made busywork for our
packagers for no reason. But my goal for v19 is to replace the
internally coupled API with a public API, so that users can swap in
their own flows for use with our utilities. As far as I know, that
work necessarily includes designing a stable ABI and figuring out a
trusted place that users can put their plugins into. If we can do
both, I think we can get rid of the -MAJOR versioning scheme entirely,
because our use case will have been subsumed by the more general
framework.

So, as we approach Beta 3: can anyone think of a way that this plan will fail?

Thanks,
--Jacob

#424Jelte Fennema-Nio
postgres@jeltef.nl
In reply to: Jacob Champion (#423)
Re: libpq-oauth: a mid-beta naming check

On Tue, 5 Aug 2025 at 01:20, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

As far as I know, that
work necessarily includes designing a stable ABI and figuring out a
trusted place that users can put their plugins into. If we can do
both, I think we can get rid of the -MAJOR versioning scheme entirely,
because our use case will have been subsumed by the more general
framework.

So, as we approach Beta 3: can anyone think of a way that this plan will fail?

It's not entirely clear exactly what plan you're talking about here. Are
you saying you want to remove the -MAJOR suffix now, for PG18? Or do you
want to postpone that until PG19, once you've designed a stable API?

Based on my current understanding of what you wrote, I think the second
option makes sense, and the first seems sketchy, because we don't know
yet what the PG19 API will look like.

Also, the breakage during libpq major upgrades that you describe, while
unfortunate, doesn't seem completely terrible. It only affects people
updating system packages in place, which (in my experience) has become a
minority of production setups, and it will obviously only affect OAuth
users, of whom I expect there won't be many right away. If your goal is
to remove this during-upgrade breakage after PG19, then I'd say that
seems totally fine for a new feature.

#425Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jelte Fennema-Nio (#424)
Re: libpq-oauth: a mid-beta naming check

On Tue, Aug 5, 2025 at 2:39 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

On Tue, 5 Aug 2025 at 01:20, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

So, as we approach Beta 3: can anyone think of a way that this plan will fail?

It's not entirely clear what plan exactly you talk about here. Are you
saying you want to remove the -MAJOR suffix now for PG18? Or you want
to postpone doing that until PG19, when you would have designed a
stable API?

That is a PG19 plan. I don't want to make any changes for 18 unless
someone can see a fatal flaw; this is just my mid-beta check.

If your goal is to remove this
during-upgrade breakage after PG19, then I'd say that seems totally
fine for a new feature.

That's the hope, yes.

Thanks!
--Jacob

#426Christoph Berg
myon@debian.org
In reply to: Jacob Champion (#425)
Re: libpq-oauth: a mid-beta naming check

Re: Jacob Champion

On Tue, Aug 5, 2025 at 2:39 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:

On Tue, 5 Aug 2025 at 01:20, Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

So, as we approach Beta 3: can anyone think of a way that this plan will fail?

It's not entirely clear what plan exactly you talk about here. Are you
saying you want to remove the -MAJOR suffix now for PG18? Or you want
to postpone doing that until PG19, when you would have designed a
stable API?

That is a PG19 plan. I don't want to make any changes for 18 unless
someone can see a fatal flaw; this is just my mid-beta check.

FTR, fine with me.

If your goal is to remove this
during-upgrade breakage after PG19, then I'd say that seems totally
fine for a new feature.

That's the hope, yes.

Well, by the time PG19 is released it will no longer be new. But the plan
sounds like a good one.

Christoph

#427Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#422)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Fri, Jul 18, 2025 at 4:31 PM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Your earlier mail made me worried I'd missed something, but is the
attached diff what Andres was asking for? A `ninja clean; ninja
install-quiet` now works for me with this applied.

Ping. I'll plan to commit this by the beta3 cutoff but it'd be nice to
verify that I'm not missing something obvious. :D

--Jacob

#428Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Christoph Berg (#426)
Re: libpq-oauth: a mid-beta naming check

On Tue, Aug 5, 2025 at 8:44 AM Christoph Berg <myon@debian.org> wrote:

Well, at the PG19 release time it will no longer be new. But the plan
sounds like a good one.

Excellent. Thanks everybody!

--Jacob

#429Jacob Champion
jacob.champion@enterprisedb.com
In reply to: Jacob Champion (#427)
Re: [PoC] Federated Authn/z with OAUTHBEARER

On Tue, Aug 5, 2025 at 11:54 AM Jacob Champion
<jacob.champion@enterprisedb.com> wrote:

Ping. I'll plan to commit this by the beta3 cutoff but it'd be nice to
verify that I'm not missing something obvious. :D

(Committed yesterday.)

I wonder if there's a Meson feature request in here somewhere, to be
able to compose targets from other builtin targets without having to
duplicate the internal bookkeeping.

--Jacob